Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research, presented or published within the past year, on topics of interest. Some entries update work presented in previous years; others address new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#:16-9546)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modifications to specific citations via email to news@scienceofsecurity.net. Please include the ID# of the specific citation in your correspondence.
Autonomic Security 2015
Autonomic computing refers to the self-management of complex distributed computing resources that can adapt to unpredictable changes with transparency to operators and users. Security is one of the four key elements of autonomic computing and includes proactive identification and protection from arbitrary attacks. The articles cited here describe research into the security problems associated with a variety of autonomic systems and were published in 2015. Topics include autonomic security regarding vulnerability assessments, intelligent sensors, encryption, services, and the Internet of Things.
Harshe, O.A.; Teja Chiluvuri, N.; Patterson, C.D.; Baumann, W.T., "Design and Implementation of a Security Framework for Industrial Control Systems," in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, pp. 127-132, 28-30 May 2015. doi: 10.1109/IIC.2015.7150724
Abstract: We address the problems of network and reconfiguration attacks on an industrial control system (ICS) by describing a trustworthy autonomic interface guardian architecture (TAIGA) that provides security against attacks originating from both supervisory and plant control nodes. In contrast to the existing security techniques which attempt to bolster perimeter security at supervisory levels, TAIGA physically isolates trusted defense mechanisms from untrusted components and monitors the physical process to detect an attack. Trusted components in TAIGA are implemented in programmable logic (PL). Our implementation of TAIGA integrates a trusted safety-preserving backup controller, and a mechanism for preemptive switching to a backup controller when an attack is detected. A hardware implementation of our approach on an inverted pendulum system illustrates how TAIGA improves resilience against software reconfiguration and network attacks.
Keywords: control engineering computing; industrial control; nonlinear systems; pendulums; production engineering computing; programmable controllers; software engineering; switching systems (control); trusted computing; ICS; TAIGA; industrial control system; inverted pendulum system; network attack; perimeter security; plant control node; preemptive switching; programmable logic; reconfiguration attack; security framework; security technique; software reconfiguration; supervisory control node; supervisory level; trusted defense mechanism; trusted safety-preserving backup controller; trustworthy autonomic interface guardian architecture; untrusted component; Production; Safety; Security; Sensors; Servomotors; Switches (ID#: 15-8185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150724&isnumber=7150576
Mulcahy, J.J.; Shihong Huang, "An Autonomic Approach to Extend the Business Value of a Legacy Order Fulfillment System," in Systems Conference (SysCon), 2015 9th Annual IEEE International, pp. 595-600, 13-16 April 2015. doi: 10.1109/SYSCON.2015.7116816
Abstract: In the modern retailing industry, many enterprise resource planning (ERP) systems are considered legacy software systems that have become too expensive to replace and too costly to re-engineer. Countering the need to maintain and extend the business value of these systems is the need to do so in the simplest, cheapest, and least risky manner available. There are a number of approaches used by software engineers to mitigate the negative impact of evolving a legacy system, including leveraging service-oriented architecture to automate manual tasks previously performed by humans. A relatively recent approach in software engineering focuses upon implementing self-managing attributes, or “autonomic” behavior, in software applications and systems of applications in order to reduce or eliminate the need for human monitoring and intervention. Entire systems can be autonomic, or they can be hybrid systems that implement one or more autonomic components to communicate with external systems. In this paper, we describe a commercial development project in which a legacy multi-channel commerce enterprise resource planning system was extended with service-oriented architecture and an autonomic control loop design to communicate with an external third-party security screening provider. The goal was to reduce the cost of the human labor necessary to screen an ever-increasing volume of orders and to reduce the potential for human error in the screening process. The solution automated what was previously an inefficient, incomplete, and potentially error-prone manual process by inserting a new autonomic software component into the existing order fulfillment workflow.
Keywords: enterprise resource planning; service-oriented architecture; software maintenance; ERP systems; autonomic approach; autonomic behavior; autonomic control loop design; autonomic software component; business value; error-prone manual process; human error; human monitoring; hybrid systems; legacy multichannel commerce enterprise resource planning system; legacy order fulfillment system; legacy software systems; order fulfillment workflow; retailing industry; service-oriented architecture; software applications; software engineering; third party security screening provider; Business; Complexity theory; Databases; Manuals; Monitoring; Software systems; autonomic computing; legacy software systems; self-adaptive systems; self-managing systems; service-oriented architecture; software evolution; software maintenance; systems interoperability; systems of systems (ID#: 15-8186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116816&isnumber=7116715
Boussard, M.; Dinh Thai Bui; Ciavaglia, L.; Douville, R.; Le Pallec, M.; Le Sauze, N.; Noirie, L.; Papillon, S.; Peloso, P.; Santoro, F., "Software-Defined LANs for Interconnected Smart Environment," in Teletraffic Congress (ITC 27), 2015 27th International, pp. 219-227, 8-10 Sept. 2015. doi: 10.1109/ITC.2015.33
Abstract: In this paper, we propose a solution to delegate the control and the management of the network connecting the many devices of a smart environment to a software entity, while keeping end-users in control of what is happening in their networks. For this, we rely on the logical manipulation of all connected devices through device abstraction and network programmability. Applying Software Defined Networking (SDN) principles, we propose a software-based solution that we call Software-Defined LANs in order to interconnect devices of smart environments according to the services the users are requesting or expecting. We define the adequate virtualization framework based on Virtual Objects and Communities of Virtual Objects. Using these virtual entities, we apply the SDN architectural principles to define a generic architecture that can be applied to any smart environment. Then we describe a prototype implementing these concepts in the home networking context, through a scenario in which users of two different homes can easily interconnect two private but shareable DLNA devices in a dedicated video-delivery SD-LAN. Finally we provide a discussion of the benefits and challenges of our approach regarding the generalization of SDN principles, autonomic features, Internet of Things scalability, security and privacy aspects enabled by SD-LANs intrinsic properties.
Keywords: Internet of Things; computer network management; computer network security; data privacy; home networks; local area networks; software defined networking; virtualisation; DLNA devices; Internet-of-things scalability aspect; SDN architectural principles; autonomic features; device abstraction; home networking context; interconnected smart environment; network control; network management; network programmability; privacy aspect; security aspect; software defined networking principles; software entity; software-based solution; software-defined LAN; virtual objects; virtualization framework; Avatars; Computer architecture; Context; Home automation; Security; Software; Virtualization (ID#: 15-8187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7277446&isnumber=7277413
Tunc, C.; Hariri, S.; De La Peña Montero, F.; Fargo, F.; Satam, P., "CLaaS: Cybersecurity Lab as a Service -- Design, Analysis, and Evaluation," in Cloud and Autonomic Computing (ICCAC), 2015 International Conference on, pp. 224-227, 21-25 Sept. 2015. doi: 10.1109/ICCAC.2015.34
Abstract: The explosive growth of IT infrastructures, cloud systems, and the Internet of Things (IoT) has resulted in complex systems that are extremely difficult to secure and protect against cyberattacks, which are growing exponentially in both complexity and number. Overcoming these cybersecurity challenges requires environments that support the development of innovative cybersecurity algorithms and the evaluation of experiments. In this paper, we present the design, analysis, and evaluation of the Cybersecurity Lab as a Service (CLaaS), which offers virtual cybersecurity experiments as a cloud service that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. We exploit cloud computing systems and virtualization technologies to provide isolated and virtual cybersecurity experiments for vulnerability exploitation, launching cyberattacks, hardening cyber resources and services, etc. We also present our performance evaluation and the effectiveness of CLaaS experiments as used by students.
Keywords: cloud computing; security of data; virtualisation; CLaaS; cloud computing system; cybersecurity lab as a service; virtual cybersecurity; virtualization technology; Cloud computing; Computer crime; IP networks; Servers; Virtualization; CLaaS; cybersecurity; education; virtual lab; virtualization (ID#: 15-8188)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312161&isnumber=7312127
Stephen, J.J.; Gmach, D.; Block, R.; Madan, A.; AuYoung, A., "Distributed Real-Time Event Analysis," in Autonomic Computing (ICAC), 2015 IEEE International Conference on, pp. 11-20, 7-10 July 2015. doi: 10.1109/ICAC.2015.12
Abstract: Security Information and Event Management (SIEM) systems perform complex event processing over a large number of event streams at high rate. As event streams increase in volume and event processing becomes more complex, traditional approaches such as scaling up to more powerful systems quickly become ineffective. This paper describes the design and implementation of DRES, a distributed, rule-based event evaluation system that can easily scale to process a large volume of non-trivial events. DRES intelligently forwards events across a cluster of nodes to evaluate complex correlation and aggregation rules. This approach enables DRES to work with any rules engine implementation. Our evaluation shows DRES scales linearly to more than 16 nodes. At this size it successfully processed more than half a million events per second.
Keywords: distributed processing; security of data; SIEM system; aggregation rule; complex event processing; correlation rule; distributed realtime event analysis; distributed rule-based event evaluation system; security information and event management system; Connectors; Correlation; Data structures; Engines; Real-time systems; Servers; Throughput; Distributed event analysis; enterprise security (ID#: 15-8189)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266930&isnumber=7266915
Tunc, C.; Hariri, S.; De La Pena Montero, F.; Fargo, F.; Satam, P.; Al-Nashif, Y., "Teaching and Training Cybersecurity as a Cloud Service," in Cloud and Autonomic Computing (ICCAC), 2015 International Conference on, pp. 302-308, 21-25 Sept. 2015. doi: 10.1109/ICCAC.2015.47
Abstract: The explosive growth of IT infrastructures, cloud systems, and the Internet of Things (IoT) has resulted in complex systems that are extremely difficult to secure and protect against cyberattacks, which are growing exponentially in complexity and in number. Overcoming the cybersecurity challenges is even more complicated due to the lack of training and of widely available cybersecurity environments to experiment with and evaluate new cybersecurity methods. The goal of our research is to address these challenges by exploiting cloud services. In this paper, we present the design, analysis, and evaluation of a cloud service that we refer to as Cybersecurity Lab as a Service (CLaaS), which offers virtual cybersecurity experiments that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. In CLaaS, we exploit cloud computing systems and virtualization technologies to provide virtual cybersecurity experiments and hands-on experiences on how vulnerabilities are exploited to launch cyberattacks, how they can be removed, and how cyber resources and services can be hardened or better protected. We also present our experimental results and evaluation of CLaaS virtual cybersecurity experiments that have been used by graduate students taking our cybersecurity class as well as by high school students participating in GenCyber camps.
Keywords: Internet of Things; cloud computing; computer aided instruction; computer science education; educational courses; security of data; virtualisation; CLaaS; GenCyber camps; IT infrastructures; Internet connectivity; Internet of things; IoT; cloud computing systems; cloud service; cyber resources; cybersecurity lab as a service; cybersecurity teaching; cybersecurity training; graduate students; virtual cybersecurity experiments; virtualization technologies; Cloud computing; Computer crime; Network interfaces; Protocols; Servers; CLaaS and cloud computing; cybersecurity experiments; education; virtual cloud services; virtualization (ID#: 15-8190)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312173&isnumber=7312127
Ahad, R.; Chan, E.; Santos, A., "Toward Autonomic Cloud: Automatic Anomaly Detection and Resolution," in Cloud and Autonomic Computing (ICCAC), 2015 International Conference on, pp. 200-203, 21-25 Sept. 2015. doi: 10.1109/ICCAC.2015.32
Abstract: In this paper we describe an approach to implement an autonomic cloud. Our approach is based on our belief that if a computing system can automatically detect and correct anomalies - including response time anomalies, load anomalies, resource usage anomalies, and outages - then it can go a long way in reducing human involvement in keeping the system up, and that can lead to an autonomic system. We focus on a class of anomalies that are defined by normal values expected of key metrics. We describe a hierarchical rule-based anomaly detection and resolution framework for such a class of metrics.
Keywords: cloud computing; security of data; automatic anomaly detection; automatic anomaly resolution; autonomic cloud; load anomalies; outages; resource usage anomalies; response time anomalies; Assembly; Cloud computing; Computer architecture; Containers; Measurement; Monitoring; Quality of service; Anomaly; Autonomic Systems; Cloud; Rule-Based (ID#: 15-8191)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312155&isnumber=7312127
Tawalbeh, L.; Al-Qassas, R.S.; Darwazeh, N.S.; Jararweh, Y.; AlDosari, F., "Secure and Efficient Cloud Computing Framework," in Cloud and Autonomic Computing (ICCAC), 2015 International Conference on, pp. 291-295, 21-25 Sept. 2015. doi: 10.1109/ICCAC.2015.45
Abstract: Cloud computing is a very useful solution for many individual users and organizations. It can provide many services based on different needs and requirements. However, there are many issues related to user data that need to be addressed when using cloud computing. Among the most important issues are: data ownership, data privacy, and storage. Users might be satisfied by the services provided by cloud computing service providers, since they need not worry about the maintenance and storage of their data. On the other hand, they might be worried about unauthorized access to their private data. Some solutions to these issues have been proposed in the literature, but they mainly increase the cost and processing time since they depend on encrypting the whole data set. In this paper, we introduce a cloud computing framework that classifies data based on importance. In other words, more important data will be encrypted with a more secure encryption algorithm and larger key sizes, while less important data might not be encrypted at all. This approach is very helpful in reducing the processing cost and the complexity of data storage and manipulation, since we do not need to apply the same sophisticated encryption techniques to all of the users' data. The results of applying the proposed framework show improvement and efficiency over other existing frameworks.
Keywords: cloud computing; data privacy; security of data; cloud computing service providers; data encryption algorithm; data maintenance; data ownership; data privacy; data storage complexity; secure cloud computing framework; Cloud computing; Encryption; Mobile communication; Servers; Yttrium; Cloud Computing; Cryptography; Efficient framework; Information Security (ID#: 15-8192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312171&isnumber=7312127
Zhimin Gao; Desalvo, N.; Pham Dang Khoa; Seung Hun Kim; Lei Xu; Won Woo Ro; Verma, R.M.; Weidong Shi, "Integrity Protection for Big Data Processing with Dynamic Redundancy Computation," in Autonomic Computing (ICAC), 2015 IEEE International Conference on, pp. 159-160, 7-10 July 2015. doi: 10.1109/ICAC.2015.34
Abstract: Big data is a hot topic and has found various applications in different areas such as scientific research, financial analysis, and market studies. The development of cloud computing technology provides an adequate platform for big data applications. Whether public or private, the outsourcing and sharing characteristics of the computation model make security a big concern for big data processing in the cloud. Most existing works focus on protection of data privacy, but integrity protection of the processing procedure receives little attention, which may lead big data application users to wrong conclusions and cause serious consequences. To address this challenge, we design an integrity protection solution for big data processing in cloud environments using reputation-based redundancy computation. The implementation and experimental results show that the solution adds only limited cost to achieve integrity protection and is practical for real-world applications.
Keywords: Big Data; cloud computing; data integrity; data privacy; Big Data processing; cloud computing technology; dynamic redundancy computation; integrity protection solution; reputation based redundancy computation; Conferences; MapReduce; cloud computing; integrity protection (ID#: 15-8193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266957&isnumber=7266915
Sicari, S.; Rizzardi, A.; Coen-Porisini, A.; Grieco, L.A.; Monteil, T., "Secure OM2M Service Platform," in Autonomic Computing (ICAC), 2015 IEEE International Conference on, pp. 313-318, 7-10 July 2015. doi: 10.1109/ICAC.2015.59
Abstract: The Machine-to-Machine (M2M) paradigm is one of the main concerns of the Internet of Things (IoT). Its scope is to interconnect billions of heterogeneous devices able to interact in various application domains. Since M2M suffers from high vertical fragmentation of current M2M markets and a lack of standards, the European Telecommunications Standards Institute (ETSI) released a set of specifications for a common M2M service platform. An ETSI-compliant M2M service platform has been proposed in the context of the open source OM2M project. However, such a platform currently only marginally addresses security and privacy issues, which are fundamental requirements for its large-scale adoption. Therefore, an extension of the OM2M platform is proposed, defining a new policy enforcement plug-in, which aims to manage access to the resources provided by the platform itself and to handle any attempted violations of the policies.
Keywords: Internet of Things; computer network security; data privacy; ETSI-compliant M2M service platform; European Telecommunications Standards Institute; Internet of Things; IoT; M2M markets; M2M paradigm; heterogeneous devices; machine-to-machine paradigm; open source OM2M project; policy enforcement plug; privacy issues; secure OM2M service platform; security issues; violation attempts; Global Positioning System; Interoperability; Logic gates; Privacy; Protocols; Security; Standards; Internet of Things; OM2M; Security Enforcement (ID#: 15-8194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266986&isnumber=7266915
Kantert, J.; Spiegelberg, H.; Tomforde, S.; Hahner, J.; Muller-Schloer, C., "Distributed Rendering in an Open Self-Organised Trusted Desktop Grid," in Autonomic Computing (ICAC), 2015 IEEE International Conference on, pp. 267-272, 7-10 July 2015. doi: 10.1109/ICAC.2015.66
Abstract: Grid systems are an ideal basis for parallelising computationally intensive tasks that can efficiently be split into parts. One possible application domain for such systems is the rendering of films. Since small companies and underground film producers cannot maintain appropriate computing environments for their own films, grid-based approaches can be used to build a self-organised and autonomic computing infrastructure. To prevent such systems from being exploited by malicious agents, we present a novel approach introducing technical trust, which results in the Trusted Desktop Grid. In this paper, we demonstrate that the system is able to automatically isolate malicious agents and support efficient utilisation for benevolent agents -- resulting in a self-protecting and self-healing system.
Keywords: distributed processing; grid computing; rendering (computer graphics); security of data; trusted computing; autonomic computing infrastructure; benevolent agents; distributed rendering; grid systems; malicious agents; open self-organised trusted desktop grid; Bandwidth; Computational modeling; Law; Mathematical model; Rendering (computer graphics); Security; autonomous computing; distributed rendering; multi-agent systems; organic computing; trust; trusted desktop grid (ID#: 15-8196)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266978&isnumber=7266915
Schlatow, J.; Moestl, M.; Ernst, R., "An Extensible Autonomous Reconfiguration Framework for Complex Component-Based Embedded Systems," in Autonomic Computing (ICAC), 2015 IEEE International Conference on, pp. 239-242, 7-10 July 2015. doi: 10.1109/ICAC.2015.18
Abstract: We present a framework based on constraint satisfaction that adds self-integration capabilities to component-based embedded systems by identifying correct compositions of the desired components and their dependencies. This not only allows autonomous integration of additional functionality but can also be extended to ensure that the new configuration does not violate any extra-functional requirements, such as safety or security, imposed by the application domain.
Keywords: embedded systems; object-oriented programming; application domain; complex component-based embedded systems; extensible autonomous reconfiguration framework; self-integration capabilities; Adaptation models; Component architectures; Computer architecture; Contracts; Embedded systems; Encoding; Modeling; based; constraint satisfaction; embedded systems; incremental self-integration; software deployment (ID#: 15-8197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266973&isnumber=7266915
Bowu Zhang; Jinho Hwang; Ma, L.; Wood, T., "Towards Security-Aware Virtual Server Migration Optimization to the Cloud," in Autonomic Computing (ICAC), 2015 IEEE International Conference on, pp. 71-80, 7-10 July 2015. doi: 10.1109/ICAC.2015.45
Abstract: Cloud computing, featuring shared servers and location-independent services, has been widely adopted by various businesses to increase computing efficiency and reduce operational costs. Despite significant benefits and interest, enterprises have a hard time deciding whether or not to migrate thousands of servers into the cloud for various reasons, such as a lack of holistic migration (planning) tools, concerns about data security, and cloud vendor lock-in. In particular, cloud security has become the major concern for decision makers, due to the inherent weakness of virtualization -- the fact that the cloud allows multiple users to share resources through Internet-facing interfaces can easily be taken advantage of by hackers. Therefore, setting up a secure environment for resource migration becomes the top priority for both enterprises and cloud providers. To achieve the goal of security, security policies such as firewalls and access control have been widely adopted, leading to significant cost as additional resources need to be employed. In this paper, we address the challenge of security-aware virtual server migration and propose a migration strategy that minimizes the migration cost while satisfying the security needs of enterprises. We prove that the proposed security-aware cost minimization problem is NP-hard and that our solution can achieve an approximation factor of 2. We perform an extensive simulation study to evaluate the performance of the proposed solution under various settings. Our simulation results demonstrate that our approach can save 53% of moving cost for the single-enterprise case and 66% for the multiple-enterprise case compared to a random migration strategy.
Keywords: cloud computing; cost reduction; resource allocation; security of data; virtualisation; Internet-facing interfaces; NP hard problem; cloud computing; cloud security; cloud vendor lock-in; data security; moving cost savings; resource migration; resource sharing; security policy; security-aware cost minimization problem; security-aware virtual server migration optimization; virtualization; Approximation algorithms; Approximation methods; Cloud computing; Clustering algorithms; Home appliances; Security; Servers; Cloud Computing; Cloud Migration; Cloud Security; Cost Minimization (ID#: 15-8198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266936&isnumber=7266915
da Silva Machado, Roger; Borges Almeida, Ricardo; Correa Yamin, Adenauer; Marilza Pernas, Ana, "LogA-DM: An Approach of Dynamic Log Analysis," in Latin America Transactions, IEEE (Revista IEEE America Latina) , vol. 13, no. 9, pp. 3096-3102, Sept. 2015. doi: 10.1109/TLA.2015.7350064
Abstract: Ubiquitous computing requires high levels of connectivity, so attention to security aspects is indispensable. One strategy that can be applied to improve security is log analysis. Such strategies can be used to promote understanding of systems and, in particular, the detection of intrusion attempts. The operation of modern computing systems, such as those used in ubiquitous computing, tends to generate a large number of log records, which require automatic tools for easier analysis. Tools that employ data mining techniques for log analysis have been used to detect attempted attacks on computer systems, assisting security management. Thus, this paper proposes an approach to log analysis aimed at preventing attacks. The proposed solution explores two fronts: (i) log records of applications, and (ii) log records from the network and transport layers. To evaluate the proposed approach, a prototype was designed that employs modules for the collection and normalization of data. The normalization module also adds contextual information to assist the analysis of critical security situations. To preserve the system's autonomic operation, the records of the network and transport layers are collected and evaluated from connections in progress. Tests of the proposed solution show good results for typical categories of attack.
Keywords: Data mining; Middleware; Monitoring; Security; Ubiquitous computing; Visualization; Context-awareness; Data Mining; Log Analysis; Ubiquitous Computing (ID#: 15-8199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350064&isnumber=7350023
Beach, T.; Rana, O.; Rezgui, Y.; Parashar, M., "Governance Model for Cloud Computing in Building Information Management," in Services Computing, IEEE Transactions on, vol. 8, no. 2, pp. 314-327, March-April 2015. doi: 10.1109/TSC.2013.50
Abstract: The AEC (Architecture Engineering and Construction) sector is a highly fragmented, data-intensive, project-based industry, involving a number of very different professions and organisations. The industry's strong data sharing and processing requirements mean that the management of building data is complex and challenging. We present a data sharing capability utilising Cloud Computing, with two key contributions: 1) a governance model for building data, based on extensive research and industry consultation. This governance model describes how individual data artefacts within a building information model relate to each other and how access to this data is controlled; 2) a prototype implementation of this governance model, utilising the CometCloud autonomic cloud computing engine, using the Master/Work paradigm. This prototype is able to successfully store and manage building data, provide security based on a defined policy language, and demonstrate scale-out in case of increasing demand or node failure. Our prototype is evaluated both qualitatively and quantitatively. To enable this evaluation we have integrated our prototype with the 3D modelling software Google Sketchup. We also evaluate the prototype's performance when scaling to utilise additional nodes in the Cloud and to determine its performance in case of node failures.
Keywords: architecture; buildings (structures); civil engineering computing; cloud computing; fault tolerant computing; information management; solid modelling;3D modelling software; AEC sector; CometCloud autonomic cloud computing engine; Google Sketchup; architecture engineering and construction sector; building data management; building information management; cloud computing; governance model; industry data processing requirements; industry data sharing requirement; master-work paradigm; policy language; project based industry; Buildings; Cloud computing; Collaboration; Computational modeling; Data models; Solid modeling; Cloud computing; building information modelling; data management; distributed tuple space (ID#: 15-8200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6654157&isnumber=7080963
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
![]() |
Botnets 2015 |
Botnets, a common security threat, are used for a variety of attacks: spam, distributed denial of service (DDoS), adware and spyware, scareware, and brute-forcing services. Their reach, and the challenge of detecting and neutralizing them, are compounded in the cloud and on mobile networks. The research cited here was presented in 2015.
Carvalho, M., "Resilient Command and Control Infrastructures for Cyber Operations," in Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2015 IEEE/ACM 10th International Symposium on, pp. 97-97, 18-19 May 2015. doi: 10.1109/SEAMS.2015.17
Abstract: The concept of command and control (C2) is generally associated with the exercise of authority, direction and coordination of assets and capabilities. Traditionally, the concept has encompassed important operational functions such as the establishment of intent, allocation of roles and responsibilities, definition of rules and constraints, and the monitoring and estimation of system state, situation, and progress. More recently, the notion of C2 has been extended beyond military applications to include cyber operation environments and assets. Unfortunately this evolution has enjoyed faster progress and adoption on the offensive, rather than defensive side of cyber operations. One example is the adoption of advanced peer-to-peer C2 infrastructures for the control of malicious botnets and coordinated attacks, which have successfully yielded very effective and resilient control infrastructures in many instances. Defensive C2 is normally associated with a system's ability to monitor, interpret, reason, and respond to cyber events, often through advanced human-machine interfaces, or automated actions. For defensive operations, the concept is gradually evolving and gaining momentum. Recent research activities in this area are now showing great potential to enable truly resilient cyber defense infrastructures. In this talk I will introduce some of the motivations, requirements, and challenges associated with the design of distributed command and control infrastructures for cyber operations. The talk will primarily focus on the resilience aspects of distributed C2, and will cover a brief overview of the prior research in the field, as well as discussions on some of the current and future challenges in this important research domain.
Keywords: command and control systems; security of data; advanced peer-to-peer C2 infrastructures; coordinated attacks ;cyber operation environments; malicious botnets; military applications; resilient command and control infrastructures; resilient cyber defense infrastructures; Adaptive systems; Command and control systems; Computer science; Computer security; Mechanical engineering; Monitoring; Software engineering; command and control; resilience; self-adaptation (ID#: 15-8224)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194663&isnumber=7194643
Al-Hakbani, M.M.; Dahshan, M.H., "Avoiding Honeypot Detection in Peer-to-Peer Botnets," in Engineering and Technology (ICETECH), 2015 IEEE International Conference on, pp. 1-7, 20-20 March 2015. doi: 10.1109/ICETECH.2015.7275017
Abstract: A botnet is a group of compromised computers that are controlled by a botmaster, who uses them to perform illegal activities. Centralized and P2P (Peer-to-Peer) botnets are the most commonly used botnet types. Honeypots have been used in many systems as a computer defense. They are used to attract botmasters into adding them to their botnets, so that they become spies that expose botnet attacker behaviors. In recent research works, improved mechanisms for honeypot detection have been proposed. Such mechanisms would enable botmasters to distinguish honeypots from real bots, making it more difficult for honeypots to join botnets. This paper presents a new method that can be used by security defenders to overcome the authentication procedure used by the advanced two-stage reconnaissance worm (ATSRW). The presented method utilizes the peer list information sent by an infected host during the ATSRW authentication process and uses a combination of IP address spoofing and a fake TCP three-way handshake. The paper provides an analytical study of the performance and the success probability of the presented method. We show that the presented method provides a higher chance for honeypots to join botnets despite security measures taken by botmasters.
Keywords: message authentication; peer-to-peer computing; ATSRW authentication process; IP address spoofing; advanced two-stage reconnaissance worm; centralized botnet; fake TCP three-way handshake; honeypot detection; peer-to-peer botnets; success probability; Authentication; Computers; Delays; Grippers; IP networks; Peer-to-peer computing;P2P;botnet;detecting; honeypot; honeypot aware; peer-to-peer (ID#: 15-8225)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275017&isnumber=7274993
Bock, Leon; Karuppayah, Shankar; Grube, Tim; Muhlhauser, Max; Fischer, Mathias, "Hide and Seek: Detecting Sensors in P2P Botnets," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 731-732, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346908
Abstract: Many cyber-crimes, such as Denial of Service (DoS) attacks and banking frauds, originate from botnets. To prevent botnets from being taken down easily, botmasters have adopted peer-to-peer (P2P) mechanisms to prevent any single point of failure. However, sensor nodes that are often used both for monitoring and for executing sinkholing attacks are threatening such botnets. In this paper, we introduce a novel mechanism to detect sensor nodes in P2P botnets using the clustering coefficient as a metric. We evaluated our mechanism on the real-world botnet Sality over the course of a week and were able to detect an average of 25 sensors per day with a false positive rate of 20%.
Keywords: Monitoring; Peer-to-peer computing (ID#: 15-8226)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346908&isnumber=7346791
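The clustering coefficient used as a detection metric here measures how interconnected a node's neighbors are; a sensor node, whose neighbors are essentially random bots, tends to score low. A minimal sketch (assuming an undirected graph stored as adjacency sets; the paper's thresholds and Sality crawling machinery are not shown):

```python
from itertools import combinations

def clustering_coefficient(graph, node):
    """Fraction of pairs of node's neighbors that are themselves connected."""
    neighbors = graph[node]
    k = len(neighbors)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(neighbors, 2) if v in graph[u])
    return 2.0 * links / (k * (k - 1))

# Toy graph: 's' (a sensor-like node) contacts many bots that mostly
# do not know each other, while 'a' sits in a well-meshed neighborhood.
graph = {
    "a": {"b", "s"}, "b": {"a", "s"},
    "c": {"s"}, "d": {"s"},
    "s": {"a", "b", "c", "d"},
}
print(clustering_coefficient(graph, "a"))            # 1.0
print(round(clustering_coefficient(graph, "s"), 2))  # 0.17 (low: sensor candidate)
```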
Eslahi, M.; Rohmad, M.S.; Nilsaz, H.; Naseri, M.V.; Tahir, N.M.; Hashim, H., "Periodicity Classification of HTTP Traffic to Detect HTTP Botnets," in Computer Applications & Industrial Electronics (ISCAIE), 2015 IEEE Symposium on, pp. 119-123, 12-14 April 2015. doi: 10.1109/ISCAIE.2015.7298339
Abstract: Recently, the HTTP based Botnet threat has become a serious challenge for security experts as Bots can be distributed quickly and stealthily. With the HTTP protocol, Bots hide their communication flows within the normal HTTP flows making them more stealthy and difficult to detect. Furthermore, since the HTTP service is being widely used by the Internet applications, it is not easy to block this service as a precautionary measure and other techniques are required to detect and deter the Bot menace. The HTTP Bots periodically connect to particular web pages or URLs to get commands and updates from the Botmaster. In fact, this identifiable periodic connection pattern has been used in several studies as a feature to detect HTTP Botnets. In this paper, we review the current studies on detection of periodic communications in HTTP Botnets as well as the shortcomings of these methods. Consequently, we propose three metrics to be used in identifying the types of communication patterns according to their periodicity. Test results show that in addition to detecting HTTP Botnet communication patterns with 80% accuracy, the proposed method is able to efficiently classify communication patterns into several periodicity categories.
Keywords: Internet; invasive software; pattern classification; telecommunication security; telecommunication traffic; transport protocols; HTTP based botnet threat; HTTP botnet communication patterns; HTTP botnets detection; HTTP flows; HTTP protocol; HTTP service; HTTP traffic; Internet applications; URL; Web pages; botmaster; communication flows; periodic communications detection; periodic connection pattern; periodicity categories; periodicity classification; Command and control systems; Decision trees; Internet; Measurement; Radio frequency; Security; Servers; Botnet Detection; Command and Control Mechanism; HTTP Botnet; Internet Security; Mobile Botnets; Periodic Pattern (ID#: 15-8227)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298339&isnumber=7298288
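The periodicity idea can be illustrated with a toy metric of our own choosing, the coefficient of variation of inter-request gaps (the paper proposes its own three metrics, which are not reproduced here): values near zero indicate the clockwork polling typical of HTTP bots.

```python
from statistics import mean, pstdev

def periodicity_score(timestamps):
    """Coefficient of variation of inter-arrival times: near 0 => periodic."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return pstdev(gaps) / m if m else float("inf")

def classify(timestamps, strict=0.1, loose=0.5):
    # Illustrative category boundaries, not the paper's.
    score = periodicity_score(timestamps)
    if score < strict:
        return "strictly periodic"   # e.g. a bot polling its C&C URL
    if score < loose:
        return "loosely periodic"
    return "aperiodic"

bot_like = [0, 60, 120, 180, 240]      # one request per minute
human_like = [0, 3, 40, 40.5, 300]     # bursty browsing
print(classify(bot_like))    # strictly periodic
print(classify(human_like))  # aperiodic
```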
Venkatesan, Sridhar; Albanese, Massimiliano; Jajodia, Sushil, "Disrupting Stealthy Botnets through Strategic Placement of Detectors," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 95-103, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346816
Abstract: In recent years, botnets have gained significant attention due to their extensive use in various kinds of criminal or otherwise unauthorized activities. Botnets have become increasingly sophisticated, and studies have shown that they can significantly reduce their footprint and increase their dwell time. Therefore, modern botnets can operate in stealth mode and evade detection for extended periods of time. In order to address this problem, we propose a proactive approach to strategically deploy detectors on selected network nodes, so as to either completely disrupt communication between bots and command and control nodes, or at least force the attacker to create more bots, therefore increasing the footprint of the botnet and the likelihood of detection. As the detector placement problem is intractable, we propose heuristics based on several centrality measures. Simulation results confirm that our approach can effectively increase complexity for the attacker.
Keywords: Command and control systems; Communication networks; Detectors; Mission critical systems; Peer-to-peer computing; Security; Servers (ID#: 15-8228)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346816&isnumber=7346791
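As a toy version of the centrality-based heuristics the abstract mentions (the paper's actual measures and placement algorithm are not reproduced), one can rank nodes by degree centrality and deploy detectors on the top-k:

```python
def degree_centrality(graph):
    """Degree centrality: neighbor count normalized by the n-1 possible links."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def place_detectors(graph, k):
    # Greedy heuristic: deploy detectors on the k most central nodes,
    # breaking ties alphabetically for determinism.
    ranked = sorted(degree_centrality(graph).items(),
                    key=lambda kv: (-kv[1], kv[0]))
    return [v for v, _ in ranked[:k]]

# Toy network: 'hub' relays traffic for every other node.
net = {
    "hub": {"a", "b", "c", "d"},
    "a": {"hub"}, "b": {"hub"}, "c": {"hub", "d"}, "d": {"hub", "c"},
}
print(place_detectors(net, 2))  # ['hub', 'c']
```

Placing a detector on the hub forces bot-to-C&C traffic either through a monitored node or onto longer, more visible paths, which is the trade-off the paper exploits.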
Lysenko, S.; Pomorova, O.; Savenko, O.; Kryshchuk, A.; Bobrovnikova, K., "DNS-Based Anti-Evasion Technique for Botnets Detection," in Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2015 IEEE 8th International Conference on, vol. 1, pp. 453-458, 24-26 Sept. 2015. doi: 10.1109/IDAACS.2015.7340777
Abstract: A new DNS-based anti-evasion technique for botnets detection is proposed. It is based on a cluster analysis of the features obtained from the payload of DNS-messages. The method uses a semi-supervised fuzzy c-means clustering. Usage of the developed method makes it possible to detect botnets that use the DNS-based evasion techniques with high efficiency.
Keywords: fuzzy set theory; invasive software; pattern clustering; DNS; antievasion technique; botnets detection; cluster analysis; semisupervised fuzzy c-means clustering; Buildings; Entropy; Feature extraction; IP networks; Internet; Payloads; Servers; DNS-tunneling; botnet; botnet detection; botnet's evasion technique; domain flux; fast-flux service network (ID#: 15-8229)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7340777&isnumber=7340677
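One plausible payload feature for DNS-based detection, hinted at by the "Entropy" keyword above, is the Shannon entropy of query labels: tunneled or algorithmically generated names tend to look random. This is a hedged sketch of such a feature only; the paper's actual feature set and its semi-supervised fuzzy c-means step are not shown.

```python
import math
from collections import Counter

def shannon_entropy(label):
    """Bits per character of a DNS label; tunneled payloads tend to be high."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

benign = "mail"                              # first label of mail.example.com
tunnel = "aGVsbG8td29ybGQtZXhmaWw0".lower()  # base64-like label (invented)
print(round(shannon_entropy(benign), 2))     # 2.0
print(shannon_entropy(tunnel) > shannon_entropy(benign))  # True
```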
An Wang; Mohaisen, A.; Wentao Chang; Songqing Chen, "Delving into Internet DDoS Attacks by Botnets: Characterization and Analysis," in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, pp. 379-390, 22-25 June 2015. doi: 10.1109/DSN.2015.47
Abstract: Internet Distributed Denial of Service (DDoS) attacks are prevalent but hard to defend against, partially due to the volatility of the attacking methods and patterns used by attackers. Understanding the latest DDoS attacks can provide new insights for effective defense. But most existing understandings are based on indirect traffic measures (e.g., backscatters) or traffic seen locally. In this study, we present an in-depth analysis based on 50,704 different Internet DDoS attacks directly observed in a seven-month period. These attacks were launched by 674 botnets from 23 different botnet families with a total of 9,026 victim IPs belonging to 1,074 organizations in 186 countries. Our analysis reveals several interesting findings about today's Internet DDoS attacks. Some highlights include: (1) geolocation analysis shows that the geospatial distribution of the attacking sources follows certain patterns, which enables very accurate source prediction of future attacks for most active botnet families, (2) from the target perspective, multiple attacks to the same target also exhibit strong patterns of inter-attack time interval, allowing accurate start time prediction of the next anticipated attacks from certain botnet families, (3) there is a trend for different botnets to launch DDoS attacks targeting the same victim, simultaneously or in turn. These findings add to the existing literature on the understanding of today's Internet DDoS attacks, and offer new insights for designing new defense schemes at different levels.
Keywords: IP networks; computer network security; telecommunication traffic; Internet DDoS attacks; Internet distributed denial-of-service; botnet families; geolocation analysis; geospatial distribution; indirect traffic measures; interattack time interval; source prediction; start time prediction; victim IP; Cities and towns; Computer crime; Geology; IP networks; Internet; Malware; Organizations (ID#: 15-8230)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266866&isnumber=7266818
Qiben Yan; Yao Zheng; Tingting Jiang; Wenjing Lou; Hou, Y.T., "PeerClean: Unveiling Peer-to-Peer Botnets Through Dynamic Group Behavior Analysis," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 316-324, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218396
Abstract: Advanced botnets adopt a peer-to-peer (P2P) infrastructure for more resilient command and control (C&C). Traditional detection techniques become less effective in identifying bots that communicate via a P2P structure. In this paper, we present PeerClean, a novel system that detects P2P botnets in real time using only high-level features extracted from C&C network flow traffic. PeerClean reliably distinguishes P2P bot-infected hosts from legitimate P2P hosts by jointly considering flow-level traffic statistics and network connection patterns. Instead of working on individual connections or hosts, PeerClean clusters hosts with similar flow traffic statistics into groups. It then extracts the collective and dynamic connection patterns of each group by leveraging a novel dynamic group behavior analysis. Comparing with the individual host-level connection patterns, the collective group patterns are more robust and differentiable. Multi-class classification models are then used to identify different types of bots based on the established patterns. To increase the detection probability, we further propose to train the model with average group behavior, but to explore the extreme group behavior for the detection. We evaluate PeerClean on real-world flow records from a campus network. Our evaluation shows that PeerClean is able to achieve high detection rates with few false positives.
Keywords: command and control systems; feature extraction; invasive software; pattern classification; peer-to-peer computing; probability; statistical analysis; telecommunication traffic; C&C network flow traffic; P2P bot-infected host; P2P botnet; PeerClean; command and control; detection probability; detection technique; dynamic group behavior analysis; flow level traffic statistic; high-level feature extraction; multiclass classification model; network connection pattern; peer-to-peer botnet; Computers; Conferences; Feature extraction; Peer-to-peer computing; Robustness; Support vector machines; Training (ID#: 15-8231)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218396&isnumber=7218353
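PeerClean's first step, clustering hosts with similar flow statistics into groups, can be caricatured by bucketing coarse per-host features (a stand-in sketch with invented features and bin sizes, not the paper's clustering or classification models):

```python
from collections import defaultdict

def group_by_flow_stats(flows, size_bin=100, rate_bin=5):
    """Cluster hosts with similar flow statistics into coarse groups
    (a crude stand-in for the paper's clustering step)."""
    groups = defaultdict(list)
    for host, (avg_pkt_size, flows_per_min) in flows.items():
        key = (avg_pkt_size // size_bin, flows_per_min // rate_bin)
        groups[key].append(host)
    # Only multi-host groups exhibit the collective behavior of interest.
    return [sorted(members) for members in groups.values() if len(members) > 1]

flows = {
    "10.0.0.1": (220, 12),   # bot-like: small, uniform C&C flows
    "10.0.0.2": (230, 13),
    "10.0.0.3": (1400, 2),   # legitimate bulk transfer
}
print(group_by_flow_stats(flows))  # [['10.0.0.1', '10.0.0.2']]
```

The actual system then extracts dynamic connection patterns per group and feeds them to multi-class classifiers, steps omitted here.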
Singh, K.J.; De, T., "DDOS Attack Detection and Mitigation Technique Based on Http Count and Verification Using CAPTCHA," in Computational Intelligence and Networks (CINE), 2015 International Conference on, pp. 196-197, 12-13 Jan. 2015. doi: 10.1109/CINE.2015.47
Abstract: With the rapid development of the internet, the number of people who are online has increased tremendously. But nowadays we find not only growing positive use of the internet but also negative use of it. The misuse and abuse of the internet is growing at an alarming rate. There are many cases of viruses and worms infecting systems that have software vulnerabilities. These systems can even become clients for bot herders. Such infected systems aid in launching DDoS attacks against a target server. In this paper we introduce the concept of IP blacklisting, which blocks all blacklisted IP addresses; an HTTP count filter, which enables us to distinguish normal from suspected IP addresses; and the CAPTCHA technique to check whether these suspected IP addresses are controlled by a human or a botnet.
Keywords: Internet; client-server systems; computer network security; computer viruses; transport protocols; CAPTCHA; DDOS attack detection; DDOS attack mitigation technique; HTTP count filter; HTTP verification; IP address; IP blacklisting; Internet; botnet; software vulnerability; target server; virus; worms; CAPTCHAs; Computer crime; IP networks; Internet; Radiation detectors; Servers; bot; botnets; captcha; filter; http; mitigation (ID#: 15-8232)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053830&isnumber=7053782
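The triage the abstract describes, blacklist first, then an HTTP request-count filter, then a CAPTCHA challenge for suspects, might look like this in outline (the threshold, addresses, and function names are invented for illustration):

```python
from collections import Counter

BLACKLIST = {"203.0.113.7"}     # known-bad addresses, blocked outright
SUSPECT_THRESHOLD = 100         # max requests per window (assumed value)

def triage(requests):
    """Split source IPs into blocked / suspected / normal.
    Suspected IPs would then be challenged with a CAPTCHA."""
    counts = Counter(requests)
    verdicts = {}
    for ip, n in counts.items():
        if ip in BLACKLIST:
            verdicts[ip] = "blocked"
        elif n > SUSPECT_THRESHOLD:
            verdicts[ip] = "suspected"   # send CAPTCHA challenge
        else:
            verdicts[ip] = "normal"
    return verdicts

traffic = ["198.51.100.4"] * 150 + ["192.0.2.9"] * 3 + ["203.0.113.7"]
print(triage(traffic))
```

A host that fails the CAPTCHA stage would then be treated as bot-controlled and added to the blacklist.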
Sanatinia, A.; Noubir, G., "OnionBots: Subverting Privacy Infrastructure for Cyber Attacks," in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, pp. 69-80, 22-25 June 2015. doi: 10.1109/DSN.2015.40
Abstract: Over the last decade botnets survived by adopting a sequence of increasingly sophisticated strategies to evade detection and takeovers, and to monetize their infrastructure. At the same time, the success of privacy infrastructures such as Tor opened the door to illegal activities, including botnets, ransomware, and a marketplace for drugs and contraband. We contend that the next waves of botnets will extensively attempt to subvert privacy infrastructure and cryptographic mechanisms. In this work we propose to preemptively investigate the design and mitigation of such botnets. We first introduce OnionBots, which we believe will be the next generation of resilient, stealthy botnets. OnionBots use privacy infrastructures for cyber attacks by completely decoupling their operation from the infected host IP address and by carrying traffic that does not leak information about its source, destination, and nature. Such bots live symbiotically within the privacy infrastructures to evade detection, measurement, scale estimation, observation, and in general all IP-based current mitigation techniques. Furthermore, we show that with an adequate self-healing network maintenance scheme, that is simple to implement, OnionBots can achieve a low diameter and a low degree and be robust to partitioning under node deletions. We develop a mitigation technique, called SOAP, that neutralizes the nodes of the basic OnionBots. In light of the potential of such botnets, we believe that the research community should proactively develop detection and mitigation methods to thwart OnionBots, potentially making adjustments to privacy infrastructure.
Keywords: IP networks; computer network management; computer network security; data privacy; fault tolerant computing; telecommunication traffic; Cyber Attacks; IP-based mitigation techniques; OnionBots; SOAP; Tor; botnets; cryptographic mechanisms; destination information; host IP address; illegal activities; information nature; node deletions; privacy infrastructure subversion; resilient-stealthy botnets; self-healing network maintenance scheme; source information; Cryptography; IP networks; Maintenance engineering; Peer-to-peer computing; Privacy; Relays; Servers; Tor; botnet; cyber security; privacy infrastructure; self-healing network (ID#: 15-8233)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266839&isnumber=7266818
Karuppayah, S.; Roos, S.; Rossow, C.; Muhlhauser, M.; Fischer, M., "Zeus Milker: Circumventing the P2P Zeus Neighbor List Restriction Mechanism," in Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, pp. 619-629, June 29 2015-July 2 2015. doi: 10.1109/ICDCS.2015.69
Abstract: The emerging trend of highly-resilient P2P botnets poses a huge security threat to our modern society. Carefully designed countermeasures as applied in sophisticated P2P botnets such as P2P Zeus impede botnet monitoring and successive takedown. These countermeasures reduce the accuracy of the monitored data, such that an exact reconstruction of the botnet's topology is hard to obtain efficiently. However, an accurate topology snapshot, revealing particularly the identities of all bots, is crucial to execute effective botnet takedown operations. With the goal of obtaining the required snapshot in an efficient manner, we provide a detailed description and analysis of the P2P Zeus neighbor list restriction mechanism. As our main contribution, we propose ZeusMilker, a mechanism for circumventing the existing anti-monitoring countermeasures of P2P Zeus. In contrast to existing approaches, our mechanism deterministically reveals the complete neighbor lists of bots and hence can efficiently provide a reliable topology snapshot of P2P Zeus. We evaluated ZeusMilker on a real-world dataset and found that it outperforms state-of-the-art techniques for botnet monitoring with regard to the number of queries needed to retrieve a bot's complete neighbor list. Furthermore, ZeusMilker is provably optimal in retrieving the complete neighbor list, requiring at most 2n queries for an n-elemental list. Moreover, we also evaluated how the performance of ZeusMilker is impacted by various protocol changes designed to undermine its provable performance bounds.
Keywords: computer network security; invasive software; peer-to-peer computing; telecommunication network topology;P2P Zeus impede botnet monitoring;P2P Zeus neighbor list restriction mechanism; ZeusMilker mechanism; anti-monitoring countermeasures; botnet topology exact reconstruction; effective botnet takedown operations; highly-resilient P2P botnets; n-elemental list; security threat; topology snapshot; Algorithm design and analysis; Complexity theory; Crawlers; Monitoring; Peer-to-peer computing; Protocols; Topology; Anti-monitoring countermeasures; P2P Zeus; XOR metric; botnet; milking (ID#: 15-8234)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164947&isnumber=7164877
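Although the paper's milking strategy is not reproduced here, the XOR metric it exploits (familiar from Kademlia-style DHTs) is easy to illustrate: a bot answers a lookup with the neighbors closest to the requested key, so a crawler can choose keys across the ID space to expose different slices of the restricted neighbor list. A hedged sketch with tiny 4-bit identifiers:

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia-style XOR metric over node identifiers."""
    return a ^ b

def closest_neighbors(peer_list, key, count=3):
    """The entries a bot would return for a lookup key: the `count`
    neighbors closest to the key under the XOR metric."""
    return sorted(peer_list, key=lambda nid: xor_distance(nid, key))[:count]

peers = [0b0001, 0b0010, 0b0100, 0b1000, 0b1111]
# Keys spread across the ID space pull different subsets into view.
print(closest_neighbors(peers, 0b0000))  # [1, 2, 4]
print(closest_neighbors(peers, 0b1110))  # [15, 8, 4]
```

ZeusMilker's contribution is choosing such keys deterministically so that an n-element neighbor list is fully revealed in at most 2n queries.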
Garg, V.; Camp, L.J., "Spare the Rod, Spoil the Network Security? Economic Analysis of Sanctions Online," in Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1-10, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120800
Abstract: When and how should we encourage network providers to mitigate the harm of security and privacy risks? Poorly designed interventions that do not align with economic incentives can lead stakeholders to be less, rather than more, careful. We apply an economic framework that compares two fundamental regulatory approaches: risk based or ex ante, and harm based or ex post. We posit that for well known security risks, such as botnets, ex ante sanctions are economically efficient. Systematic best practices, e.g. patching, can reduce the risk of becoming a bot and thus can be implemented ex ante. Conversely, risks that are contextual, poorly understood, and new, and where the distribution of harm is difficult to estimate, should incur ex post sanctions, e.g. information disclosure. Privacy preferences and potential harm vary widely across domains; thus, post-hoc consideration of harm is more appropriate for privacy risks. We examine two current policy and enforcement efforts, i.e. Do Not Track and botnet takedowns, under the ex ante vs. ex post framework. We argue that these efforts may worsen security and privacy outcomes, as they distort market forces, reduce competition, or create artificial monopolies. Finally, we address the overlap between security and privacy risks.
Keywords: computer network security; data privacy; invasive software; risk management; Do Not Track approach; botnet takedowns; botnets; economic incentives; ex-ante sanction approach; ex-post sanction approach; fundamental regulatory approaches; harm based approach; information disclosure; network security; online sanction economic analysis; patching method privacy risks; risk reduction; risk-based approach; security risks; Biological system modeling; Companies; Economics; Google; Government; Privacy; Security (ID#: 15-8235)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120800&isnumber=7120794
Khatri, V.; Abendroth, J., "Mobile Guard Demo: Network Based Malware Detection," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 1177-1179, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.501
Abstract: The growing trend of data traffic in mobile networks brings new security threats such as malware, botnets, premium SMS fraud, etc., and these threats affect network resources in terms of revenue as well as performance. Some end user devices use antivirus and anti-malware clients for protection against malware attacks, but the malicious activity affects mobile network elements as well. Therefore, a network based malware detection system, such as Mobile Guard, is essential for detecting malicious activities within a network, as well as protecting end users from malware attacks that are propagated through a mobile operator's network. We present Mobile Guard -- a network based malware detection system -- and discuss its necessity, solution architecture and key features.
Keywords: computer network security; invasive software; mobile computing; radio networks; antimalware clients; botnets; data traffic; malicious activity detection; malware attacks; mobile network elements; mobile networks; mobile operator network; network based malware detection system; premium SMS frauds; security threats; Conferences; Malware; Mobile communication; Mobile computing; Mobile handsets; Privacy; Antivirus; Malware; Mobile Guard; Mobile Network; Network Based Malware Detection (ID#: 15-8236)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345409&isnumber=7345233
Leszczyna, R.; Wrobel, M.R., "Evaluation of Open Source SIEM for Situation Awareness Platform in the Smart Grid Environment," in Factory Communication Systems (WFCS), 2015 IEEE World Conference on, pp. 1-4, 27-29 May 2015. doi: 10.1109/WFCS.2015.7160577
Abstract: The smart grid as a large-scale system of systems has an exceptionally large surface exposed to cyber-attacks, including highly evolved and sophisticated threats such as Advanced Persistent Threats (APT) or Botnets. When addressing this situation the usual cyber security technologies are prerequisite, but not sufficient. The smart grid requires developing and deploying an extensive ICT infrastructure that supports significantly increased situational awareness and enables detailed and precise command and control. The paper presents one of the studies related to the development and deployment of the Situation Awareness Platform for the smart grid, namely the evaluation of open source Security Information and Event Management systems. These systems are the key components of the platform.
Keywords: Internet; computer network security; grid computing; public domain software; APT; ICT infrastructure; advanced persistent threats; botnets; command-and-control; cyber-attacks; open source SIEM evaluation; open source security information-and-event management systems; situation awareness platform; smart grid environment; Computer security; NIST; Sensor systems; Smart grids; Software; SIEM; evaluation; situation awareness; smart grid (ID#: 15-8237)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160577&isnumber=7160536
Badis, H.; Doyen, G.; Khatoun, R., "A Collaborative Approach for a Source Based Detection of Botclouds," in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, pp. 906-909, 11-15 May 2015. doi: 10.1109/INM.2015.7140406
Abstract: In recent years, cloud computing has played an important role in providing high-quality IT services. However, beyond legitimate usage, the numerous advantages it presents are now exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. In this paper, we present an original approach that enables collaborative egress detection of DDoS attacks leveraged by a botcloud. We provide an early evaluation of our approach using simulations that rely on real workload traces, showing our detection system's effectiveness and low overhead, as well as its support for incremental deployment in real cloud infrastructures.
Keywords: cloud computing; computer network security; groupware; software agents; DDoS attacks; IT services; botclouds; botnets; cloud computing; cloud infrastructures; collaborative approach; collaborative egress detection; incremental deployment; source based detection; workload traces; Biomedical monitoring; Cloud computing; Collaboration; Computer crime; Monitoring; Peer-to-peer computing; Principal component analysis (ID#: 15-8238)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140406&isnumber=7140257
Vokorokos, L.; Drienik, P.; Fortotira, O.; Hurtuk, J., "Abusing Mobile Devices for Denial of Service Attacks," in Applied Machine Intelligence and Informatics (SAMI), 2015 IEEE 13th International Symposium on, pp. 21-24, 22-24 Jan. 2015. doi: 10.1109/SAMI.2015.7061886
Abstract: The growing popularity of mobile devices has led to the rise of mobile malware. It is also one of the reasons why the number of new mobile malware families, which are secretly connected over the internet to a remote Command & Control server, is increasing. This gives attackers the ability to create botnets for Denial of Service attacks or the mining of cryptocurrencies. This paper discusses the state of the art in computer and mobile security. The paper also presents a proof of concept that can be used to abuse mobile devices' capabilities for malicious purposes. A Distributed Denial of Service attack scenario is presented using smartphones with the Android operating system against a wireless network. Measured results and techniques are presented, including a description of an Android application created specially for this purpose.
Keywords: Android (operating system);computer crime; computer network security; invasive software; mobile computing; smart phones; Android application; Android operating system; Internet; botnets; computer security; cryptocurrencies mining; distributed denial of service attack; malicious purposes; mobile devices capabilities; mobile malware; mobile security; remote command & control server; smartphones; wireless network; Androids; Computer crime; Humanoid robots; Mobile communication; Servers; Smart phones (ID#: 15-8239)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7061886&isnumber=7061844
Shanthi, K.; Seenivasan, D., "Detection of Botnet by Analyzing Network Traffic Flow Characteristics Using Open Source Tools," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-5, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282353
Abstract: Botnets are emerging as the most serious cyber threat among different forms of malware. Today botnets facilitate the launch of many cybercriminal activities such as DDoS, click fraud, and phishing attacks. The main purpose of a botnet is to pose a massive financial threat. Many large organizations, banks, and social networks have become the targets of bot masters. Botnets can also be leased to support cybercriminal activities. Recently, considerable research effort has been devoted to detecting bots, C&C channels, and bot masters. Bot masters, in turn, strengthen their activities through sophisticated techniques. Many botnet detection techniques are based on payload analysis, and most of these are inefficient for encrypted C&C channels. In this paper we explore different categories of botnet and propose a detection methodology that classifies bot hosts apart from normal hosts by analyzing traffic flow characteristics based on time intervals instead of payload inspection. This makes it possible to detect botnet activity even when encrypted C&C channels are used.
Keywords: computer crime; computer network security; fraud; invasive software; pattern classification; public domain software; C&C channels; DDoS; bot host classification; bot masters; botnet activity detection; botnet detection technique; click fraud; cyber threat; cybercriminal activities; encrypted C&C channel; financial threat; malware; network traffic flow characteristics analysis; open source tools; payload analysis; payload inspection; phishing attack; Bluetooth; Conferences; IP networks; Mobile communication; Payloads; Servers; Telecommunication traffic; Bot; Bot master; Botnet; Botnet cloud; Mobile Botnet (ID#: 15-8240)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282353&isnumber=7282219
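The abstract above describes classifying bot hosts from time-interval characteristics rather than payload inspection, but gives no algorithmic detail. As a minimal, hypothetical sketch of the idea (the threshold and function names are invented here, not taken from the paper), a flow whose packet inter-arrival times are unusually regular, as with periodic C&C beaconing, can be flagged:

```python
from statistics import mean, pstdev

def interarrival_cv(timestamps):
    """Coefficient of variation of inter-arrival times for one flow."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return pstdev(gaps) / m if m else float("inf")

def classify_flow(timestamps, threshold=0.1):
    """Label a flow 'bot-like' if its beaconing is highly regular.

    Bots often contact C&C servers at near-constant intervals, so a low
    coefficient of variation in inter-arrival times is suspicious.
    """
    return "bot-like" if interarrival_cv(timestamps) < threshold else "normal"
```

The coefficient-of-variation threshold is illustrative only; a real detector would combine many such flow features and tune cut-offs on labeled traces.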
Han Zhang; Papadopoulos, C., "BotTalker: Generating Encrypted, Customizable C&C Traces," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225305
Abstract: Encrypted botnets have seen increasing use in recent years. To enable research in detecting encrypted botnets, researchers need samples of encrypted botnet traces with ground truth, which are very hard to get. Traces that are available are not customizable, which prevents testing under various controlled scenarios. To address this problem we introduce BotTalker, a tool that can be used to generate customized encrypted botnet communication traffic. BotTalker emulates the actions a bot would take to encrypt communication. It includes a highly configurable encrypted-traffic converter along with real, non-encrypted bot traces and background traffic. The converter is able to convert non-encrypted botnet traces into encrypted ones by providing customization along three dimensions: (a) selection of a real encryption algorithm, (b) flow or packet level conversion and SSL emulation, and (c) IP address substitution. To the best of our knowledge, BotTalker is the first work that provides users customized encrypted botnet traffic. In the paper we also apply BotTalker to evaluate the damage resulting from encrypted botnet traffic on a widely used botnet detection system, BotHunter, and two IDSs, Snort and Suricata. The results show that encrypted botnet traffic foils bot detection in these systems.
Keywords: IP networks; authorisation; computer network security; cryptography; invasive software; telecommunication traffic; BotHunter; BotTalker; IDS; IP address substitution; SSL emulation; Snort; Suricata; background traffic; botnet detection system; configurable encrypted-traffic converter; customized encrypted botnet traffic; encrypted botnet traces; encrypted customizable C&C traces; flow level conversion; ground truth; packet level conversion; real encryption algorithm; Ciphers; Emulation; Encryption; IP networks; Payloads; Servers (ID#: 15-8241)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225305&isnumber=7190491
Zigang Cao; Gang Xiong; Li Guo, "MimicHunter: A General Passive Network Protocol Mimicry Detection Framework," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 271-278, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.384
Abstract: Network based intrusions and information theft events are becoming more and more popular today. To bypass the network security devices such as firewall, intrusion detection/prevention system (IDS/IPS) and web application firewall, attackers use evasive techniques to circumvent them, of which protocol mimicry is a very useful approach. The technique camouflages malicious communications as common protocols or generally innocent applications to avoid network security audit, which has been widely used in advanced Trojans, botnets, as well as anonymous communication systems, bringing a great challenge to current network management and security. To this end, we propose a general network protocol mimicry behavior discovery framework named MimicHunter to detect such evasive masquerade behaviors, which exploits protocol structure and state transition verifications, as well as primary protocol behavior elements. Experiment results on several datasets demonstrate the effectiveness of our method in practice. Besides, MimicHunter is flexible in deployment and can be easily implemented in passive detection systems with only a little cost compared with the active methods.
Keywords: security of data; IDS-IPS; MimicHunter framework; Web application firewall; evasive techniques; firewall; information theft events; intrusion detection system; intrusion prevention system; network based intrusion; network security audit; network security devices; passive network protocol mimicry detection framework; Inspection; Intrusion detection; MIMICs; Malware; Payloads; Protocols; evasive attack; intrusion detection; protocol mimicry; protocol structure; state transition (ID#: 15-8242)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345292&isnumber=7345233
Ichise, H.; Yong Jin; Iida, K., "Analysis of Via-Resolver DNS TXT Queries and Detection Possibility of Botnet Communications," in Communications, Computers and Signal Processing (PACRIM), 2015 IEEE Pacific Rim Conference on, pp. 216-221, 24-26 Aug. 2015. doi: 10.1109/PACRIM.2015.7334837
Abstract: Recent reports on Internet security have indicated that the DNS (Domain Name System) protocol is being used for botnet communication in various botnets; in particular, botnet communication based on the DNS TXT record type has been observed as a new technique in some botnet-based cyber attacks. The DNS protocol, one of the most fundamental Internet protocols, is used for basic name resolution as well as many Internet services, so it is not possible to simply block out all DNS traffic. To block out only malicious DNS TXT record based botnet communications, it would be necessary to distinguish them from legitimate DNS traffic involving DNS TXT records. However, the DNS TXT record is also used in many legitimate ways, since this type is allowed to include any plain text up to a fairly long length. In this paper, we mainly focus on the usage of the DNS TXT record and explain our analysis using about 5.5 million real DNS TXT record queries obtained over 3 months in our campus network. Based on the analysis findings, we discuss a new method to detect botnet communication. Our analysis results show that 330 unique destination IP addresses (covering approximately 22.1% of the unknown usages of DNS TXT record queries) may have been involved in malicious communications, and this proportion is a reasonable basis for network administrators to perform detailed manual checking in many organizations.
Keywords: Internet; invasive software; DNS TXT record type; Internet protocols; Internet security; botnet-based cyber attacks; domain name system protocol; malicious DNS TXT record based botnet communications; via-resolver DNS TXT queries; Computers; Electronic mail; IP networks; Internet; Postal services; Protocols; Servers; Botnet; C&C; DNS; TXT record; botnet communication; detection method (ID#: 15-8243)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334837&isnumber=7334793
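Ichise et al. distinguish known legitimate TXT usages from unknown ones. A hypothetical classifier in that spirit (the prefix list and the base64-blob heuristic are invented for illustration and are not taken from the paper) might look like:

```python
import re

# Common, well-known legitimate TXT usages (SPF, DKIM, site verification).
LEGITIMATE_PREFIXES = ("v=spf1", "v=dkim1", "google-site-verification=")

def classify_txt(record: str) -> str:
    """Triage one TXT record's content for manual review."""
    text = record.strip().lower()
    if text.startswith(LEGITIMATE_PREFIXES):
        return "known-legitimate"
    # Long opaque base64-like blobs are typical of covert channels.
    if re.fullmatch(r"[a-z0-9+/=]{40,}", text):
        return "suspicious"
    return "unknown"
```

A real deployment would extend the prefix list from observed campus traffic, as the paper's measurement study does for its own categories.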
Graham, M.; Winckles, A.; Sanchez-Velazquez, E., "Botnet Detection Within Cloud Service Provider Networks Using Flow Protocols," in Industrial Informatics (INDIN), 2015 IEEE 13th International Conference on, pp. 1614-1619, 22-24 July 2015. doi: 10.1109/INDIN.2015.7281975
Abstract: Botnets continue to remain one of the most destructive threats to cyber security. This work aims to detect botnet traffic within an abstracted virtualised infrastructure, such as is found within cloud service providers. To achieve this an environment is created based on Xen hypervisor, using Open vSwitch to export NetFlow Version 9. This paper provides experimental evidence for how flow export can capture network traffic parameters for identifying the presence of a command and control botnet within a virtualised infrastructure. The conceptual framework described within this paper presents a non-intrusive detection element for a botnet protection system for cloud service providers. Such a system could protect the type of virtualised environments that will form the building blocks for the Internet of Things.
Keywords: Internet of Things; cloud computing; invasive software; protocols; telecommunication traffic; Internet of Things; NetFlow Version 9; Open vSwitch; Xen hypervisor; abstracted virtualised infrastructure; botnet detection; botnet protection system; cloud service provider networks; command and control botnet; conceptual framework; cyber security; flow protocols; network traffic parameters; non-intrusive detection element; 5G mobile communication; Bismuth; botnet detection; cloud service provider; netflow; virtualised infrastructure (ID#: 15-8244)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281975&isnumber=7281697
Kalaivani, K.; Suguna, C., "Efficient Botnet Detection Based on Reputation Model and Content Auditing in P2P Networks," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-4, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282358
Abstract: A botnet is a collection of computers connected through the Internet that can send malicious content such as spam and viruses to other computers without the knowledge of their owners. In a peer-to-peer (P2P) architecture it is very difficult to identify botnets because there is no centralized control. In this paper, we use a security principle called data provenance integrity, which can verify the origin of the data; for this, the certificates of the peers can be exchanged. A reputation based trust model is used for identifying the authenticated peer during file transmission. Here the reputation value of each peer is calculated and a hash table is used for efficient file searching. The proposed system can also verify the trustworthiness of transmitted data by using content auditing, in which the data is checked against a trained data set to identify malicious content.
Keywords: authorisation; computer network security; data integrity; information retrieval; invasive software; peer-to-peer computing; trusted computing; P2P networks; authenticated peer; botnet detection; content auditing; data provenance integrity; file searching; file transmission; hash table; malicious content; peer-to-peer architecture; reputation based trust model; reputation model; reputation value; security principle; spam; transmitted data trustworthiness; virus; Computational modeling; Cryptography; Measurement; Peer-to-peer computing; Privacy; Superluminescent diodes; Data provenance integrity; content auditing; reputation value; trained data set (ID#: 15-8245)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282358&isnumber=7282219
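The abstract sketches a reputation-based trust model in which each peer's reputation value is updated and the most trusted peer is chosen during file transmission. A toy illustration of such a scoring step follows; the update rule and all names are assumptions for illustration, not the paper's formulas:

```python
from collections import defaultdict

class ReputationTable:
    """Tiny per-peer trust score, updated after each transaction."""

    def __init__(self):
        self.scores = defaultdict(lambda: 0.5)  # neutral starting trust

    def update(self, peer, success, weight=0.1):
        # Move the score toward 1 on a good transfer, toward 0 on a bad one.
        s = self.scores[peer]
        self.scores[peer] = s + weight * ((1.0 if success else 0.0) - s)

    def best_peer(self, candidates):
        """Pick the most trusted candidate holding the requested file."""
        return max(candidates, key=lambda p: self.scores[p])
```

An exponential-moving-average update like this bounds scores in [0, 1] and lets reputations recover or decay gradually rather than flipping on a single event.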
Okayasu, S.; Sasaki, R., "Proposal and Evaluation of Methods Using the Quantification Theory and Machine Learning for Detecting C&C Server Used in a Botnet," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, pp. 24-29, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.165
Abstract: In recent years, the damage caused by botnets has increased and become a big problem. To solve this problem, we proposed a method to detect unjust C&C servers by using Hayashi's quantification theory class II. This method is able to detect unjust C&C servers, even if they are not included in a blacklist. However, it was predicted that the detection rate for this method decreases with passing time. Therefore, we have been continuing the investigation of the detection rate and adjusting the optimal detection method in different time periods. This paper deals with the results of an investigation for 2014. In addition, we newly introduce a method using a support vector machine (SVM) for comparison with quantification theory class II. We found that the detection rates by using quantification theory class II and those by the SVM are both very good, with very little difference in accuracy between them.
Keywords: invasive software; learning (artificial intelligence); support vector machines; C&C server; Hayashi quantification theory class II; SVM; botnet; detection rate; machine learning; optimal detection method; support vector machine; Accuracy; Data models; Electronic mail; Malware; Mathematical model; Servers; Support vector machines; Botnet; C&C Server; DNS; Hayashi's quantification methods; SVM (ID#: 15-8246)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273318&isnumber=7273299
Stevanovic, M.; Pedersen, J.M., "An Analysis of Network Traffic Classification for Botnet Detection," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-8, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7361120
Abstract: Botnets represent one of the most serious threats to the Internet security today. This paper explores how network traffic classification can be used for accurate and efficient identification of botnet network activity at local and enterprise networks. The paper examines the effectiveness of detecting botnet network traffic using three methods that target protocols widely considered as the main carriers of botnet Command and Control (C&C) and attack traffic, i.e. TCP, UDP and DNS. We propose three traffic classification methods based on capable Random Forests classifier. The proposed methods have been evaluated through the series of experiments using traffic traces originating from 40 different bot samples and diverse non-malicious applications. The evaluation indicates accurate and time-efficient classification of botnet traffic for all three protocols. The future work will be devoted to the optimization of traffic analysis and the correlation of findings from the three analysis methods in order to identify compromised hosts within the network.
Keywords: Botnet; Botnet Detection; Features Selection; MLAs; Random Forests; Traffic Analysis; Traffic Classification (ID#: 15-8247)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7361120&isnumber=7166109
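The abstract does not enumerate the flow features fed to the Random Forests classifier. As a hedged illustration, per-flow feature vectors of the kind commonly used in traffic classification can be derived from packet timestamps and sizes alone (feature names here are invented, not the paper's feature set):

```python
from statistics import mean

def flow_features(packets):
    """Build a feature vector for one flow.

    packets: list of (timestamp_seconds, size_bytes) tuples, in order.
    Returns a dict suitable as one row of a classifier's training matrix.
    """
    times = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    return {
        "duration": times[-1] - times[0],
        "pkt_count": len(packets),
        "mean_size": mean(sizes),
        "bytes_total": sum(sizes),
    }
```

Vectors like these, computed per TCP, UDP, or DNS flow, would then be fed to an off-the-shelf Random Forests implementation for training and classification.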
Al-Duwairi, Basheer; Al-Hammouri, Ahmad; Aldwairi, Monther; Paxson, Vern, "GFlux: A Google-based System for Fast Flux Detection," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 755-756, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346920
Abstract: Fast Flux Networks (FFNs) are a technique used by botnets to rapidly change the IP addresses associated with botnet infrastructure and spam websites, adopting mechanisms similar to those used in Content Distribution Networks (CDNs) and Round Robin DNS Systems (RRDNS). In this work we present a novel approach, called GFlux, for fast flux detection. GFlux analyzes result pages returned by the Google search engine for queries consisting of IP addresses associated with suspect domain names. We base the GFlux approach on the observation that the number of hits returned by Google for queries associated with FFN domains should generally be much lower than for those associated with legitimate domains, particularly those used by CDNs. Our preliminary results show that the number of hits provides a key feature that can aid in accurately classifying domain names as either fast flux or non-fast-flux domains.
Keywords: Electronic mail; Feature extraction; Google; IP networks; Internet; Search engines; Security (ID#: 15-8248)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346920&isnumber=7346791
Pengkui Luo; Torres, Ruben; Zhi-Li Zhang; Saha, Sabyasachi; Sung-Ju Lee; Nucci, Antonio; Mellia, Marco, "Leveraging Client-Side DNS Failure Patterns to Identify Malicious Behaviors," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 406-414, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346852
Abstract: DNS has been increasingly abused by adversaries for cyber-attacks. Recent research has leveraged DNS failures (i.e. DNS queries that result in a Non-Existent-Domain response from the server) to identify malware activities, especially domain-flux botnets that generate many random domains as a rendezvous technique for command-&-control. Using ISP network traces, we conduct a systematic analysis of DNS failure characteristics, with the goal of uncovering how attackers exploit DNS for malicious activities. In addition to DNS failures generated by domain-flux bots, we discover many diverse and stealthy failure patterns that have received little attention. Based on these findings, we present a framework that detects diverse clusters of suspicious domain names that cause DNS failures, by considering multiple types of syntactic as well as temporal patterns. Our evolutionary learning framework evaluates the clusters produced over time to eliminate spurious cases while retaining sustaining (i.e., highly suspicious) clusters. One of the advantages of our framework is in analyzing DNS failures on per-client basis and not hinging on the existence of multiple clients infected by the same malware. Our evaluation on a large ISP network trace shows that our framework detects at least 97% of the clients with suspicious DNS behaviors, with over 81% precision.
Keywords: Clustering algorithms; Conferences; Electronic mail; Feature extraction; Malware; Servers; Syntactics (ID#: 15-8249)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346852&isnumber=7346791
Ghafir, I.; Prenosil, V., "DNS Traffic Analysis for Malicious Domains Detection," in Signal Processing and Integrated Networks (SPIN), 2015 2nd International Conference on, pp. 613-618, 19-20 Feb. 2015. doi: 10.1109/SPIN.2015.7095337
Abstract: The web has become the medium of choice for people to search for information, conduct business, and enjoy entertainment. At the same time, the web has also become the primary platform used by miscreants to attack users. For example, drive-by-download attacks, which may be launched through malicious domains, are a popular choice among bot herders to grow their botnets. In this paper we present our methodology for detecting any connection to a malicious domain. Our detection method is based on a blacklist of malicious domains. We process the network traffic, particularly DNS traffic. We analyze all DNS requests and match each query against the blacklist. The blacklist of malicious domains is updated automatically and the detection runs in real time. We applied our methodology to a packet capture (pcap) file which contains traffic to malicious domains and proved that our methodology can successfully detect the connections to malicious domains. We also applied our methodology to campus live traffic and showed that it can detect malicious domain connections in real time.
Keywords: Internet; invasive software; query processing; telecommunication traffic; DNS traffic analysis; Web; bot herders; campus live traffic; drive-by-download attacks; malicious domain blacklist; malicious domain connections; malicious domain detection; network traffic; packet capture file; pcap file; Computers; IP networks; Malware; Monitoring; Real-time systems; Web sites; Cyber attacks; botnet; intrusion detection system; malicious domain; malware (ID#: 15-8250)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095337&isnumber=7095159
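Ghafir and Prenosil match each DNS query against an automatically updated blacklist of malicious domains. A minimal sketch of such a lookup (the blacklist contents and function name are invented; the paper's system also updates its list automatically, which is omitted here) could be:

```python
# Illustrative blacklist; a real one would be fetched and refreshed.
BLACKLIST = {"evil.example", "malware-drop.test"}

def is_malicious(qname: str) -> bool:
    """True if the queried name, or any parent domain, is blacklisted."""
    labels = qname.lower().rstrip(".").split(".")
    # Check the full name and every parent suffix, so a blacklist entry
    # for evil.example also catches cdn.evil.example.
    return any(".".join(labels[i:]) in BLACKLIST for i in range(len(labels)))
```

Checking parent suffixes keeps the lookup O(number of labels) per query, which matters when filtering live DNS traffic.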
Compagno, Alberto; Conti, Mauro; Lain, Daniele; Lovisotto, Giulio; Mancini, Luigi Vincenzo, "Boten ELISA: A novel approach for botnet C&C in Online Social Networks," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 74-82, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346813
Abstract: The Command and Control (C&C) channel of modern botnets is migrating from traditional centralized solutions (such as the ones based on Internet Relay Chat and Hyper Text Transfer Protocol), towards new decentralized approaches. As an example, in order to conceal their traffic and avoid blacklisting mechanisms, recent C&C channels use peer-to-peer networks or abuse popular Online Social Networks (OSNs). A key reason for this paradigm shift is that current detection systems become quite effective in detecting centralized C&C.
Keywords: Command and control systems; Conferences; Facebook; Malware; Proposals; Protocols (ID#: 15-8251)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346813&isnumber=7346791
![]() |
CAPTCHAs 2015 |
CAPTCHA (an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart) technology has become a standard security tool. In the research presented here, some novel uses are presented, including the use of CAPTCHAs as graphical passwords, motion-based CAPTCHAs, and defeating a CAPTCHA using a gaming technique. These works were presented or published in 2015.
Salas Avila, W.G.; Osorio Angarita, M.A.; Moreno Canadas, A., "Matrix Problems to Generate Mosaic-based CAPTCHAs," in Imaging for Crime Prevention and Detection (ICDP-15), 6th International Conference on, pp. 1-5, 15-17 July 2015. doi: 10.1049/ic.2015.0114
Abstract: Matrix problems and in particular matrix representations of partially ordered sets (posets) are used to formally define and generate emerging and multistable images. Images induced by such representations are mosaics which can be used to design different types of Human Interaction Proofs.
Keywords: image representation; image segmentation; matrix algebra; security of data; human interaction proofs; matrix representations; mosaic-based CAPTCHA; multistable images; partially ordered sets; Authentication; CAPTCHA; emerging image; gestalt; matrix representation; module; multistable image; poset representation (ID#: 15-8252)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7318002&isnumber=7244054
Ramaiah, C.; Plamondon, R.; Govindaraju, V., "A Sigma-Lognormal Model for Character Level CAPTCHA Generation," in Document Analysis and Recognition (ICDAR), 2015 13th International Conference on, pp. 966-970, 23-26 Aug. 2015. doi: 10.1109/ICDAR.2015.7333905
Abstract: Word level handwritten CAPTCHA generation involves picking a handwritten word from a pre-existing database and cumulatively applying distortions and noise models. In principle, the addition of distortion and noise makes the CAPTCHA robust to automated attacks. However, the primary drawback of the word level CAPTCHA generation is that it limits us to words that already exist in our data set. If the primary building block of this approach was a character, we could move away from a lexicon based CAPTCHA generation and generate CAPTCHAs which are resistant to a dictionary based attack. In this paper, we propose a Sigma-Lognormal based approach to generate character level CAPTCHAs. Next, we increase the robustness of the model by applying ideas from accents in handwriting to our problem. Finally, we demonstrate the efficacy of our approach by simulating an attack by an automated word recognizer.
Keywords: handwritten character recognition; security of data; automated word recognizer; character level CAPTCHA generation; completely automated public turing test-to-tell computer-and-human apart; dictionary based attack; lexicon-based CAPTCHA generation; sigma-lognormal model; sigma-lognormal-based approach; word level handwritten CAPTCHA generation; CAPTCHAs; Computers; Handwriting recognition; Image recognition; Robustness (ID#: 15-8253)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7333905&isnumber=7333702
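In Plamondon's Sigma-Lognormal model, which this paper applies to character-level CAPTCHA generation, the pen-tip speed is modeled as a sum of lognormal stroke profiles. A small illustrative implementation of that standard speed equation follows; the parameter values in the usage below are arbitrary placeholders, not fitted values from the paper:

```python
import math

def lognormal_speed(t, D, t0, mu, sigma):
    """Speed contribution of one stroke with amplitude D starting at t0.

    Implements D / (sigma * sqrt(2*pi) * (t - t0))
               * exp(-(ln(t - t0) - mu)^2 / (2 * sigma^2)).
    """
    if t <= t0:
        return 0.0
    x = math.log(t - t0)
    return (D / (sigma * math.sqrt(2 * math.pi) * (t - t0))) * \
        math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def pen_speed(t, strokes):
    """Total pen-tip speed: sum over strokes of (D, t0, mu, sigma)."""
    return sum(lognormal_speed(t, *s) for s in strokes)
```

Sampling `pen_speed` over time and integrating the per-stroke direction yields synthetic handwriting trajectories, which is what makes the model usable as a character generator.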
Singhal, Sarthak; Sharma, Ashish; Garg, Shivam; Jatana, Nishtha, "Vulnerabilities of CAPTCHA used by IRCTC and an Alternative Approach of Split Motion Text (SMT) CAPTCHA," in Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, pp. 1-6, 2-4 Sept. 2015. doi: 10.1109/ICRITO.2015.7359287
Abstract: Online web services are commonly protected through CAPTCHAs, which are regarded as a class of Human-Interactive Proof (HIP). Numerous CAPTCHA schemes have been proposed in the past to prevent spam and brute-force attacks by automated scripts, but many such CAPTCHAs have been broken by decoders. Our paper breaks one such CAPTCHA system, used by one of India's most visited e-commerce websites, IRCTC.co.in, using modern OCRs, and lists its vulnerabilities as well. We also propose an alternative scheme called Split Motion Text CAPTCHA (SMT-CAPTCHA) which capitalizes on the gestalt perception of vision to read broken animated text. SMT-CAPTCHA works against the segmentation part of decoding by splitting and animating each character randomly, making it difficult for decoders to segment and extract text from the CAPTCHA. In our experiments, modern OCRs and decoding methodologies failed to break our SMT-CAPTCHA system, whereas the average success rate of automatically decoding IRCTC's CAPTCHA is significantly high.
Keywords: Animation; Artificial intelligence; CAPTCHAs; Computers; Decoding; Noise measurement; Optical character recognition software; CAPTCHA; Human Interactive Proof; IRCTC; OCR; Segmentation (ID#: 15-8254)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359287&isnumber=7359191
Beheshti, S.M.R.S.; Liatsis, P., "How Humans Can Help Computers to Solve an Artificial Problem?," in Systems, Signals and Image Processing (IWSSIP), 2015 International Conference on, pp. 291-294, 10-12 Sept. 2015. doi: 10.1109/IWSSIP.2015.7314233
Abstract: The idea of using CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) was to protect websites from attacks initiated by automated computer scripts or computer robots (bots). One of the most important issues with CAPTCHAs is that the test has to be designed in a way that makes it too hard or almost impossible for computer programs to break; at the same time, it should be fairly easy for human users to solve. reCAPTCHA is one of the most popular CAPTCHA models and is used by the majority of well-known websites, such as Yahoo!, Google, and Facebook. reCAPTCHA is used to help digitize old textbooks and notes. In this paper we investigate the algorithm behind reCAPTCHA in more depth and show how a simple script-based computer program can make use of real human users to solve an artificial problem for machines. We also review some of the most important security aspects of the reCAPTCHA model.
Keywords: Web sites; artificial intelligence; data protection; optical character recognition; problem solving; security of data; CAPTCHA; OCR; Web site protection; artificial problem solving; completely automated public Turing test to tell computers and humans apart; optical character recognition; script-based computer program; CAPTCHAs; Character recognition; Computers; Image recognition; Optical character recognition software; Text recognition; CAPTCHA; Game with a Purpose (GWAP); HIPs; Human Interactive Proofs; reCAPTCHA (ID#: 15-8255)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314233&isnumber=7313917
Srihari, V.; Kalpana, P.; Anitha, R., "Spam over IP Telephony Prevention using Dendritic Cell Algorithm," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-7, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219895
Abstract: Spam over IP Telephony (SPIT) is an emerging threat in the telecom era of Voice over IP Networks (VoIP). Though evolved from email spam, SPIT is more obstructive and intrusive in nature as they require response from the callee. Contemplating the behavior of SPIT, a provider based system is contributed with the proposed mechanism installed on the SIP proxy server. In this work, a biologically inspired Dendritic Cell Algorithm (DCA) is proposed to prevent the spam callers from penetrating the network. The algorithm uses Dendritic Cells (DCs) to collect signals from multiple inputs and perform data fusion with them. To study the behavior of spam calls and the impact of proposed mechanism, experimental testbed is formed in the research lab and the performance evaluation is accomplished. Results obtained are convincing and hence validating the performance and accuracy of the system.
Keywords: Internet telephony; VoIP; dendritic cell algorithm; spam callers; spam over IP telephony; voice over IP networks; CAPTCHAs; Servers; Telephony; Unsolicited electronic mail; Dendritic Cell Algorithm (DCA); Spam over IP Telephony (SPIT); Voice over IP (VoIP) (ID#: 15-8256)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219895&isnumber=7219823
Nanglae, N.; Bhattarakosol, P., "Attitudes Towards Text-based CAPTCHA from Developing Countries," in Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, pp. 1-4, 24-27 June 2015. doi: 10.1109/ECTICon.2015.7207116
Abstract: CAPTCHA, especially text-based CAPTCHA, is at present the most widely used mechanism for security in the online environment. It is used to distinguish automated computer programs from real human users. This technology was introduced by IBM, a very high-end company in a very high-end country compared with the countries in this study. This research was performed by administering a questionnaire to samples in three countries and found that the nationality of users has an impact on the use of text-based CAPTCHA. The attitudes of users in different countries are also dissimilar, according to education background and economic ranking.
Keywords: security of data; text analysis; user interfaces; IBM; automatic program computer identification; online environment; real human users; security; text-based CAPTCHA; Authentication; CAPTCHAs; Computers; Education; Internet; Usability; Biometric information; CAPTCHA; Text-based CAPTCHA (ID#: 15-8257)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207116&isnumber=7206924
Saxena, M.; Khan, P.M., "Spamizer: An Approach to Handle Web Form Spam," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 1095-1100, 11-13 March 2015. Doi: (not provided)
Abstract: Spam emails cause huge losses to businesses on a regular basis. Spam filtering is an automated technique to identify SPAM and HAM (non-spam). Web spam filters can be categorized as content-based spam filters and list-based spam filters. In this research work, we have studied the spam statistics of the famous spambot `Srizbi'. We have also discussed different approaches to spam filtering and finally proposed a new algorithm, based on the behavioral approaches of spammers, intended to restrict the budding economic growth of spam-generating companies. We have used a hidden Honeypot and a Honeytrap module to minimize the spam generated from contact and feedback forms on public and social networking CMS websites.
Keywords: Internet; e-mail filters; information filters; invasive software; social networking (online);unsolicited e-mail; HAM; Honeypot; Honeytrap module; Spambot; Spamizer; Srizbi; Web form; Web spam filter; content based spam filter; list based spam filter; nonspam; social networking CMS Web site; spam email; spam filtering; spam generating company; spam statistics; spammer; CAPTCHAs; IP networks; Information filters; Servers; Unsolicited electronic mail; HoneyTrap; Honeypots; Spam bots; Spamizer; Srizbi; Web Form Comment Spam (ID#: 15-8258)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100417&isnumber=7100186
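The hidden-honeypot idea described in the abstract above can be sketched in a few lines: a form field that is invisible to human users but gets filled in by naive bots that populate every input they find. This is an illustrative reconstruction, not the authors' code; the field and function names are assumptions.

```python
# Hypothetical honeypot check for a web form submission. The hidden field
# "website_url" is never shown to humans (e.g., hidden via CSS), so a
# non-empty value marks the submission as bot-generated spam.

def is_spam_submission(form_data, honeypot_field="website_url"):
    """Flag a submission as spam if the hidden honeypot field was filled."""
    return bool(form_data.get(honeypot_field, "").strip())

# A human user never sees the hidden field, so it stays empty.
human_post = {"name": "Alice", "comment": "Great article!", "website_url": ""}
# A bot filling every input it finds also fills the trap field.
bot_post = {"name": "x", "comment": "buy now", "website_url": "http://spam.example"}
```

A real deployment would combine this with rate limiting and content filters, since sophisticated bots can learn to skip hidden fields.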
Guerar, M.; Merlo, A.; Benmohammed, M.; Migliardi, M.; Messabih, B., "A Completely Automatic Public Physical Test to Tell Computers and Humans Apart: A Way to Enhance Authentication Schemes in Mobile Devices," in High Performance Computing & Simulation (HPCS), 2015 International Conference on, pp. 203-210, 20-24 July 2015. doi: 10.1109/HPCSim.2015.7237041
Abstract: Nowadays, data security is one of the most important, if not the most important, aspects of mobile applications, the web, and information systems in general. On one hand, this is a result of the vital role of mobile and web applications in our daily life. On the other hand, the huge and accelerating evolution of computers and software has led to ever more sophisticated threats and attacks that jeopardize users' credentials and privacy. Today's computers are capable of automatically performing authentication attempts by replaying recorded data. This fact has brought the challenge of access control to a whole new level and has urged researchers to develop new mechanisms to prevent software from performing automatic authentication attempts. In this research perspective, the Completely Automatic Public Turing test to tell Computers and Humans Apart (CAPTCHA) has been proposed and widely adopted. However, this mechanism consists of a cognitive intelligence test to reinforce traditional authentication against computerized attempts; thus it puts additional strain on the legitimate user too and, quite often, significantly slows the authentication process. In this paper, we introduce a Completely Automatic Public Physical test to tell Computers and Humans Apart (CAPPCHA) as a way to enhance the PIN authentication scheme for mobile devices. This test does not introduce any additional cognitive strain on the user, as it leverages only his physical nature. We prove that the scheme is even more secure than CAPTCHA, and our experiments show that it is fast and easy for users.
Keywords: Turing machines; authorisation; cognition; data privacy; mobile computing; CAPPCHA; CAPTCHA; Completely Automatic Public Physical test to tell Computers and Humans Apart; PIN authentication scheme; Web systems; authentication schemes; cognitive intelligence test; completely automatic public Turing test to tell computers and humans apart; completely automatic public physical test; information systems; mobile applications; mobile devices; user credentials; user privacy; Authentication; CAPTCHAs; Computers; Sensors; Smart phones (ID#: 15-8259)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237041&isnumber=7237005
Haque, A.; Singh, S., "Anti-Scraping Application Development," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 869-874, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275720
Abstract: Scraping is the activity of retrieving data from a website, often in an automated manner and without the permission of the owner. This data can then be used by the scraper in whatever way he desires. Although the activity is deemed illegal, its legal status has not stopped people from doing it. Anti-scraping solutions are offered as rather expensive services which, although effective, are also slow. This paper aims to list challenges and proposes mitigation techniques for developing a Software as a Product (SaaP) anti-scraping application for small to medium scale websites.
Keywords: Web sites; information retrieval; SaaP; anti-scraping application development; data retrieval; mitigation techniques; small to medium scale Web sites; software as a product; CAPTCHAs; Databases; IP networks; Loading; Security; Servers; Software (ID#: 15-8260)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275720&isnumber=7275573
Tingre, S.; Mukhopadhyay, D., "An Approach for Segmentation of Characters in CAPTCHA," in Computing, Communication & Automation (ICCCA), 2015 International Conference on, pp. 1054-1059, 15-16 May 2015. doi: 10.1109/CCAA.2015.7148562
Abstract: In the area of image processing and Optical Character Recognition, segmentation is one of the steps that plays an important role in dealing with offline and online text images. Character segmentation means breaking an image of a word into a sequence of characters. A broad application of segmentation lies in segmenting the characters in a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), a test that authenticated users have to pass to gain access to their respective mail accounts. Malicious programs like bots attack the accounts and are a threat to data integrity, privacy, and confidentiality; CAPTCHA was introduced to avoid this. Segmentation of characters acts as the basis for analyzing the strength of a CAPTCHA: the stronger the CAPTCHA, the more difficult it is to break. The proposed work is a CAPTCHA segmenter. It segments the CAPTCHA image with the help of a CAPTCHA Trainer, a user-created set of pre-processing operations on images which can simply be re-used for segmenting similar types of images, thus saving time. The operations that can be performed on an image are gray-scale conversion, dot removal, line removal, slant correction, color extraction, and thinning.
Keywords: data integrity; data privacy; image segmentation; image sequences; message authentication; optical character recognition; CAPTCHA; Completely Automated Public Turing test to tell Computers and Humans Apart; character segmentation; character sequence; data confidentiality; data integrity; data privacy; image processing; optical character recognition; user authentication; Accuracy; Artificial neural networks; CAPTCHAs; Character recognition; Feature extraction; Image color analysis; Image segmentation; CAPTCHA; Gray Scale; Image Processing; OCR; Segmentation; Thinning; Threshold (ID#: 15-8261)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148562&isnumber=7148334
Ishfaq, H.; Iqbal, W.; Bin Shahid, W., "Attaining Accessibility and Personalization with Socio-Captcha (SCAP)," in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, pp. 307-311, 13-17 Jan. 2015. doi: 10.1109/IBCAST.2015.7058521
Abstract: Many websites make use of motion, videos, flash, gif animations, and static images to implement Captcha in order to ensure that the entity trying to connect to their website(s) or system is not a bot but a human being. A wide variety of Captcha types and solution methods are available, and a few are described in section II. All of these Captcha systems can distinguish humans from bots but lack personalization attributes whilst browsing the internet or using any networking application. This paper suggests a novel scheme, Socio-Captcha (SCAP), for generating Captcha that attains accessibility and personalization through the user's social media profile attributes. The Socio-Captcha scheme relies on a Socio-Captcha application, which is discussed in this paper.
Keywords: security of data; social networking (online); Internet; SCAP; Web sites; personalization attribute; social media profile; socio-captcha scheme; CAPTCHAs; Clothing; Electronic publishing; Facebook; Frequency modulation; Information services; Lead; accessibility; bot; captcha; human; personalization; social media; web (ID#: 15-8262)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058521&isnumber=7058466
Ranjan, A.K.; Kumar, B., "Directional Captcha: A Novel Approach to Text Based CAPTCHA," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp.1278-1283, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275789
Abstract: In this paper, we propose a new captcha based on digits and symbols. It builds on the fact that it is difficult for a machine to interpret symbols drawn from two different datasets and perform the corresponding tasks. We also identify the main anti-recognition and anti-segmentation features from previous work and implement them in our proposed captcha. We present its pseudocode and report a security analysis and usability survey to support our claims.
Keywords: text analysis; anti-recognition features; anti-segmentation features; directional Captcha; security analysis; text based Captcha; CAPTCHAs; Color; Image color analysis; Optical character recognition software; Security; Time factors; Usability; CAPTCHA; anti- recognition; anti- segmentation; pseudo code; security analysis; usability (ID#: 15-8263)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275789&isnumber=7275573
Gashler, M.S.; Kindle, Z.; Smith, M.R., "A Minimal Architecture for General Cognition," in Neural Networks (IJCNN), 2015 International Joint Conference on, pp. 1-8, 12-17 July 2015. doi: 10.1109/IJCNN.2015.7280749
Abstract: A minimalistic cognitive architecture called MANIC is presented. The MANIC architecture requires only three function approximating models, and one state machine. Even with so few major components, it is theoretically sufficient to achieve functional equivalence with all other cognitive architectures, and can be practically trained. Instead of seeking to transfer architectural inspiration from biology into artificial intelligence, MANIC seeks to minimize novelty and follow the most well-established constructs that have evolved within various subfields of data science. From this perspective, MANIC offers an alternate approach to a long-standing objective of artificial intelligence. This paper provides a theoretical analysis of the MANIC architecture.
Keywords: cognition; software agents; MANIC architecture; artificial intelligence; function approximating model; functional equivalence; minimalistic cognitive architecture; state machine;Accuracy;Assembly;Automobiles;CAPTCHAs;Decoding;Robots;Service-oriented architecture (ID#: 15-8264)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280749&isnumber=7280295
Singh, K.J.; De, T., "DDOS Attack Detection and Mitigation Technique Based on Http Count and Verification Using CAPTCHA," in Computational Intelligence and Networks (CINE), 2015 International Conference on, pp. 196-197, 12-13 Jan. 2015. doi: 10.1109/CINE.2015.47
Abstract: With the rapid development of the internet, the number of people online has increased tremendously. Nowadays, however, we find not only growing positive use of the internet but also negative use: its misuse and abuse are growing at an alarming rate. There are many cases of viruses and worms infecting systems that have software vulnerabilities, and these infected systems can become clients for bot herders and aid in launching DDoS attacks against a target server. In this paper we introduce IP blacklisting, which blocks all blacklisted IP addresses; an HTTP count filter, which enables us to distinguish normal from suspected IP addresses; and the CAPTCHA technique, which checks whether the suspected IP addresses are controlled by a human or a botnet.
Keywords: Internet; client-server systems; computer network security; computer viruses; transport protocols; CAPTCHA; DDOS attack detection; DDOS attack mitigation technique; HTTP count filter; HTTP verification; IP address; IP blacklisting; Internet; botnet; software vulnerability; target server; virus; worms; CAPTCHAs; Computer crime; IP networks; Internet; Radiation detectors; Servers; bot; botnets; captcha; filter; http; mitigation (ID#: 15-8265)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053830&isnumber=7053782
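The detection pipeline outlined in the abstract above (blacklist, HTTP count filter, then a CAPTCHA challenge for suspects) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the threshold value and the IP addresses are assumptions chosen for the example.

```python
# Classify source IPs from an HTTP request log: blacklisted IPs are dropped,
# high-rate IPs are marked "suspected" (and would be sent a CAPTCHA
# challenge), everything else passes as "normal".
from collections import Counter

BLACKLIST = {"203.0.113.9"}      # illustrative blocked address
SUSPECT_THRESHOLD = 100          # requests per window; an assumed value

def classify_sources(request_log):
    """Map each source IP in the log to 'blocked', 'suspected', or 'normal'."""
    counts = Counter(request_log)
    verdicts = {}
    for ip, n in counts.items():
        if ip in BLACKLIST:
            verdicts[ip] = "blocked"        # drop immediately
        elif n > SUSPECT_THRESHOLD:
            verdicts[ip] = "suspected"      # issue CAPTCHA challenge
        else:
            verdicts[ip] = "normal"
    return verdicts
```

In a live system the counts would be maintained per time window and the CAPTCHA result would feed back into the blacklist.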
Fujita, Masahiro; Yamada, Mako; Arimura, Shiori; Ikeya, Yuki; Nishigaki, Masakatsu, "An Attempt to Memorize Strong Passwords while Playing Games," in Network-Based Information Systems (NBiS), 2015 18th International Conference on, pp. 264-268, 2-4 Sept. 2015. doi: 10.1109/NBiS.2015.41
Abstract: There could be two approaches for combining security with entertainment, (i) an entertainment factor is embedded in security technology and (ii) a security factor is embedded in entertainment technology. Since all previous studies were focused on approach (i), we examined approach (ii). As the first attempt, this paper tried to embed a password enhancement factor into games. We designed a password enhancement scheme which enables users to naturally memorize strong passwords while playing games. We also discuss the effectiveness of our scheme.
Keywords: Authentication; CAPTCHAs; Entertainment industry; Games; Information systems; Libraries; Entertainment; Games; Password Enhancement; Security Awareness (ID#: 15-8266)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350630&isnumber=7350553
Bindu, C.S., "Click Based Graphical CAPTCHA to Thwart Spyware Attack," in Advance Computing Conference (IACC), 2015 IEEE International, pp. 324-328, 12-13 June 2015. doi: 10.1109/IADCC.2015.7154723
Abstract: Spyware is software that secretly gathers information about a computer's use and conveys that information to a third party. This paper proposes a click-based graphical CAPTCHA to overcome spyware attacks. With traditional text-based CAPTCHAs, the user enters a distorted string to solve the CAPTCHA, and that input is captured by key loggers, where spyware can decode it easily. To overcome this, the click-based graphical CAPTCHA uses a different form of verification: the user clicks on a sequence of images to form the CAPTCHA, and that sequence is stored as pixel positions in a random predefined order. This paper also analyzes the proposed scheme in terms of usability, security, and performance.
Keywords: image sequences; invasive software; click based graphical CAPTCHA; image sequence; key loggers; spyware attack; text-based CAPTCHA; Barium; CAPTCHAs; Computers; Conferences; Spyware; Usability; CAPTCHA; Spyware; Usability (ID#: 15-8267)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154723&isnumber=7154658
Mohammad Reza Saadat Beheshti, S.; Liatsis, P., "VICAP: Using the Mechanisms of Trans-Saccadic Memory to Distinguish between Humans and Machines," in Systems, Signals and Image Processing (IWSSIP), 2015 International Conference on, pp. 295-298, 10-12 Sept. 2015. doi: 10.1109/IWSSIP.2015.7314234
Abstract: Demand for online services is growing rapidly, with more users relying on services such as mobile banking, email accounts, and online socializing for their day-to-day needs. Accordingly, the number of online threats and automated computer attacks (known as bots) that try to abuse these services is increasing as well. The CAPTCHA challenge was introduced to distinguish real human users from automated computer bots. In this paper, we propose a novel human-machine separation technique based on the human visual system's ability to remember and superimpose all previously seen frames, also known as persistence of vision. Since this ability is unique to the human visual system, the technique is believed to be resistant to different computer recognition techniques.
Keywords: Internet; invasive software; optical character recognition; CAPTCHA challenge; VICAP; automated computer attacks; automated computer bots; computer recognition techniques; human-machine separation technique; online services; online threads ;optical character recognition; persistence of vision; trans-saccadic memory; CAPTCHAs; Computational modeling; Computers; Noise measurement; Optical character recognition software; Visual systems; Visualization; CAPTCHA; Persistence of Vision; Temporal Integration; Trans-Saccadic Memory; Visual Integration (ID#: 15-8268)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314234&isnumber=7313917
Yamaguchi, Michitomo; Okamoto, Takeshi; Hiroaki, Kikuchi, "CAPTCHA System by Differentiating the Awkwardness of Objects," in Network-Based Information Systems (NBiS), 2015 18th International Conference on, pp. 257-263, 2-4 Sept. 2015. doi: 10.1109/NBiS.2015.114
Abstract: The "Completely Automated Public Turing test to tell Computers and Humans Apart" (CAPTCHA) is a technique that prevents unauthorized access by bots. Most studies of CAPTCHA systems use human cognitive capacities as a countermeasure to facilitate recognition techniques. Differentiating between natural and awkward objects is an approach used to distinguish humans from bots. However, this approach is vulnerable to adversaries who exploit the differences in relative frequency between natural and awkward objects because of the difficulty in collecting natural objects. In this study, we propose a new scheme that does not require the utilization of natural objects, thereby addressing this shortcoming. Our proposed method requires that humans always distinguish awkward objects, which are generated by different parameters. We evaluated our scheme in several experiments.
Keywords: Analytical models; CAPTCHAs; Electronic mail; Markov processes; Search engines; Security; Semantics; CAPTCHA; Markov chain; Security analysis; Word salad (ID#: 15-8269)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350629&isnumber=7350553
Aruna, P.; Kanchana, R., "Face Image Captcha Generation Using Particle Swarm Optimization Approach," in Engineering and Technology (ICETECH), 2015 IEEE International Conference on, pp. 1-5, 20-20 March 2015. doi: 10.1109/ICETECH.2015.7275016
Abstract: CAPTCHA is a software mechanism introduced to differentiate humans from robots: it generates a code which can be identified only by a human and not by a machine. In the real world, the massive increase in the use of smartphones, tablets, and other devices with touch-screen functionality poses many online security threats. The traditional CAPTCHA requires keyboard input and is language dependent, which is not efficient on smartphone devices. A face CAPTCHA instead generates a CAPTCHA from a combination of noised real face images and fake images, which machines cannot identify but humans can. In existing work, a genetic algorithm is used to select the optimized face images from which a better CAPTCHA can be created. However, that work suffers from a local convergence problem, where it can only select the best images within a local region. To overcome this problem, this work proposes the particle swarm optimization method, which can generate a globalized solution. Particle Swarm Optimization (PSO) is a popular bionic algorithm for optimization problems based on the social behavior of bird flocking. The experimental tests that were conducted prove that the proposed methodology improves accuracy and generates a more optimized solution than existing methodologies.
Keywords: face recognition; genetic algorithms; particle swarm optimisation; security of data; PSO; bionic algorithm; face image captcha generation; fake images; genetic algorithm; local convergence problem; particle swarm optimization approach; social behavior; Authentication; CAPTCHAs; Distortion; Face; Feature extraction; Particle swarm optimization; CAPTCHA; Distorted Image; Face Images; Particle Swarm Optimization (ID#: 15-8270)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275016&isnumber=7274993
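For readers unfamiliar with PSO, the following minimal sketch shows the algorithm on a toy 1-D objective rather than the paper's face-image fitness function; the swarm size, inertia, and acceleration coefficients are standard assumed values, not the authors'.

```python
# Minimal particle swarm optimization minimizing f(x) = x^2 over [-10, 10].
# Each particle tracks its personal best (pbest); the swarm shares a global
# best (gbest); velocities blend inertia, cognitive, and social terms.
import random

random.seed(1)

def pso_minimize(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # each particle's best-seen position
    gbest = min(pos, key=f)             # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]
    return gbest

best = pso_minimize(lambda x: x * x, -10.0, 10.0)
```

The paper would replace the toy objective with a fitness function scoring candidate face images.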
Hoyul Choi; Hyunsoo Kwon; Junbeom Hur, "A Secure OTP Algorithm Using a Smartphone Application," in Ubiquitous and Future Networks (ICUFN), 2015 Seventh International Conference on, pp. 476-481, 7-10 July 2015. doi: 10.1109/ICUFN.2015.7182589
Abstract: Recently, several authentication protocols have come into use in mobile applications. OTP is one of the most powerful authentication methods among them. However, it has some security vulnerabilities, particularly to MITM (Man-in-the-Middle) and MITPC/Phone (Man-in-the-PC/Phone) attacks. Under these attacks, an adversary can learn a valid OTP value and be authenticated with this secret information. To solve these problems, we propose a novel OTP algorithm and compare it with existing algorithms. The proposed scheme is secure against MITM and MITPC/Phone attacks through the use of a captcha image and the IMSI number embedded in the SIM card, and by limiting the time available for an attack.
Keywords: cryptographic protocols; smart phones; MITM attack; MITPC attack; authentication protocols; captcha image; man-in-the-PC-phone attack; man-in-the-middle attack; secure OTP algorithm; smartphone application; Authentication; CAPTCHAs; Mobile communication; Mobile handsets; Servers; Synchronization; MITM; MITPhone; OTP; application; smart phone (ID#: 15-8271)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182589&isnumber=7182475
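The paper's scheme extends OTP with a captcha image and the SIM's IMSI; as background, a standard HOTP generator (RFC 4226), the kind of primitive such schemes typically build on, looks like this. The secret and counter below are the RFC's published test values.

```python
# RFC 4226 HMAC-based one-time password: HMAC-SHA1 over a big-endian
# 8-byte counter, dynamically truncated to a short decimal code.
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Return the HOTP code for the given shared secret and counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test secret "12345678901234567890":
# counter 0 -> "755224", counter 1 -> "287082"
```

A scheme like the paper's would additionally mix in device-bound data (e.g., the IMSI) before authentication is granted.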
Pengpeng Lu; Liang Shan; Jun Li; Xunwei Liu, "A New Segmentation Method for Connected Characters in CAPTCHA," in Control, Automation and Information Sciences (ICCAIS), 2015 International Conference on, pp. 128-131, 29-31 Oct. 2015. doi: 10.1109/ICCAIS.2015.7338647
Abstract: In Completely Automated Public Turing test to tell Computers and Humans Apart recognition systems, character segmentation serves as the connecting link between the preprocessing and recognition stages. After studying a variety of character segmentation algorithms, an improved method combining the vertical projection algorithm, an improved drop-falling algorithm, and a BP neural network classifier is proposed for merged characters. Firstly, this paper identifies merged characters by the aspect ratio of connected components extracted from the images. Secondly, division points are sought at the minima of the vertical projection of connected components, and these points are then used as starting points for the improved algorithm to segment connected characters. Finally, the BP neural network classifier is applied to select the best combinations of dividing lines. Experimental results show that this method can effectively solve the problem of merged character segmentation.
Keywords: backpropagation; character recognition; image classification; image segmentation; neural nets; BP neural network classifier; CAPTCHA; Completely Automated Public Turing test to tell Computer and Humans Apart recognition system; connected characters segmentation; connected component extraction; drop-falling algorithm; merged characters segmentation; vertical projection algorithm; vertical projection minimums; Algorithm design and analysis; CAPTCHAs; Character recognition; Classification algorithms; Image segmentation; Neural networks; Projection algorithms; BP neural network; composite segmentation algorithm; improved drop-falling algorithm; merged CAPTCHA (ID#: 15-8272)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338647&isnumber=7338636
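The vertical-projection step that the abstract above builds on can be illustrated briefly: sum the foreground pixels in each column of a binary image and split characters at blank columns. This is a sketch of the general technique, not the authors' code; the drop-falling refinement and BP classifier are omitted.

```python
# Segment a binary image (lists of 0/1 rows) into column ranges, one per
# character run, by cutting at columns whose projection is zero.

def vertical_projection(image):
    """Column-wise count of foreground (1) pixels in a binary image."""
    return [sum(row[c] for row in image) for c in range(len(image[0]))]

def segment_columns(image):
    """Return (start, end) column ranges separated by blank columns."""
    proj = vertical_projection(image)
    segments, start = [], None
    for c, v in enumerate(proj):
        if v > 0 and start is None:
            start = c                      # entering a character run
        elif v == 0 and start is not None:
            segments.append((start, c))    # leaving a character run
            start = None
    if start is not None:
        segments.append((start, len(proj)))
    return segments

# Two 1-pixel-wide "characters" separated by one blank column.
tiny = [[1, 0, 1],
        [1, 0, 1]]
segments = segment_columns(tiny)
```

Merged characters produce no zero-valued column between them, which is exactly the case the paper's drop-falling refinement addresses.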
![]() |
Cognitive Radio Security 2015 |
Cognitive radio (CR) is a form of dynamic spectrum management--an intelligent radio that can be programmed and configured dynamically to use the best wireless channels near it. Its capability allows for great network resilience. The articles cited here were published in 2015.
Basharat, Mehak; Ejaz, Waleed; Ahmed, Syed Hassan, "Securing Cognitive Radio Enabled Smart Grid Systems Against Cyber Attacks," in Anti-Cybercrime (ICACC), 2015 First International Conference on, pp. 1-6, 10-12 Nov. 2015. doi: 10.1109/Anti-Cybercrime.2015.7351938
Abstract: Recently, cognitive radio technology has gained attention as a way to enhance the performance of smart grid communication networks. In this paper, we present a cognitive radio enabled smart grid architecture. We then discuss major cyber security challenges in smart grid deployment and the additional challenges introduced by cognitive radio technology. Spectrum sensing is one of the important aspects of opportunistic spectrum access in cognitive radio enabled smart grid networks. Cooperative spectrum sensing, in which multiple cognitive radio users cooperate to sense primary user bands, can improve sensing performance; however, it is vulnerable to incumbent emulation and spectrum sensing data falsification (SSDF) attacks. Thus, we propose a two-stage scheme for defense against SSDF attacks. Simulation results show that the proposed two-stage scheme can identify and exclude the attackers accurately.
Keywords: Cognitive radio; Reliability; Security; Sensors; Smart grids; Smart meters; Cognitive radio; cyber attacks; network security; smart grid (ID#: 15-8273)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351938&isnumber=7351910
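The basic defense intuition behind the abstract above can be illustrated with a toy reputation-weighted fusion rule: fuse local sensing reports by weighted majority vote and down-weight users whose reports repeatedly disagree with the fused decision, so SSDF attackers lose influence. This is not the authors' two-stage scheme; all names, weights, and update rules are assumptions.

```python
# Reputation-weighted fusion of cooperative spectrum-sensing reports.
# reports: {user: 0/1 local decision}; reputation: {user: non-negative weight}

def fuse_and_update(reports, reputation):
    """Weighted-majority fusion; penalize users who disagree with the result."""
    weight_one = sum(reputation[u] for u, r in reports.items() if r == 1)
    total = sum(reputation.values())
    decision = 1 if weight_one > total / 2 else 0
    for u, r in reports.items():
        reputation[u] += 1 if r == decision else -1   # reward agreement
        reputation[u] = max(reputation[u], 0)         # floor at zero
    return decision

# Three honest users report the true channel state; one attacker always lies.
reputation = {"A": 5, "B": 5, "C": 5, "mal": 5}
for truth in [1, 0, 1, 1, 0]:
    reports = {"A": truth, "B": truth, "C": truth, "mal": 1 - truth}
    fuse_and_update(reports, reputation)
```

After a few rounds the liar's reputation is driven to zero and its reports no longer affect the fused decision.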
Slimeni, F.; Scheers, B.; Chtourou, Z.; Le Nir, V., "Jamming Mitigation in Cognitive Radio Networks Using a Modified Q-Learning Algorithm," in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, pp. 1-7, 18-19 May 2015. doi: 10.1109/ICMCIS.2015.7158697
Abstract: The jamming attack is one of the most severe threats in cognitive radio networks, because it can lead to network degradation and even denial of service. However, a cognitive radio can exploit its ability of dynamic spectrum access and its learning capabilities to avoid jammed channels. In this paper, we study how Q-learning can be used to learn the jammer strategy in order to proactively avoid jammed channels. The problem with Q-learning is that it needs a long training period to learn the behavior of the jammer. To address this concern, we take advantage of the wideband spectrum sensing capabilities of the cognitive radio to speed up the learning process, and we make use of the already learned information to minimize the number of collisions with the jammer during training. The effectiveness of this modified algorithm is evaluated by simulations in the presence of different jamming strategies, and the simulation results are compared to the original Q-learning algorithm applied to the same scenarios.
Keywords: cognitive radio; interference suppression; jamming; learning (artificial intelligence); radio spectrum management; telecommunication security; cognitive radio networks; denial of service; dynamic spectrum access; jamming attack mitigation; modified Q-learning algorithm; network degradation; wideband spectrum sensing capability; Cognitive radio; Convergence; Jamming; Markov processes; Standards; Time-frequency analysis; Training; Cognitive radio network; Q-learning algorithm; jamming attack; markov decision process (ID#: 15-8274)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158697&isnumber=7158667
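The core idea of learning to dodge a jammer can be illustrated with a toy Q-learning loop against a deterministic sweep jammer. This is a simplified illustration, not the authors' modified algorithm; the jammer model, channel count, and learning parameters are all assumptions.

```python
# Toy anti-jamming Q-learning: the jammer hits channel (slot mod 2) each
# time slot; the agent learns per-(slot, channel) values from collision
# rewards and ends up transmitting on the unjammed channel in every slot.
import random

random.seed(0)
N_CHANNELS, N_SLOTS = 2, 2
Q = {(s, c): 0.0 for s in range(N_SLOTS) for c in range(N_CHANNELS)}
alpha, epsilon = 0.5, 0.1

def jammer(slot):
    return slot % N_CHANNELS          # deterministic sweep jammer (assumed)

for episode in range(500):
    for slot in range(N_SLOTS):
        if random.random() < epsilon:                          # explore
            channel = random.randrange(N_CHANNELS)
        else:                                                  # exploit
            channel = max(range(N_CHANNELS), key=lambda c: Q[(slot, c)])
        reward = -1.0 if channel == jammer(slot) else 1.0      # collision penalty
        Q[(slot, channel)] += alpha * (reward - Q[(slot, channel)])

# Greedy policy after training: one channel choice per time slot.
policy = {s: max(range(N_CHANNELS), key=lambda c: Q[(s, c)]) for s in range(N_SLOTS)}
```

The paper's contribution is precisely about shortening the training phase this loop represents, using wideband sensing to update many channels per step.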
Mourougayane, K.; Srikanth, S., "Intelligent Jamming Threats to Cognitive Radio Based Strategic Communication Networks - A Survey," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-6, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219898
Abstract: Cognitive Radio (CR) technology and its capabilities are being explored for application in military communication networks. The Defence Advanced Research Projects Agency (DARPA), USA, has conducted successful trials and initiated future programmes for military applications based on CR. Though CR technology is innovative, it is highly susceptible to interference and jamming attacks due to its reliance on sensing and adaptive switching. Hence, the performance analysis of cognitive radio under jamming conditions is an important requirement and the subject of a new area of research. Development of effective anti-jamming approaches requires in-depth knowledge of jamming techniques and their effects. In this paper, various jamming threats to Cognitive Radio Networks (CRNs) are presented, based on a literature survey and on research carried out in various universities and defence research organisations.
Keywords: cognitive radio; jamming; military communication; telecommunication security; telecommunication switching; CR technology; CRN; DARPA; Defence Advanced Research Projects Agency; USA; adaptive switching; antijamming approaches; cognitive radio networks; intelligent jamming threats; jamming attacks; jamming techniques; military communication networks; strategic communication networks; Cognitive radio; Communication networks; Data communication; Interference; Jamming; Military communication; Sensors; Cognitive Radio; Defence Communication Networks; Interference; Jamming; Next Generation-xG; Spectrum sensing (ID#: 15-8275)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219898&isnumber=7219823
Rajput, S.H.; Wadhai, V.M.; Helonde, J.B., "A Novel Approach To Secure Cognitive Radio Network Using Dynamic Quiet Period Scheduling For Detection Of Control Channel Jamming Attack," in Pervasive Computing (ICPC), 2015 International Conference on, pp. 1-6, 8-10 Jan. 2015. doi: 10.1109/PERVASIVE.2015.7086984
Abstract: Cognitive radio is an emerging technology in the area of wireless communication. IEEE 802.22 WRAN is the first established standard for cognitive radio, intended to provide broadband internet access in rural areas. In this context, the security of the cognitive radio network is a major concern. For coordination of network functions, a common control channel facilitates the exchange of control messages. Because of its importance, this channel can be a key target of jamming attacks. This paper proposes a countermeasure to the control channel jamming attack. In the 802.22 WRAN system, a quiet period frame exists in every superframe; during this quiet period frame, spectrum sensing is carried out for primary user detection. In this paper, we introduce a similar concept using dynamic quiet period scheduling for the detection of control channel jamming attacks. Detailed simulation results prove the effectiveness of our method.
Keywords: cognitive radio; jamming; wireless regional area networks; IEEE 802.22 WRAN; broadband Internet access; cognitive radio network; control channel jamming attack; dynamic quiet period scheduling; primary user detection; wireless communication; Cognitive radio; Jamming; Sensors; Standards; TV; Wireless sensor networks; Cognitive radio; WRAN 802.22;control channel jamming attack; quite period scheduling (ID#: 15-8276)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086984&isnumber=7086957
Siming Liu; Sengupta, S.; Louis, S.J., "Evolving Defensive Strategies Against Iterated Induction Attacks in Cognitive Radio Networks," in Evolutionary Computation (CEC), 2015 IEEE Congress on, pp. 3109-3115, 25-28 May 2015. doi: 10.1109/CEC.2015.7257277
Abstract: This paper investigates the use of Genetic Algorithms (GAs) to evolve defensive strategies against iterated and memory-enabled induction attacks in cognitive radio networks. Security problems in cognitive radio networks have been heavily studied in recent years. However, few studies have considered the effect of memory size on attack and defense strategies. We model cognitive radio network attack and defense as a zero-sum stochastic game. Our research focuses on using GAs to recognize attack patterns from different attackers and to evolve defensive strategies against those patterns so as to maximize network utility. We assume attackers are not only able to attack high-utility channels, but are also capable of attacking based on the history of high-utility channel usage by the secondary user. In our simulations, different memory lengths are used by the secondary user against memory-enabled attackers. Results show that the best-performing strategies evolved by GAs gain more payoff, on average, than the Nash equilibrium. Against our baseline memory-enabled attackers, GAs quickly and reliably found the theoretically globally optimal defensive strategy. These results indicate that GAs are a viable approach for generating strong defenses against arbitrary memory-based attackers.
Keywords: cognitive radio; genetic algorithms; stochastic games; telecommunication security; Nash equilibrium; cognitive radio networks; defensive strategies; genetic algorithms; high utility channel usage; iterated induction attacks; memory based attacks; zero-sum stochastic game; Biological cells; Cognitive radio; Game theory; Games; Genetic algorithms; History; Stochastic processes (ID#: 15-8277)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257277&isnumber=7256859
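A minimal sketch of the idea in the abstract above — a GA evolving a channel-switching defense against a memory-enabled jammer — might look like the following (the memory-1 attacker model, the payoff, and the GA parameters are illustrative assumptions, not the authors' game):

```python
import random

N_CHANNELS, ROUNDS = 4, 50

def fitness(strategy):
    """Payoff over ROUNDS rounds against a memory-1 attacker that
    always jams the channel the defender used in the previous round."""
    payoff, prev, ch = 0, None, 0
    for _ in range(ROUNDS):
        jammed = prev                  # attacker's guess (None in round 1)
        payoff += 0 if ch == jammed else 1
        prev = ch
        ch = strategy[ch]              # strategy: last-channel -> next-channel table
    return payoff

def evolve(pop_size=30, generations=40, mut_rate=0.1):
    """Tournament selection with point mutation and elitism."""
    rng = random.Random(1)
    pop = [[rng.randrange(N_CHANNELS) for _ in range(N_CHANNELS)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                # elitism: keep best-so-far
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)
            child = (a if fitness(a) >= fitness(b) else b)[:]
            if rng.random() < mut_rate:
                child[rng.randrange(N_CHANNELS)] = rng.randrange(N_CHANNELS)
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best
```

Against this hypothetical attacker, any strategy that never revisits its previous channel is optimal, and the GA finds one quickly — mirroring the paper's observation that GAs reliably locate globally optimal defenses against simple memory-based attackers.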
Yongxu Zou; Sang-Jo Yoo, "A Cooperative Attack Detection Scheme for Common Control Channel Security in Cognitive Radio Networks," in Ubiquitous and Future Networks (ICUFN), 2015 Seventh International Conference on, pp. 606-611, 7-10 July 2015. doi: 10.1109/ICUFN.2015.7182616
Abstract: Cognitive radio (CR) is an intelligent technology designed to help secondary users (SUs) increase access opportunities on unused licensed spectrum channels while avoiding interference to primary users (PUs). In cognitive radio networks (CRNs), SUs perform cooperative spectrum sensing to find available spectrum channels and exchange the related control information, namely available channels list (ACL) information, on a common control channel (CCC) before determining when and on which data channels they may communicate. However, some SUs, defined as attackers, can compromise the security of the CCC by sharing false ACL information with other SUs for the benefit of their own utilization of the available spectrum channels, which significantly degrades the performance of the CRN. In this paper, we propose an efficient detection scheme in which cooperating SUs identify attackers to secure the CCC. In the proposed scheme, all SUs exchange and share their control information on the CCC and cooperate, using reputation, to identify attackers. The reputation of each SU is updated according to its historical and recent behavior. Simulation results show how the performance of the proposed scheme can be further improved by choosing optimized thresholds. In addition, we illustrate that the proposed scheme achieves a considerable performance improvement over a selfish attack detection technique (COOPON) for secure ACL information exchange on the CCC.
Keywords: cognitive radio; cooperative communication; security of data; signal detection; ACL information; CCC; COOPON; CRN; PU; SU; access opportunity; available channel list information; cognitive radio networks; common control channel security; cooperative attack detection scheme; cooperative spectrum sensing; efficient detection scheme; primary users secondary users; selfish attack detection technique; unused licensed spectrum channel; Cognitive radio; Correlation; Design automation; Robustness; Security; Sensors; Simulation; Cognitive radio networks; common control channel security; reputation (ID#: 15-8278)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182616&isnumber=7182475
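A reputation rule of the kind the abstract describes can be sketched as follows (the majority-consensus rule, the Jaccard similarity measure, the update constants, and the detection threshold are all illustrative assumptions, not the authors' formulas):

```python
from collections import Counter

def majority_acl(reports):
    """Consensus ACL: a channel is 'available' if most SUs report it so."""
    votes = Counter()
    for acl in reports.values():
        votes.update(acl)
    n = len(reports)
    return {ch for ch, v in votes.items() if v > n / 2}

def update_reputations(rep, reports, reward=1.0, penalty=2.0):
    """Raise the reputation of SUs whose ACL agrees with consensus, lower otherwise."""
    consensus = majority_acl(reports)
    for su, acl in reports.items():
        # Jaccard similarity between the reported ACL and the consensus ACL
        sim = len(acl & consensus) / (len(acl | consensus) or 1)
        rep[su] = rep.get(su, 0.0) + (reward * sim - penalty * (1 - sim))
    return rep

def attackers(rep, threshold=-1.0):
    """SUs whose accumulated reputation fell below the threshold."""
    return {su for su, r in rep.items() if r < threshold}
```

After a few sensing rounds, an SU that consistently reports a false ACL accumulates a strongly negative reputation and is flagged, while honest SUs' reputations keep growing.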
Slimeni, F.; Scheers, B.; Chtourou, Z., "Security Threats in Military Cognitive Radio Networks," in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, pp. 1-10, 18-19 May 2015. doi: 10.1109/ICMCIS.2015.7158714
Abstract: The emergence of new wireless services and the growing demand for wireless communications are creating a spectrum shortage problem. Moreover, the current technique of static frequency allocation leads to inefficient utilization of the available spectrum. Cognitive radio (CR) and dynamic spectrum management (DSM) concepts aim to solve this imbalance between scarcity and underutilization of the spectrum by dynamically using the free frequency bands. However, this technology introduces new vulnerabilities and opportunities for malicious users compared to traditional wireless networks, due to its intrinsic characteristics. In this paper, we present a comprehensive review of common CR attacks and their potential countermeasures, with projection on military radio networks. We classify the attacks based on the four main functions of the cognitive radio, rather than on the layers of the OSI model as is usually done. Through this classification, we aim to provide directions for related research by discerning which cognitive functionality has to be secured against each threat.
Keywords: cognitive radio; military communication; radio spectrum management; telecommunication security; DSM; OSI model; dynamic spectrum management; frequency allocation; military cognitive radio networks; spectrum shortage; spectrum utilization; wireless communications; Cognitive radio; Interference; Radio spectrum management; Radio transmitters; Security; Sensors (ID#: 15-8279)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158714&isnumber=7158667
Sumathi, A.C.; Vidhyapriya, R.; Kiruthika, C., "A Proactive Elimination of Primary User Emulation Attack in Cognitive Radio Networks Using Intense Explore Algorithm," in Computer Communication and Informatics (ICCCI), 2015 International Conference on, pp. 1-7, 8-10 Jan. 2015. doi: 10.1109/ICCCI.2015.7218110
Abstract: In a cognitive radio network, secondary users (without a license) are allowed to access the licensed spectrum when primary users (holding a license) are not present. A serious threat in the physical layer of such a network is a malicious secondary user exploiting the spectrum access etiquette by mimicking the spectral characteristics of a primary user, known as a Primary User Emulation Attack (PUEA). The main objective of this paper is to eliminate the PUE attack that may arise from one of the secondary users. We propose our Intense Explore algorithm to eliminate the PUE attack in a proactive way. Our simulation results show that the proposed Intense Explore algorithm yields better results than existing techniques.
Keywords: cognitive radio; radio spectrum management; telecommunication security; PUEA proactive elimination; cognitive radio network; intense explore algorithm; licensed spectrum; malicious secondary user; primary user emulation attack proactive elimination; spectrum access; Cognitive radio; Computers; Correlation; Emulation; Feature extraction; Informatics; Sensors; Cognitive Radio Network; Intense Explore; PUEA (ID#: 15-8280)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218110&isnumber=7218046
Jun Du; Xiang Wen; Ligang Shang; Shan Zou; Bangning Zhang; Daoxing Guo; Yihe Song, "A Byzantine Attack Defender for Censoring-Enabled Cognitive Radio Networks," in Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, pp. 1-6, 15-17 Oct. 2015. doi: 10.1109/WCSP.2015.7341091
Abstract: This paper considers the problem of cooperative spectrum sensing (CSS) in censoring-enabled Cognitive Radio Networks (CRNs) with a crowd of battery-powered Secondary Users (SUs), where only significant local observations are submitted to the Fusion Center (FC). However, in order to monopolize spectrum usage or disrupt network operation, malicious SUs may send falsified sensing reports to the FC even when they are uncertain about their observations, making existing robust CSS schemes ineffective. To tackle these challenges, we formulate an optimization problem to improve the performance of CSS in censoring-enabled CRNs and develop an expectation-maximization based algorithm to solve it, in which the presence of the primary user and the reliability of each SU are jointly estimated. Extensive simulation results show that the proposed robust CSS scheme outperforms previous reputation-based approaches under various attack scenarios.
Keywords: cognitive radio; cooperative communication; expectation-maximisation algorithm; optimisation; radio spectrum management; sensor fusion; telecommunication security; battery-powered secondary users; byzantine attack defender; censoring-enabled CRN; censoring-enabled cognitive radio networks; collaborative spectrum sensing; cooperative spectrum sensing problem; expectation maximization based algorithm; fusion center; local observations; malicious user detection; optimization problem; robust CSS scheme; spectrum usage; Cascading style sheets; Cognitive radio; Collaboration; Estimation; Robustness; Sensors; Cognitive radio network; collaborative spectrum sensing; malicious user detection; security (ID#: 15-8281)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7341091&isnumber=7340966
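The joint estimation step can be illustrated with a stripped-down expectation-maximization loop over binary sensing reports (this Dawid-Skene-style sketch ignores censoring and uses a flat prior on the primary user's state; the data model and initial values are assumptions, not the authors' formulation):

```python
def em_reliability(reports, iters=30):
    """reports[i][t] in {0, 1}: SU i's binary report for time slot t.
    Jointly estimates P(PU present) per slot and each SU's reliability."""
    n_su, n_t = len(reports), len(reports[0])
    rel = [0.7] * n_su                       # initial reliability guess
    post = [0.5] * n_t
    for _ in range(iters):
        # E-step: posterior that the PU is present in each slot (flat prior)
        for t in range(n_t):
            l1 = l0 = 1.0
            for i in range(n_su):
                if reports[i][t] == 1:
                    l1 *= rel[i]; l0 *= 1 - rel[i]
                else:
                    l1 *= 1 - rel[i]; l0 *= rel[i]
            post[t] = l1 / (l1 + l0)
        # M-step: reliability = expected agreement with the latent state
        for i in range(n_su):
            agree = sum(post[t] if reports[i][t] == 1 else 1 - post[t]
                        for t in range(n_t))
            rel[i] = agree / n_t
    return post, rel
```

With an honest majority, the loop converges so that falsifying SUs receive reliability estimates near their true (low) accuracy, and their reports are automatically discounted in the fused decision.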
Niranjane, P.K.; Rajput, S.H.; Wadhai, V.M.; Helonde, J.B., "Performance Analysis of PUE Attacker on Dynamic Spectrum Access in Cognitive Radio," in Pervasive Computing (ICPC), 2015 International Conference on, pp. 1-6, 8-10 Jan. 2015. doi: 10.1109/PERVASIVE.2015.7086985
Abstract: Spectrum inefficiency is a major problem in wireless technology. To address this problem, cognitive radio, a technology based on software defined radio (SDR), was introduced. It senses the environment with the help of secondary users in order to utilize the licensed band when the primary user is not accessing the spectrum. Security and privacy are major challenges in all types of wired and wireless networks, and these challenges are of even greater importance in CR networks: the unique characteristics of these networks and the application purposes they serve make them an attractive target for intrusions and other attacks. The primary user emulation (PUE) attack is one of the major security threats in spectrum sensing. To mitigate it, we distinguish various defense techniques that can counter PUE attacks.
Keywords: cognitive radio; radio spectrum management; signal detection; software radio; telecommunication security; CR networks; PUE attacker performance analysis; SDR; cognitive radio; dynamic spectrum access; primary user emulation attack; software defined radio; spectrum inefficiency; spectrum sensing; wireless networks; wireless technology; Interference; Protocols; Receivers; Routing; Security; Sensors; Wireless communication; Cognitive Radio; Fundamentals of Network Security; PUEA; Performance Analysis of PUE Attacker; Spectrum Sensing (ID#: 15-8282)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086985&isnumber=7086957
Yongjia Huo; Ying Wang; Wenxuan Lin; Ruijin Sun, "Three-Layer Bayesian Model Based Spectrum Sensing to Detect Malicious Attacks in Cognitive Radio Networks," in Communication Workshop (ICCW), 2015 IEEE International Conference on, pp. 1640-1645, 8-12 June 2015. doi: 10.1109/ICCW.2015.7247415
Abstract: Owing to the open nature of cooperative cognitive radio networks (CRNs), security becomes a critical topic to consider. In order to acquire more spectrum resources, malicious secondary users (SUs) launch various attacks, of which the spectrum sensing data falsification (SSDF) attack is a typical one. To cope with SSDF attacks, this paper proposes a three-layer Bayesian model. Historical data are processed through three layers, namely the processing layer, the integrating layer, and the inferring layer. The processing layer is modeled by a hidden Markov model (HMM), which uses the original data to train parameters and then provides the trained emission distributions to the second layer. Within the integrating layer, the emission distributions are processed, on the basis of different algorithms, to obtain the reputation, balance, and specificity values of the different SUs. Using different thresholds, these continuous values are discretized and transferred to the inferring layer. In the third layer, a Bayesian network (BN) is built to calculate the safety probabilities of the SUs, using the discrete values as evidence. Simulation results show that the proposed system is useful for defending against different types of malicious users, especially in low-SNR situations.
Keywords: belief networks; cognitive radio; hidden Markov models; radio spectrum management; signal detection; Bayesian network; CRN; HMM; SNR; SSDF attack; SU; cognitive radio network; emission distribution; hidden Markov model; inferring layer; integrating layer; malicious attack detection; processing layer; secondary user; spectrum sensing data falsification; three-layer Bayesian model; Bayes methods; Cognitive radio; Hidden Markov models; Numerical models; Safety; Sensors; Signal to noise ratio (ID#: 15-8283)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247415&isnumber=7247062
Nadendla, V.S.S.; Han, Y.S.; Varshney, P.K., "Information-Dispersal Games for Security in Cognitive-Radio Networks," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 1600-1604, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282726
Abstract: Rabin's information dispersal algorithm (IDA) simultaneously addresses secrecy and fault-tolerance by encoding a data file and parsing it into unrecognizable data-packets before transmitting or storing them in a network. In this paper, we redesign Rabin's IDA for cognitive-radio networks where the routing paths are available with uncertainty. In addition, we also assume the presence of an attacker in the network which attempts to simultaneously compromise the confidentiality and data-integrity of the source message. Due to the presence of two rational entities with conflicting motives, we model the problem as a zero-sum game between the source and the attacker and investigate the mixed-strategy Nash Equilibrium by decoupling the game into two linear programs which have a primal-dual relationship.
Keywords: cognitive radio; data integrity; fault tolerance; game theory; linear programming; message authentication; network coding; packet radio networks; source coding; telecommunication network reliability; telecommunication network routing; Rabin IDA; Rabin information dispersal algorithm; cognitive radio network security; data file encoding; data file parsing; data packet storage; data packet transmission; fault tolerance; information-dispersal game; linear program; mixed-strategy Nash equilibrium; primal-dual relationship; routing path; secrecy; source message confidentiality; source message data integrity; unrecognizable data packet; zero-sum game; Fault tolerance; Fault tolerant systems; Game theory; Games; Network topology; Random variables; Reed-Solomon codes; Byzantine Attacks; Cognitive-Radio Networks; File-Sharing Networks; Information Dispersal Games; Reed-Solomon Codes (ID#: 15-8284)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282726&isnumber=7282397
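Rabin-style information dispersal can be sketched with polynomial interpolation over the prime field GF(257) (a toy construction for illustration; practical IDA/Reed-Solomon codecs work over GF(2^8) and handle padding and integrity checks, which are omitted here):

```python
P = 257  # prime field GF(257): every byte value 0..255 is a field element

def _lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points`, mod P."""
    total = 0
    for xj, yj in points:
        num = den = 1
        for xm, _ in points:
            if xm != xj:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

def disperse(data, n, k):
    """Split data (length divisible by k) into n shares of len(data)//k
    symbols each; any k shares suffice to reconstruct."""
    shares = [(x, []) for x in range(k, k + n)]  # share x-coordinates k..k+n-1
    for off in range(0, len(data), k):
        block = [(i, data[off + i]) for i in range(k)]  # data sits at x = 0..k-1
        for x, buf in shares:
            buf.append(_lagrange_eval(block, x))
    return shares

def reconstruct(shares, k, length):
    """Recover the original bytes from any k shares."""
    pts = shares[:k]
    out = bytearray()
    for b in range(length // k):
        block_pts = [(x, buf[b]) for x, buf in pts]
        out.extend(_lagrange_eval(block_pts, i) for i in range(k))
    return bytes(out)
```

Each share is 1/k the size of the file, no k-1 shares reveal the interpolating polynomial, and any k of the n shares rebuild it — the secrecy/fault-tolerance trade-off the paper's game is played over.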
Nath, Shikhamoni; Marchang, Ningrinla; Taggu, Amar, "Mitigating SSDF Attack using k-medoids Clustering in Cognitive Radio Networks," in Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, pp. 275-282, 19-21 Oct. 2015. doi: 10.1109/WiMOB.2015.7347972
Abstract: Collaborative sensing is preferred to individual sensing in Cognitive Radio Network (CRN) since it helps in achieving a more accurate sensing decision. In infrastructure-based cognitive radio network, each node sends its local sensing report to the fusion center which uses a fusion rule to make the final decision. The decision of the Fusion Center plays a vital role. Attackers may try to manipulate the decision-making of the Fusion Center (FC) for selfish reasons or to interfere with the primary user transmission. In SSDF attack, malicious users try to manipulate the FC by sending false sensing report. In this paper we present a method for detection and isolation of such malicious users. Our method is based on the k-medoids clustering algorithm. The proposed approach does not require the use of any predefined threshold for detection. It mines the collection of sensing reports at the FC for determining the presence of attackers. Additionally, we also present how we can use the proposed approach on streaming data (sensing reports) and thereby detect and isolate attackers on the fly. Simulation results support the validity of the approach.
Keywords: Clustering algorithms; Cognitive radio; Data mining; Interference; Sensors; Wireless sensor networks; SSDF attack; cognitive radio network; data mining; k-medoids clustering; spectrum sensing security (ID#: 15-8285)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7347972&isnumber=7347915
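A threshold-free detection pass of the kind the abstract describes can be sketched with a small PAM-style k-medoids over per-SU report vectors (the Manhattan distance, k = 2, and the farthest-pair initialization are illustrative assumptions, not the authors' exact algorithm):

```python
def dist(a, b):
    """Manhattan distance between two sensing-report vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def k_medoids_2(reports):
    """Partition SU report vectors into two clusters, PAM-style.
    Initial medoids are the two most-distant vectors, seeding both clusters."""
    idx = list(reports)
    m1, m2 = max(((i, j) for i in idx for j in idx if i != j),
                 key=lambda p: dist(reports[p[0]], reports[p[1]]))
    while True:
        # assignment step: each SU joins its nearest medoid
        c1 = {i for i in idx
              if dist(reports[i], reports[m1]) <= dist(reports[i], reports[m2])}
        c2 = set(idx) - c1
        # update step: new medoid minimizes total in-cluster distance
        new = tuple(min(c, key=lambda i: sum(dist(reports[i], reports[j])
                                             for j in c))
                    for c in (c1, c2))
        if new == (m1, m2):
            return c1, c2
        m1, m2 = new
```

No detection threshold is needed: falsified reports form their own cluster, and the minority cluster can then be treated as the attacker set, matching the paper's threshold-free framing.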
Rajasegarar, S.; Leckie, C.; Palaniswami, M., "Pattern Based Anomalous User Detection in Cognitive Radio Networks," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 5605-5609, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7179044
Abstract: Cognitive radio (CR) provides the ability to sense the range of frequencies (spectrum) not utilized by the incumbent (primary) user and to opportunistically use the unoccupied spectrum in a heterogeneous environment. A collaborative spectrum sensing approach can be used to detect the spectrum holes. However, the collaborative nature of this mechanism makes it vulnerable to security attacks and faulty observations communicated by the opportunistic users (secondary users). Detecting such malicious users in CR networks is challenging, as the pattern of malicious behavior is unknown a priori. In this paper we present an unsupervised approach to detect those malicious users, utilizing the pattern of their historic behavior. Our evaluation reveals that the proposed scheme effectively detects malicious data in the system and provides a robust framework for CR to operate in this environment.
Keywords: cognitive radio; radio spectrum management; signal detection; telecommunication security; cognitive radio networks; collaborative spectrum sensing approach; heterogeneous environment; malicious data detection; pattern based anomalous user detection; security attacks; spectrum hole detection; unsupervised approach; Clustering algorithms; FCC; Geometry; History; Sensors; Systematics (ID#: 15-8286)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179044&isnumber=7177909
Ta Duc-Tuyen; Nhan Nguyen-Thanh; Ciblat, P.; Van-Tam Nguyen, "Extra-Sensing Game for Malicious Primary User Emulator Attack in Cognitive Radio Network," in Networks and Communications (EuCNC), 2015 European Conference on, pp. 306-310, June 29 2015-July 2 2015. doi: 10.1109/EuCNC.2015.7194088
Abstract: The Primary User Emulation (PUE) attack is a serious security problem in cognitive radio (CR) networks. A PUE attacker emulates a primary signal during the sensing duration so that CR users do not use the spectrum. The PUE attacker is either selfish, if it would like to take advantage of the spectrum, or malicious, if it would like to mount a Denial of Service attack on the CR network. In this paper, we consider only malicious PUE. We propose to sometimes perform an additional sensing step, called extra-sensing, in order to have a new opportunity to sense the channel and so to use it. Obviously, the malicious PUE attacker may still attack during this extra-sensing. Our problem can therefore be formulated as a zero-sum game, modeling and analyzing the strategies of the two players. The equilibrium is expressed in closed form. The results show that the benefit ratio and the probability of channel availability strongly influence the equilibrium. Numerical results confirm our claims.
Keywords: cognitive radio; signal detection; telecommunication security; CR network; PUE attack; channel probability; cognitive radio network; deny of service attack; extra-sensing game; malicious primary user emulator attack; Games (ID#: 15-8287)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194088&isnumber=7194024
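For a 2x2 zero-sum game of this kind, the mixed-strategy equilibrium has a standard closed form (the payoff matrices used in the test are illustrative, not the paper's extra-sensing model):

```python
def mixed_equilibrium_2x2(A):
    """Closed-form mixed equilibrium of a 2x2 zero-sum game with row-player
    payoff matrix A = [[a, b], [c, d]], assuming no saddle point.
    Returns (p, q, v): P(row plays 0), P(column plays 0), game value."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # row player's probability of strategy 0
    q = (d - b) / denom          # column player's probability of strategy 0
    v = (a * d - b * c) / denom  # value of the game to the row player
    return p, q, v
```

At such an equilibrium, each player's mix makes the opponent indifferent between its two pure strategies — the same indifference condition that yields the paper's closed-form equilibrium for the defender's extra-sensing probability.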
Demirdogen, I.; Lei Li; Chunxiao Chigan, "FEC Driven Network Coding Based Pollution Attack Defense in Cognitive Radio Networks," in Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE, pp. 259-268, 9-12 March 2015. doi: 10.1109/WCNCW.2015.7122564
Abstract: A relay-featured cognitive radio network scenario is considered in the absence of a direct link between the secondary user (SU) and the secondary base station (S-BS). In this realistic deployment scenario, the relay node can be subjected to pollution attacks. A forward error correction (FEC) driven network coding (NC) method is employed as a defense mechanism in this paper. Using the proposed method, pollution attacks are efficiently defended against. Bit error rate (BER) measurements are used to quantify network reliability. Furthermore, in the absence of any attack, the proposed method contributes to network performance by improving BER. Simulation results underline that our mechanism is superior to existing FEC-driven NC methods such as low density parity check (LDPC) coding.
Keywords: cognitive radio; error statistics; forward error correction; network coding; parity check codes; relay networks (telecommunication);telecommunication network reliability; telecommunication security; BER; FEC driven network coding based pollution attack defense; LDPC; bit error rate measurements; forward error correction; low density parity check; network performance; network reliability quantification; relay featured cognitive radio network scenario; secondary base station; secondary user; Bit error rate; Conferences; Forward error correction; Network coding; Pollution; Relays; Reliability (ID#: 15-8288)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122564&isnumber=7122513
Al-Talabani, A.; Nallanathan, A.; Nguyen, H.X., "Enhancing Physical Layer Security of Cognitive Radio Transceiver via Chaotic OFDM," in Communications (ICC), 2015 IEEE International Conference on, pp. 4805-4810, 8-12 June 2015. doi: 10.1109/ICC.2015.7249083
Abstract: Due to the enormous potential of improving spectral utilization with Cognitive Radio (CR), designing an adaptive access system and addressing its physical layer security are among the most important and challenging issues in CR networks. Since CR transceivers need to transmit over multiple non-contiguous frequency holes, a multi-carrier based system is one of the best candidates for the CR physical layer design. In this paper, we propose a combined chaotic scrambling (CS) and chaotic shift keying (CSK) scheme in Orthogonal Frequency Division Multiplexing (OFDM) based CR to enhance its physical layer security. By employing a chaos-based third-order Chebyshev map, which allows optimum bit error rate (BER) performance of CSK modulation, the proposed combined scheme outperforms the traditional OFDM system in an overlay scenario with a Rayleigh fading channel. Importantly, with two layers of encryption based on chaotic scrambling and CSK modulation, a large key size can be generated to resist any brute-force attack, leading to a significantly improved level of security.
Keywords: OFDM modulation; Rayleigh channels; chaotic communication; cognitive radio; error statistics; radio transceivers; telecommunication security; Rayleigh fading channel; adaptive access system; bit error rate; brute-force attack; chaos based third order Chebyshev map; chaotic OFDM; chaotic shift keying scheme; cognitive radio transceiver; combined chaotic scrambling; encryption; orthogonal frequency division multiplexing; physical layer security; Bit error rate; Chaotic communication; Modulation; OFDM; Receivers; Security (ID#: 15-8289)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249083&isnumber=7248285
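The key-dependent sequence at the heart of such a scheme can be sketched with the third-order Chebyshev map (the threshold rule, transient skip, and XOR scrambling below are illustrative assumptions; actual CSK modulation operates on analog waveforms, not XOR keystreams):

```python
import math

def chebyshev_bits(x0, n, skip=50):
    """Binary scrambling sequence from the third-order Chebyshev map
    x_{k+1} = cos(3 * arccos(x_k)) on [-1, 1]; x0 is the secret key."""
    x, bits = x0, []
    for k in range(skip + n):          # discard transient, then threshold at 0
        x = math.cos(3 * math.acos(x))
        if k >= skip:
            bits.append(1 if x > 0 else 0)
    return bits

def scramble(data, x0):
    """XOR data bytes with the chaotic keystream (self-inverse)."""
    bits = chebyshev_bits(x0, 8 * len(data))
    out = bytearray()
    for i, byte in enumerate(data):
        key = 0
        for b in bits[8 * i: 8 * i + 8]:
            key = (key << 1) | b
        out.append(byte ^ key)
    return bytes(out)
```

The map's sensitivity to initial conditions is what gives the "large key size" the abstract mentions: two keys differing in the seventh decimal place produce unrelated keystreams after the transient.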
Shribala, N.; Srihari, P.; Jinaga, B.C., "Intended Inference Lenient Secure Spectrum Sensing By Prominence State Verification," in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, pp. 1550-1554, 28-30 May 2015. doi: 10.1109/IIC.2015.7150996
Abstract: Spectrum sensing is the process of identifying idle spectrum and utilizing it through secondary users such that there is no interference with primary users. A Cognitive Radio Network's (CRN) primary challenge is sensing idle spectrum and efficiently handling that spectrum through the secondary user nodes. Effective and efficient spectrum sensing can be achieved by enabling cooperation between nodes to share information about the spectrum state. However, Spectrum Sensing Data Falsification (SSDF) attacks by malicious or selfish CRN nodes are possible. This paper discusses a technique to assess the credibility of neighbor nodes in the spectrum state verification process. The method devised in this paper, referred to as Intended Inference Lenient Secure Spectrum Sensing by prominence state verification (PSV), aims to prevent intended data falsification by selfish or malicious neighbor nodes in cognitive radio networks. Simulations built on a custom testbed with usual network conditions and SSDF attacks indicate that the devised model greatly brings down the error rate of spectrum decisions while improving the detection rate of malicious cognitive nodes.
Keywords: cognitive radio; radio spectrum management; telecommunication security; cognitive radio network; intended inference lenient secure spectrum sensing; malicious cognitive nodes; prominence state verification; spectrum sensing data falsification attacks; Analytical models; Cognitive radio; Cryptography; Data models; Robustness; Sensors; Cognitive Radio; Cooperative Spectrum Sensing; Malicious User Detection; Spectrum sensing; cognitive radios; data falsification attack (ID#: 15-8290)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150996&isnumber=7150576
Iyer, V.; Kumari, R.; Selvi, T.; Priya, "Deterministic Approach for Performance Efficiency in Vehicular Cloud Computing," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 78-82, 11-13 March 2015. Doi: (not provided)
Abstract: Vehicular networking is a research area that needs to be addressed because of its features and applications, such as standardization and efficient traffic management. Much work has been done to address the various issues of vehicular networks, and various technologies have been implemented for the maintenance of Intelligent Transportation Systems (ITS). Vehicular Cloud Computing was introduced as a solution to the various issues in vehicular networks; this hybrid technology, which makes use of cloud resources for decision making, has had a huge impact. In this paper, we address some of the security challenges faced in vehicular networks, considering the major issues of network maintenance, environmental impact, and security. To address security, we use the proven technology of Elliptic Curve Cryptography, which can withstand many different types of attacks. For timely and reliable communication in the cloud, we make use of Cognitive Radio.
Keywords: cloud computing; cognitive radio; intelligent transportation systems; mobile computing; public key cryptography; ITS; cognitive radio; deterministic approach; elliptic curve cryptography; intelligent transportation system; traffic management; vehicular cloud computing; vehicular networking; Cloud computing; Cognitive radio; Elliptic curve cryptography; Elliptic curves; Vehicles; Vehicular ad hoc networks; Cloud Computing; Cognitive Radio; Elliptic Curve Cryptography; Vehicular networks (ID#: 15-8291)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100224&isnumber=7100186
Yuan Jiang; Jia Zhu; Yulong Zou, "Secrecy Outage Analysis of Multi-User Cellular Networks in the Face of Cochannel Interference," in Cognitive Informatics & Cognitive Computing (ICCI*CC), 2015 IEEE 14th International Conference on, pp. 441-446, 6-8 July 2015. doi: 10.1109/ICCI-CC.2015.7259422
Abstract: In this paper, we explore the physical-layer security of a multi-user cellular network in the presence of an eavesdropper. The network is made up of multiple users communicating with a base station (BS) while the eavesdropper may intercept the communications from the users to the BS. Considering that multiple users are available in the cellular network, we present three multi-user scheduling schemes, namely the round-robin scheduling scheme and the suboptimal and optimal user scheduling schemes, to improve the security of communication (from users to BS) against eavesdropping attacks. In the suboptimal scheduling, we only need to assume that the channel state information (CSI) of the main link, spanning from the users to the BS, is known. In contrast, the optimal scheduling is designed by assuming that the CSI of both the main link and the wiretap link (spanning from the users to the eavesdropper) is available. We derive the secrecy outage probability in closed form to analyze the secrecy diversity performance. The secrecy diversity analysis shows that round-robin scheduling achieves a diversity order of only one, whereas the suboptimal and optimal user scheduling schemes achieve the full diversity order. In addition, the secrecy outage results show that the optimal scheduling performs best and round-robin performs worst in terms of defending against eavesdropping attacks. Lastly, as the number of users increases, the secrecy outage probabilities of both the suboptimal and optimal scheduling schemes show a significant improvement.
Keywords: cellular radio; cochannel interference; diversity reception; multi-access systems; probability; radio links; telecommunication network reliability; telecommunication scheduling; telecommunication security; wireless channels; BS; CSI; base station; channel state information; cochannel interference; eavesdropper; link spanning; multiuser cellular network secrecy outage probability analysis; multiuser scheduling scheme; optimal user scheduling scheme; physical-layer security; round-robin scheduling scheme; secrecy diversity analysis; suboptimal scheduling; wiretap link; Base stations; Interchannel interference; Cellular network; cochannel interference; multi-user scheduling; secrecy diversity; secrecy outage probability (ID#: 15-8292)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7259422&isnumber=7259359
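The scheduling comparison can be reproduced qualitatively by Monte Carlo simulation (the SNRs, target secrecy rate, and trial count below are illustrative assumptions; the paper derives the outage probabilities analytically):

```python
import math, random

def secrecy_outage(n_users, scheduler, rs=0.5, snr_m=10.0, snr_e=5.0,
                   trials=20000, seed=3):
    """Fraction of trials where the secrecy rate log2(1+g_m) - log2(1+g_e)
    of the scheduled user falls below target rs, under Rayleigh fading."""
    rng = random.Random(seed)
    outages = 0
    for t in range(trials):
        g_m = [snr_m * rng.expovariate(1.0) for _ in range(n_users)]  # main links
        g_e = [snr_e * rng.expovariate(1.0) for _ in range(n_users)]  # wiretap links
        if scheduler == "round_robin":
            u = t % n_users                           # fixed rotation, CSI-blind
        else:
            u = max(range(n_users), key=lambda i: g_m[i])  # best main-link CSI
        c_s = math.log2(1 + g_m[u]) - math.log2(1 + g_e[u])
        outages += c_s < rs
    return outages / trials
```

As the paper's analysis predicts, round-robin gains nothing from having more users, while CSI-based selection drives the outage probability down sharply with the user count.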
Bhattacharjee, S.; Rajkumari, R.; Marchang, N., "Effect of Colluding Attack in Collaborative Spectrum Sensing," in Signal Processing and Integrated Networks (SPIN), 2015 2nd International Conference on, pp. 223-227, 19-20 Feb. 2015. doi: 10.1109/SPIN.2015.7095266
Abstract: Collaborative spectrum sensing (CSS) is an approach that enhances spectrum sensing performance, in which multiple secondary users (SUs) cooperate to make the final sensing decision in a cognitive radio network (CRN). In CSS, the SUs are generally assumed to report correct local sensing results to the fusion center (FC). However, some SUs may be compromised and start reporting false local sensing decisions to the FC to disrupt the network. CSS can also be severely affected by compromised nodes working together. Such an attack is termed a colluding attack, and nodes that launch colluding attacks are known as colluding nodes. In this paper, we study the effect of colluding nodes on collaborative spectrum sensing. We also show that a colluding attack results in greater network performance degradation than independent attacks, especially when the number of attackers is high. Hence, colluding attacks are of particular security concern.
Keywords: cognitive radio; cooperative communication; signal detection; telecommunication security; cognitive radio network; collaborative spectrum sensing; colluding attack; colluding node; fusion center; multiple secondary user cooperation; Cascading style sheets; Cognitive radio; Collaboration; Conferences; Sensors; Signal processing; Cognitive radio; Infrastructure-based CR; Spectrum sensing data falsification; colluding nodes (ID#: 15-8293)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095266&isnumber=7095159
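The gap between colluding and independent attacks is easy to see in a quick majority-fusion simulation (the node counts, sensing accuracy, and attacker behaviors below are illustrative assumptions):

```python
import random

def fusion_error(n_honest=7, n_attack=4, p_correct=0.8,
                 colluding=True, trials=5000, seed=5):
    """Error rate of majority-vote fusion at the FC when attackers either
    collude (all report the wrong decision) or act independently (random)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        truth = rng.randrange(2)
        votes = [truth if rng.random() < p_correct else 1 - truth
                 for _ in range(n_honest)]
        for _ in range(n_attack):
            if colluding:
                votes.append(1 - truth)           # coordinated false report
            else:
                votes.append(rng.randrange(2))    # uncoordinated falsification
        decision = 1 if sum(votes) * 2 > len(votes) else 0
        errors += decision != truth
    return errors / trials
```

With the same number of compromised nodes, coordinating every false report against the true decision degrades the fused decision far more than uncoordinated falsification, which is the paper's central observation.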
Zhexiong Wei; Tang, H.; Yu, F.R., "A Trust Based Framework for Both Spectrum Sensing and Data Transmission in CR-MANETs," in Communication Workshop (ICCW), 2015 IEEE International Conference on, pp.562-567, 8-12 June 2015. doi: 10.1109/ICCW.2015.7247240
Abstract: Distributed cooperative spectrum sensing is an effective and feasible approach to detecting primary users in Cognitive Radio Mobile Ad Hoc NETworks (CR-MANETs). However, due to the dynamic and interdependent characteristics of this approach, malicious attackers can more easily interrupt normal spectrum sensing in open environments through spectrum sensing data falsification attacks. Meanwhile, attackers can perform traditional attacks on data transmission in MANETs. To address these complicated situations in CR-MANETs, we study a new type of attack in this paper, named the joint dynamic spectrum sensing and data transmission attack. We propose a trust based framework to protect both distributed cooperative spectrum sensing and data transmission. To protect distributed cooperative spectrum sensing, a weighted-average consensus algorithm with trust is applied to reduce the impact of malicious secondary users. At the same time, data transmission in a network formed by secondary users can be protected by trust built from direct and indirect observations. Simulation results show the effectiveness and performance of the proposed framework under different experimental scenarios.
Keywords: mobile ad hoc networks; security of data; spread spectrum communication; CR-MANET; cognitive radio mobile ad hoc networks; data transmission; distributed cooperative spectrum sensing; malicious attackers; spectrum sensing data falsification attacks; trust based framework; weighted-average consensus algorithm; Ad hoc networks; Cognitive radio; Data communication; Joints; Routing; Security; Sensors; Cognitive radio mobile ad hoc networks (CRMANETs);spectrum sensing; trust (ID#: 15-8294)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247240&isnumber=7247062
Arun, S.; Umamaheswari, G., "Performance Analysis of Anti-Jamming Technique Using Angle of Arrival Estimation in CRN's," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-8, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219875
Abstract: Jamming has been one of the major attacks in any wireless network, causing denial of service by disrupting the system. Jamming involves the transmission of random signals with high power towards the receiver, causing interference and denial of service. This paper presents an anti-jamming mechanism for receivers in a cognitive radio network using an Angle of Arrival (AoA) estimation method combined with adaptive beamforming. The receiver obtains the required signal at the particular angle found by AoA estimation, then improves the SNR of the signal using beamforming while rejecting interference from signals sent by jammers at other angles. AoA techniques such as the Esprit, Music and Root-Music algorithms are used to estimate the angle of arrival of the incoming signal. The conjugate gradient method is used in the process of adaptive beamforming, which continuously adapts to the varying angle provided by the AoA estimation. Simulation results are given comparing the Esprit, Music and Root-Music algorithms for different angles of arrival and SNR values. Simulation results are also provided for the combined mechanism of AoA estimation with the conjugate gradient method, which proves to be an effective way to avoid jamming.
Keywords: array signal processing; cognitive radio; computer network security; conjugate gradient methods; direction-of-arrival estimation; jamming; CRN; adaptive beamforming; angle of arrival estimation; anti-jamming technique; cognitive radio network; conjugate gradient method; denial of service; wireless network; Arrays; Estimation; Frequency estimation; Interference; Jamming; Silicon; Wireless networks; Adaptive beamforming; Angle of Arrival; Cognitive radio network; Conjugate gradient method; Jamming (ID#: 15-8295)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219875&isnumber=7219823
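The MUSIC/ESPRIT methods in the abstract require eigendecomposition; as a simpler stand-in, the conventional (Bartlett) beamforming scan below illustrates the same underlying idea of locating a transmitter's angle by scanning steering vectors over a half-wavelength uniform linear array. The array size, SNR, snapshot count, and angle grid are illustrative assumptions, not the paper's setup:

```python
import cmath, math, random

def steering(m_elems, theta):
    # Half-wavelength ULA steering vector for arrival angle theta (radians).
    return [cmath.exp(1j * math.pi * m * math.sin(theta)) for m in range(m_elems)]

def snapshots(m_elems, theta, n_snap=50, noise_amp=0.3, seed=0):
    """Simulated array snapshots: one narrowband source plus complex noise."""
    rng = random.Random(seed)
    a = steering(m_elems, theta)
    out = []
    for _ in range(n_snap):
        s = cmath.exp(1j * rng.uniform(0, 2 * math.pi))   # random signal phase
        out.append([s * am + noise_amp * complex(rng.gauss(0, 1), rng.gauss(0, 1))
                    for am in a])
    return out

def bartlett_scan(x, m_elems, grid_deg):
    # Conventional spatial spectrum: P(theta) = average |a(theta)^H x|^2;
    # the grid angle with the largest power is the AoA estimate.
    best, best_p = None, -1.0
    for deg in grid_deg:
        a = steering(m_elems, math.radians(deg))
        p = sum(abs(sum(am.conjugate() * xm for am, xm in zip(a, snap))) ** 2
                for snap in x) / len(x)
        if p > best_p:
            best, best_p = deg, p
    return best

x = snapshots(8, math.radians(25))
est = bartlett_scan(x, 8, range(-90, 91))
print("estimated AoA:", est, "degrees")   # close to the true 25 degrees
```

Once the angle is known, an adaptive beamformer (the paper uses the conjugate gradient method) steers the main lobe at that angle and places nulls toward the jammers.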
Chatterjee, S.; Chatterjee, P.S., "A Comparison Based Clustering Algorithm to Counter SSDF Attack in CWSN," in Computational Intelligence and Networks (CINE), 2015 International Conference on, pp.194-195, 12-13 Jan. 2015. doi: 10.1109/CINE.2015.46
Abstract: Cognitive Wireless Sensor Networks follow the IEEE 802.22 standard, which is based on the concept of cognitive radio. In this paper we study the Denial of Service (DOS) attack. The Spectrum Sensing Data Falsification (SSDF) attack is one such type of DOS attack. In this attack the attackers modify the sensing report in order to compel the Secondary User (SU) to make a wrong decision regarding the vacant spectrum band in another's network. In this paper we propose a similarity-based clustering of sensing data to counter the above attack.
Keywords: cognitive radio; computer network security; radio spectrum management; wireless sensor networks; CWSN; DOS attack; IEEE 802.22 standard; SSDF attack; cognitive radio; cognitive wireless sensor networks; comparison based clustering algorithm; denial of service attack; secondary user; similarity-based clustering; spectrum sensing data falsification attack; vacant spectrum band; Clustering algorithms; Cognitive radio; Complexity theory; Computer crime; Educational institutions; Sensors; Wireless sensor networks; Cognitive Wireless Sensor Network; Denial of Service attack; Spectrum Sensing Data Falsification attack (ID#: 15-8296)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053829&isnumber=7053782
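The abstract does not give the clustering details; one plausible reading — grouping sensing reports by similarity to the median and discarding outliers before fusion — can be sketched as follows. The energy values and the MAD-based tolerance are hypothetical, not the paper's algorithm:

```python
import statistics

def filter_ssdf(reports, tol=3.0):
    """Keep sensing reports that cluster around the median; drop reports
    whose deviation (scaled by the median absolute deviation) is too large."""
    med = statistics.median(reports)
    mad = statistics.median(abs(r - med) for r in reports) or 1e-9
    return [r for r in reports if abs(r - med) / mad <= tol]

honest = [-92.1, -91.8, -92.5, -92.0, -91.6]   # dBm energy readings (hypothetical)
falsified = [-60.0, -61.2]                     # attackers claim a busy band
kept = filter_ssdf(honest + falsified)
print(kept)   # only the five mutually similar honest reports survive
```

The surviving cluster is then fused normally, so the falsified reports never influence the SU's decision about the vacant band.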
Zhan Gao; Xinquan Huang; Manxi Wang, "An Heuristic WSPRT Fusion Algorithm Against High Proportion of Malicious Users," in Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, pp. 1-5, 15-17 Oct. 2015. doi: 10.1109/WCSP.2015.7340980
Abstract: Spectrum Sensing Data Falsification (SSDF) is a critical threat to collaborative spectrum sensing (CSS) in cognitive radio networks. In particular, as the number of malicious users (MUs) increases, SSDF causes enormous damage. Most studies have achieved success in the scenario where MUs are far fewer than honest users (HUs), but in this paper we mainly consider the scenario where MUs make up a high proportion of the network. In order to alleviate the performance deterioration caused by SSDF attacks in this scenario, we propose a heuristic weighted sequential probability ratio test (HWSPRT) algorithm. Based on WSPRT, we provide a secure fusion approach. Simulation results show that the proposed algorithm performs better than the WSPRT algorithm in a network containing a high proportion of MUs.
Keywords: cognitive radio; probability; radio spectrum management; signal detection; telecommunication security; CSS; HWSPRT algorithm; MU; SSDF; cognitive radio networks; collaborative spectrum sensing; heuristic weighted sequential probability ratio test algorithm; honest users; malicious users; spectrum sensing data falsification; Cognitive radio; Decision support systems; Electromagnetics; Electronic mail; Heuristic algorithms; Peer-to-peer computing; Sensors; Heuristic weighted sequential probability ratio test; High proportion of malicious users; Spectrum sensing data falsification (ID#: 15-8297)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7340980&isnumber=7340966
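A minimal sketch of the weighted SPRT fusion that HWSPRT builds on, assuming illustrative detection/false-alarm probabilities, thresholds, and reputation weights (the paper's heuristic weighting scheme is not reproduced here):

```python
import math

def wsprt_fuse(reports, weights, p_d=0.9, p_f=0.1, eta0=-2.2, eta1=2.2):
    """Weighted sequential probability ratio test at the fusion center.

    reports: per-user local decisions (1 = 'primary present');
    weights: per-user reputation in [0, 1] — low-reputation reports count less.
    Returns 1/0 as soon as a threshold is crossed, else the sign of the sum.
    """
    llr = 0.0
    for r, w in zip(reports, weights):
        if r == 1:
            step = math.log(p_d / p_f)               # evidence for 'present'
        else:
            step = math.log((1 - p_d) / (1 - p_f))   # evidence for 'absent'
        llr += w * step
        if llr >= eta1:
            return 1
        if llr <= eta0:
            return 0
    return 1 if llr > 0 else 0

# Honest users (weight 1.0) report 1; malicious users (low weight) report 0.
reports = [1, 1, 1, 0, 0, 0, 0]
weights = [1.0, 1.0, 1.0, 0.1, 0.1, 0.1, 0.1]
print(wsprt_fuse(reports, weights))   # -> 1: falsified reports are outvoted
```

Down-weighting suspect users is what keeps the test usable even when MUs outnumber HUs, which is the regime the paper targets.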
Karunambiga, K.; Sumathi, A.C.; Sundarambal, M., "Channel Selection Strategy for Jamming-Resistant Reactive Frequency Hopping in Cognitive WiFi Network," in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, pp. 1-4, 25-27 Feb. 2015. doi: 10.1109/ICSNS.2015.7292430
Abstract: The spectrum availability of a WiFi network is increased with the help of Cognitive Radio (CR). That spectrum availability is targeted by jamming attacks, which are addressed with the help of a reactive frequency hopping technique. One of the important factors that supports the frequency hopping technique is the channel selection strategy. Existing selection strategies are either based on network traffic statistics or on random selection. Statistic-based methods carry the overhead of monitoring the network traffic and maintaining those statistics, while random selection increases the delay in choosing a channel. To address these problems, two novel strategies are proposed for efficient communication: i) hybrid channel selection (HCS) and ii) weight based channel selection (WCS). The channel for frequency hopping is selected from the available channels based on the HCS or WCS strategy. These two strategies depend neither on network traffic statistics nor on completely randomized selection.
Keywords: cognitive radio; frequency hop communication; statistical analysis; HCS; WCS; cognitive WiFi network; cognitive radio; hybrid channel selection strategy; jamming-resistant reactive frequency hopping technique; statistic based methods; weight based channel selection; Clustering algorithms; Cognitive radio; IEEE 802.11 Standard; Jamming; Spread spectrum communication; Switches; Channel selection strategy; Frequency Hopping; Jamming attack (ID#: 15-8298)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292430&isnumber=7292366
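The abstract leaves WCS unspecified; one plausible reading is roulette-wheel selection proportional to per-channel weights, which biases hops toward good channels without being either fully statistical or fully random. The channel numbers and quality scores below are hypothetical:

```python
import random

def weight_based_select(channels, weights, rng=random):
    """Roulette-wheel pick: each channel is chosen with probability
    proportional to its weight, so selection is biased but not deterministic."""
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for ch, w in zip(channels, weights):
        acc += w
        if r <= acc:
            return ch
    return channels[-1]

random.seed(2)
channels = [1, 6, 11]        # hypothetical WiFi channels
weights = [0.7, 0.2, 0.1]    # hypothetical per-channel quality scores
picks = [weight_based_select(channels, weights) for _ in range(1000)]
print({ch: picks.count(ch) for ch in channels})
```

A jammer cannot predict the next hop exactly, yet the hopper still spends most of its time on the channels it believes are best.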
Zhiyuan Shi; Xiaolan Lin; Caidan Zhao; Mingjun Shi, "Multifractal Slope Feature Based Wireless Devices Identification," in Computer Science & Education (ICCSE), 2015 10th International Conference on, pp. 590-595, 22-24 July 2015. doi: 10.1109/ICCSE.2015.7250315
Abstract: Cognitive Radio (CR) is a promising technology for alleviating spectrum shortage problem. However, with the realization of CR, new security issues are gradually emerging, like Primary User Emulation (PUE) attack. This paper incorporates transient-based identification into cognitive radio network (CRN) to defend against the PUE attacks. A method of extracting transient envelope features for wireless devices identification based on multifractal has been presented. Utilizing the joint fingerprint features of multifractal slope and polynomial fitting for wireless devices identification, the results show that the recognition performance is greatly improved.
Keywords: cognitive radio; feature extraction; polynomials; radio equipment; radio spectrum management; telecommunication security; CRN security issue; cognitive radio network; multifractal slope feature based wireless device identification; multifractal slope joint fingerprint feature extraction; polynomial fitting joint fingerprint feature extraction; primary user emulation attack; spectrum shortage problem alleviation; transient envelope feature extraction; transient-based identification; Band-pass filters; Feature extraction; Fractals; Transient analysis; Wireless LAN; Wireless communication; PUE attack; generalized dimension; multifractal; transient signal; wireless devices identification (ID#: 15-8299)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7250315&isnumber=7250193
Yang Liu; Chengzhi Li; Changchuan Yin; Huaiyu Dai, "A Unified Framework for Wireless Connectivity Study Subject to General Interference Attack," in Communications (ICC), 2015 IEEE International Conference on, pp. 7174-7179, 8-12 June 2015. doi: 10.1109/ICC.2015.7249471
Abstract: Connectivity is crucial to ensure information availability and survivability for wireless networks. In this paper, we propose a unified framework to study the connectivity of wireless networks under a general type of interference attack, which can address diverse applications including Cognitive Radio, Jamming attack and shadowing effect. By considering the primary users, jammers and deep fading as unified Interferers, we investigate a 3-dimensional connectivity region, defined as the set of key system parameters - the density of users, the density of Interferers and the interference range of Interferers - with which the network is connected. Further we study the impact of the Interferers' settings on node isolation probability, which is a fundamental local connectivity metric. Through percolation theory, the sufficient and necessary conditions for global connectivity are also derived. Our study is supported by simulation results.
Keywords: cognitive radio; fading channels; jamming; probability; radio networks; telecommunication security;3D connectivity region; cognitive radio; deep fading; general interference attack; information availability; information survivability; interferer density; interferer interference range; jammers; jamming attack; key system parameters; local connectivity metric; node isolation probability; percolation theory; primary users; shadowing effect; user density; wireless network connectivity; Information systems; Interference; Jamming; Security; Shadow mapping; Three-dimensional displays; Wireless networks (ID#: 15-8300)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249471&isnumber=7248285
Cao Long; Zhao Hangsheng; Zhang Jianzhao; Liu Yongxiang, "Secure Cooperative Spectrum Sensing Based on Energy Efficiency Under SSDF Attack," in Wireless Symposium (IWS), 2015 IEEE International, pp. 1-4, March 30 2015-April 1 2015. doi: 10.1109/IEEE-IWS.2015.7164592
Abstract: Spectrum sensing is a prerequisite for the realization of Cognitive Radio Networks (CRN). This paper considers the Spectrum Sensing Data Falsification (SSDF) attack on CRN, in which malicious users send false local spectrum sensing results during cooperative spectrum sensing, leading to wrong final decisions by the fusion center. A low-overhead symmetric cryptographic mechanism utilizing a message authentication code to authenticate the sensing data of the Secondary User (SU) is proposed to resolve this problem. Moreover, this article first introduces the concept of energy efficiency in cooperative spectrum sensing, and the optimal length of the message authentication code is derived to maximize the energy efficiency. Simulation results verify the efficiency of this scheme, and the solution of the optimization problem is also evaluated.
Keywords: cognitive radio; cryptography; radio spectrum management; signal detection; telecommunication security; CRN; SSDF; SSDF attack; SU; cognitive radio networks; energy efficiency; fusion center; message authentication code; secondary user; secure cooperative spectrum sensing; spectrum sensing data falsification; symmetric cryptographic mechanism; Cascading style sheets; Energy efficiency; Indexes; Lead; Mathematical model; Sensors; Sun; Cooperative spectrum sensing; energy efficiency; fusion rule; malicious user (ID#: 15-8301)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164592&isnumber=7164507
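The message-authentication idea can be sketched with a truncated HMAC tag, whose length trades forgery resistance against transmission energy — the quantity the paper optimizes. The key, report format, and 4-byte tag length below are illustrative assumptions, not the paper's construction:

```python
import hmac, hashlib

def tag(key, report, tag_bytes=4):
    """Truncated HMAC-SHA256 over a sensing report. A shorter tag costs
    less transmission energy but is easier to forge by guessing."""
    return hmac.new(key, report, hashlib.sha256).digest()[:tag_bytes]

def verify(key, report, received_tag):
    """Constant-time check that the report was produced by a keyed SU."""
    return hmac.compare_digest(tag(key, report, len(received_tag)), received_tag)

key = b"shared-secret-between-SU-and-FC"   # hypothetical pre-shared key
report = b"SU-07:channel-12:present"       # hypothetical report encoding

t = tag(key, report)
print(verify(key, report, t))                       # True: authentic report
print(verify(key, b"SU-07:channel-12:absent", t))   # False: falsified report
```

An attacker without the key cannot produce a valid tag for a falsified report, so the fusion center can simply discard unauthenticated sensing data.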
![]() |
Covert Channels 2015 |
A covert channel is a simple, effective mechanism for sending and receiving data between machines without alerting any firewalls or intrusion detectors on the network. In cybersecurity science, covert channels have value as a means of both defense and attack. The work cited here was presented or published in 2015.
Darwish, O.; Al-Fuqaha, A.; Anan, M.; Nasser, N., "The Role of Hierarchical Entropy Analysis in the Detection and Time-Scale Determination of Covert Timing Channels," in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, pp. 153-159, 24-28 Aug. 2015. doi: 10.1109/IWCMC.2015.7289074
Abstract: This paper evaluates the potential use of hierarchical entropy analysis to detect covert timing channels and determine the best time-scale that reveals them. A data transmission simulator is implemented to generate a collection of overt and covert channels. The hierarchical entropy analysis approach is then utilized to detect the covert timing channels and identify the time-scale that provides the strongest evidence that the underlying channel is covert. Hierarchical entropy divides the stream of inter-arrival times greedily to identify the time-scale that best reveals the existence of a covert timing channel. The lowest entropy in the sequence is the best indicator for identifying non-random patterns in the given data stream. The results show that hierarchical entropy analysis performs significantly better than the classical flat entropy approach in the detection of covert timing channels. Furthermore, the hierarchical entropy analysis provides details about the best time-scale that reveals the features of the covert timing channel.
Keywords: data communication; entropy; security of data; covert timing channel detection; data transmission simulator; hierarchical entropy analysis; time-scale determination; Decoding; Encoding; Entropy; Indexes; Noise; Receivers; Timing; Covert timing channels; Hierarchical entropy; Pattern recognition; Security; Time-scale determination (ID#: 15-8302)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289074&isnumber=7288920
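The core measurement — Shannon entropy of quantized inter-arrival times at several candidate time-scales — can be sketched as follows. The traffic models (Poisson-like overt traffic, a two-symbol covert timing code) and the scale grid are illustrative assumptions, not the paper's simulator or its greedy hierarchical splitting:

```python
import math, random
from collections import Counter

def binned_entropy(ipds, bin_width):
    """Shannon entropy (bits) of inter-packet delays quantized at one time-scale."""
    bins = Counter(int(d / bin_width) for d in ipds)
    n = len(ipds)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

random.seed(7)
overt = [random.expovariate(10) for _ in range(2000)]            # ~Poisson traffic
covert = [random.choice((0.05, 0.15)) + random.gauss(0, 0.002)   # two-symbol code
          for _ in range(2000)]                                  # with small jitter

scales = [0.001, 0.005, 0.02, 0.05]
for label, ipds in (("overt ", overt), ("covert", covert)):
    print(label, [round(binned_entropy(ipds, w), 2) for w in scales])
# The scale at which the covert entropy drops far below the overt entropy is
# the one that best exposes the channel's non-random two-symbol pattern.
```

At the right bin width, the covert stream collapses to roughly one bit of entropy (its two timing symbols), while overt traffic stays high-entropy at every scale — the signal the hierarchical analysis searches for.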
Kaur, J.; Wendzel, S.; Meier, M., "Countermeasures for Covert Channel-Internal Control Protocols," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 422-428, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.88
Abstract: Network covert channels have become a sophisticated means for transferring hidden information over the network, and thereby breaking the security policy of a system. Covert channel-internal control protocols, called micro protocols, have been introduced in the recent years to enhance capabilities of network covert channels. Micro protocols are usually placed within the hidden bits of a covert channel's payload and enable features such as reliable data transfer, session management, and dynamic routing for network covert channels. These features provide adaptive and stealthy communication channels for malware, especially bot nets. Although many techniques are available to counter network covert channels, these techniques are insufficient for countering micro protocols. In this paper, we present the first work to categorize and implement possible countermeasures for micro protocols that can ultimately break sophisticated covert channel communication. The key aspect of proposing these countermeasures is based on the interaction with the micro protocol. We implemented the countermeasures for two micro protocol-based tools: Ping Tunnel and Smart Covert Channel Tool. The results show that our techniques are able to counter micro protocols in an effective manner compared to current mechanisms, which do not target micro protocol-specific behavior.
Keywords: computer network security; invasive software; protocols; telecommunication channels; adaptive communication channel; bot nets communication channel; covert channel internal control protocols; dynamic routing; hidden information; malware communication channel; microprotocol; network covert channels; ping tunnel; reliable data transfer; session management; smart covert channel tool; stealthy communication channel; Communication channels; Overlay networks; Payloads; Protocols; Reliability; Routing; Timing; ICMP tunneling; active warden; covert channels; information hiding; micro protocols; network security; overlay routing; passive warden; steganography (ID#: 15-8303)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299946&isnumber=7299862
Dakhane, D.M.; Deshmukh, P.R., "Active Warden for TCP Sequence Number Base Covert Channel," in Pervasive Computing (ICPC), 2015 International Conference on, pp. 1-5, 8-10 Jan. 2015. doi: 10.1109/PERVASIVE.2015.7087183
Abstract: Network covert channels are generally used to leak information in violation of security policies. They allow an attacker to send and receive secret information without being detected by the network administrator or warden. There are several ways to implement such covert channels, broadly classed as storage covert channels and timing covert channels; however, there is always some possibility of these covert channels being identified, depending on their behaviour. In this paper, we propose an active warden that normalizes incoming and outgoing network traffic to eliminate all possible storage-based covert channels. It is specially designed for the TCP sequence number because this field is a maximum-capacity vehicle for storage-based covert channels. Our experimental results show that the proposed active warden model eliminates up to 99% of covert communication, while overt communication remains intact.
Keywords: transport protocols; TCP sequence number base covert channel; maximum capacity vehicle; security policies; storage covert channel; timing covert channel; IP networks; Internet; Kernel; Protocols; Security; Telecommunication traffic; Timing; Active Warden; Network Covert Channels; Storage Covert Channels; TCP Headers; TCP ISN;TCP Sequence Number; TCP-SQN; TCP/IP (ID#: 15-8304)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087183&isnumber=7086957
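An active warden for the ISN field can be sketched as a middlebox that replaces each connection's Initial Sequence Number with a random value and shifts later sequence numbers by the same offset, destroying any hidden payload while keeping the overt connection consistent. This is a simplified sketch of the general normalization idea, not the paper's implementation:

```python
import random

HIDDEN = 0xDEADBEEF   # 32 secret bits a covert sender hides in the ISN field

def covert_isn():
    return HIDDEN     # the sender uses the secret as the Initial Sequence Number

class ActiveWarden:
    """Rewrites every outgoing ISN with a random value and remembers the
    per-connection offset so legitimate sequence numbers stay consistent."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.offsets = {}

    def rewrite_isn(self, conn, isn):
        new_isn = self.rng.getrandbits(32)
        self.offsets[conn] = (new_isn - isn) % 2**32
        return new_isn

    def rewrite_seq(self, conn, seq):
        # Later segments are shifted by the same offset, so the overt
        # connection keeps working while the hidden ISN payload is destroyed.
        return (seq + self.offsets[conn]) % 2**32

warden = ActiveWarden()
isn_out = warden.rewrite_isn("conn-1", covert_isn())
print(isn_out != HIDDEN)                               # covert payload scrubbed
print(warden.rewrite_seq("conn-1", HIDDEN) == isn_out) # overt numbering preserved
```

Because the receiver of the covert channel only ever sees the warden's random ISN, the 32-bit hidden message is lost; the legitimate TCP endpoints are unaffected as long as the warden rewrites both directions consistently.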
Epishkina, A.; Kogos, K., "A Traffic Padding to Limit Packet Size Covert Channels," in Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, pp. 519-525, 24-26 Aug. 2015. doi: 10.1109/FiCloud.2015.20
Abstract: Nowadays applications for big data are widely spread, since IP networks connect billions of different devices. On the other hand, there are numerous incidents of information leakage using IP covert channels worldwide. Covert channels based on packet size modification are resistant to traffic encryption, and some of their data transfer schemes are difficult to detect. Investigation of techniques to limit the capacity of covert channels is therefore topical, as covert channel construction can violate big data security. The purpose of this work is to examine the capacity of a binary packet size covert channel when traffic padding is generated.
Keywords: Big Data; IP networks; cryptography; electronic data interchange; telecommunication traffic; Big Data security; IP network; data transfer scheme; packet size covert channel; traffic encryption; traffic padding; Channel capacity; IP networks; Receivers; Security; Timing; Yttrium; big data; capacity; information security; limitation; network covert channels (ID#: 15-8305)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300861&isnumber=7300539
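If random padding turns a "short" packet into a "long" one with probability p — a Z-channel model, which is our assumption rather than the paper's exact scheme — the residual capacity of the binary packet-size channel follows the standard Z-channel formula:

```python
import math

def z_channel_capacity(p):
    """Capacity (bits/packet) of a binary packet-size channel where random
    padding flips a 'short' packet to 'long' with probability p; 'long'
    packets are unaffected (a Z-channel)."""
    if p >= 1.0:
        return 0.0
    if p == 0.0:
        return 1.0
    return math.log2(1 + (1 - p) * p ** (p / (1 - p)))

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"padding prob {p:.2f} -> capacity {z_channel_capacity(p):.3f} bit/packet")
```

Capacity falls from 1 bit per packet with no padding to 0 when every short packet is padded, which is the trade-off the warden tunes: more padding means less covert capacity but more overhead on legitimate traffic.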
Xuyang; Zouchenpeng; Yangning, "Network Covert Channel Analysis Based on the Density Multilevel Two Segment Clustering," in Software Engineering and Service Science (ICSESS), 2015 6th IEEE International Conference on, pp. 263-266, 23-25 Sept. 2015. doi: 10.1109/ICSESS.2015.7339051
Abstract: Traditional covert channel detection algorithms either have blind spots for specific covert channels or work well for one kind of covert channel while ignoring others. To solve this problem, this paper proposes a network covert channel analysis method based on density multilevel two-segment clustering. First, the problem of covert channels in complex networks is studied, and a mathematical model and data feature extraction are presented. Second, an improved multilevel aggregation scheme is designed on the basis of hierarchical clustering: using the coarsened clustering results for the complex network channels, a density clustering algorithm is applied at each layer to detect and refine covert channels in the complex network and improve prediction accuracy. Finally, experiments show that the proposed algorithm can detect complex network covert channels quickly and accurately when the noise is no higher than 20%.
Keywords: computer network security; feature extraction; complex network channel coarsening clustering; complex network covert channel detection; data feature extraction; density multilevel two-segment clustering; mathematical model; network covert channel analysis method; Accuracy; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Complex networks; Gravity; Security; complex network; covert channel; density clustering; multilevel clustering; two segment analysis (ID#: 15-8306)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7339051&isnumber=7338993
Tuptuk, N.; Hailes, S., "Covert Channel Attacks in Pervasive Computing," in Pervasive Computing and Communications (PerCom), 2015 IEEE International Conference on, pp. 236-242, 23-27 March 2015. doi: 10.1109/PERCOM.2015.7146534
Abstract: Ensuring security in pervasive computing systems is an essential pre-requisite for their deployment. Typically, such systems are reliant on wireless networks for communication; however, whilst a considerable amount of attention has been given to cryptographic mechanisms for securing that wireless link, almost none has been devoted to the creation of covert channels capable of circumventing perimeter security. In systems that embody an element of control, covert channels offer the potential both to leak information that might be considered private and to alter the operation of the system in ways that are undesirable or unsafe. In this paper, we present two novel forms of covert channel designed to leak information from a compromised node within a secured network in ways that are statistically undetectable by other parts of that system. These two attacks rely on: modulation of transmission power, which impacts the RSSI/LQI of a message; and modulation of sensor data in a way that can be seen in the encrypted form of that data. We report the results of an extensive set of practical experiments designed to assess the channel capacity of these covert channels. Overall, this paper demonstrates that the creation of undetectable covert channels is a practical proposition in pervasive computing systems. This, in turn, has implications for key distribution: the use of individual, rather than group, keys is necessary to limit the exposure caused by a successful covert channel attack.
Keywords: radio links; radio networks; telecommunication security; ubiquitous computing; wireless channels; RSSI/LQI; channel capacity; covert channel attack; cryptographic mechanism; leak information; perimeter security; pervasive computing system; secured network; sensor data; transmission power; wireless link; wireless networks; Cryptography; Pervasive computing; Receivers; Transmitters; Wireless communication; Wireless sensor networks (ID#: 15-8307)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146534&isnumber=7146496
Hong Rong; Huimei Wang; Jian Liu; Xiaochen Zhang; Ming Xian, "WindTalker: An Efficient and Robust Protocol of Cloud Covert Channel Based on Memory Deduplication," in Big Data and Cloud Computing (BDCloud), 2015 IEEE Fifth International Conference on, pp. 68-75, 26-28 Aug. 2015. doi: 10.1109/BDCloud.2015.12
Abstract: As information security and privacy are primary concerns for most enterprises and individuals, a threat called the Cross-VM (Virtual Machine) Attack certainly impedes their adoption of public or hybrid cloud computing. Specifically, a Cross-VM Attack enables hostile tenants to leverage various forms of covert channels to exfiltrate sensitive information from victims on the same physical host. A new covert channel has been demonstrated by exploiting a special feature of memory deduplication, which is widely used in virtualization products: writing to a shared page incurs a longer access delay than writing to a non-shared page. However, this sort of covert channel attack has merely been considered a "potential threat" due to the lack of a practical protocol. In this paper, we study how to design an efficient and reliable protocol for a CCCMD (Cloud Covert Channel based on Memory Deduplication). We first analyze the CCCMD working scheme in a virtualized environment and uncover its major defects and implementation difficulties. We then build a prototype named WindTalker which overcomes these obstacles. Our experiments show that WindTalker performs much better, with a lower bit error rate, and achieves a reasonable transmission speed adaptive to a noisy environment.
Keywords: cloud computing; computer crime; cryptographic protocols; data privacy; error statistics; virtual machines; virtualisation; CCCMD protocol; WindTalker; bit error rate; cloud covert channel based on memory deduplication protocol; covert channel attack; cross-VM attack; cross-virtual machine attack; enterprises; hostile tenants; hybrid cloud computing; information privacy; information security; noisy environment; public cloud computing; robust protocol; transmission speed; virtualization products; virtualized environment; Delays; Encoding; Merging; Protocols; Receivers; Synchronization; Uncertainty; Cloud Computing; Covert Channel; Memory Deduplication; Virtualization Security (ID#: 15-8308)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310718&isnumber=7310694
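The receiver side of such a channel reduces to classifying write latencies: a copy-on-write break on a deduplicated (shared) page is far slower than a write to a private page. The latency figures below are simulated placeholders, not measurements, and the thresholding is a toy decoder rather than WindTalker's protocol:

```python
import statistics

def decode_bits(write_delays_ns, threshold_ns=None):
    """Decode one bit per measured write: a slow write means the page was
    shared (the sender had a matching page), which we read as a 1."""
    if threshold_ns is None:
        # Split between the two latency clusters without prior calibration.
        threshold_ns = statistics.mean(write_delays_ns)
    return [1 if d > threshold_ns else 0 for d in write_delays_ns]

# Simulated measurements: ~300 ns for private pages, ~3000 ns when a
# copy-on-write break reveals that the page had been deduplicated.
delays = [310, 2900, 295, 3100, 3050, 280, 320, 2980]
print(decode_bits(delays))   # [0, 1, 0, 1, 1, 0, 0, 1]
```

Real measurements are noisy (scheduling, cache state, deduplication scan timing), which is why the paper invests in synchronization and error handling to keep the bit error rate low.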
Epishkina, A.; Kogos, K., "A Random Traffic Padding to Limit Packet Size Covert Channels," in Computer Science and Information Systems (FedCSIS), 2015 Federated Conference on, pp. 1107-1111, 13-16 Sept. 2015. doi: 10.15439/2015F88
Abstract: This paper surveys different methods for constructing network covert channels and describes the scheme of the packet length covert channel. A countermeasure based on generating random traffic padding is proposed. The capacity of the investigated covert channel is estimated, and the relation between the covert channel parameters and the counteraction tool is examined. Practical recommendations for applying the obtained results are given.
Keywords: channel capacity; packet size covert channels; random traffic padding; Channel capacity; Channel estimation; IP networks; Receivers; Security; Timing; Yttrium (ID#: 15-8309)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321567&isnumber=7321412
Epishkina, A.; Kogos, K., "Covert Channels Parameters Evaluation Using the Information Theory Statements," in IT Convergence and Security (ICITCS), 2015 5th International Conference on, pp.1-5, 24-27 Aug. 2015. doi: 10.1109/ICITCS.2015.7292966
Abstract: This paper describes a packet length network covert channel and a violator's possibilities for building such a channel. A technique to estimate and limit the capacity of such a channel is then presented. The calculation is based on information theory statements and helps to diminish the negative effects of covert channels in information systems, e.g. data leakage.
Keywords: information theory; telecommunication channels; covert channel parameter evaluation; information theory statements; packet length network covert channel; Channel capacity; Channel estimation; IP networks; Receivers; Security; Timing (ID#: 15-8310)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292966&isnumber=7292885
Peng Yang; Hui Zhao; Zhonggui Bao, "A Probability-Model-Based Approach to Detect Covert Timing Channel," in Information and Automation, 2015 IEEE International Conference on, pp. 1043-1047, 8-10 Aug. 2015. doi: 10.1109/ICInfA.2015.7279440
Abstract: Interest in detecting covert timing channels is increasing rapidly. A great deal of work has been done on the construction and detection of covert timing channels over the Internet, but their detection remains a challenging task because legitimate network traffic is so varied that covert traffic is hard to detect and distinguish. Existing detection approaches are not effective enough to detect the variety of covert timing channels known to the security community. In this paper, we first review some typical detection methods for covert timing channels and evaluate each approach. We then introduce a new model-based approach to detecting various covert timing channels, built on a probability model in which covert timing channels have a different distribution from legitimate channels. Finally, we perform an experiment to confirm the effectiveness of our model-based approach. The experimental results show that our model-based approach is sensitive to current timing channels and is capable of detecting them accurately.
Keywords: probability; telecommunication channels; telecommunication traffic; Internet; covert timing channel detection; network traffic; probability model; Computers; Delays; Entropy; Random variables; Security; Telecommunication traffic; covert timing channel; detection; probability-model-based (ID#: 15-8311)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279440&isnumber=7279248
Rezaei, F.; Hempel, M.; Shrestha, P.L.; Rakshit, S.M.; Sharif, H., "Detecting Covert Timing Channels Using Non-Parametric Statistical Approaches," in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, pp. 102-107, 24-28 Aug. 2015. doi: 10.1109/IWCMC.2015.7289065
Abstract: The extensive availability and development of Internet applications and services open up the opportunity for abusing network and Internet resources to distribute malicious data and leak sensitive information. One of the prevalent information-hiding approaches suitable for such activities is known as the Covert Timing Channel (CTC), which modulates Inter-Packet Delays (IPDs) to embed secret data and transfer it to designated receivers. In this paper, we propose two different non-parametric statistical tests that can be employed to detect this type of covert communication activity over a network. The new detection metrics are evaluated and verified against four different and highly recognized CTC algorithms. The experimental results show that the proposed detection metrics can reliably and effectively distinguish between covert and overt traffic flows, thus significantly supporting our research toward accurate, blind and comprehensive CTC detection, a capability vital to cyber security in today's information society.
Keywords: Internet; computer network security; modulation; statistical analysis; CTC; IPD modulation; Internet resources; Internet services; covert communication activities; covert timing channel; cyber security; designated receivers; inter-packet delays modulation; malicious data; network resources; nonparametric statistical tests; overt traffic flows; Algorithm design and analysis; Delays; Entropy; Reliability; Telecommunication traffic; Covert Channel Detection; Covert Communication; Covert Timing Channel; Detection Fingerprints; Information Hiding (ID#: 15-8312)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289065&isnumber=7288920
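A standard non-parametric test of the kind the abstract describes is the two-sample Kolmogorov-Smirnov statistic. The sketch below (illustrative only, with made-up delay values; it is not the authors' detection metric) measures the maximum gap between the empirical CDFs of two sets of inter-packet delays.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Covert IPDs regularized to two values stand out against varied
# legitimate delays (all values here are made up):
legit = [0.011, 0.034, 0.120, 0.045, 0.300, 0.072, 0.150, 0.019]
covert = [0.050, 0.100, 0.050, 0.100, 0.050, 0.100, 0.050, 0.100]
print(ks_statistic(legit, covert))  # 0.5
```

A large statistic relative to a chosen threshold flags the flow as covert; being distribution-free, the test needs no model of "normal" traffic.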
Guri, M.; Monitz, M.; Mirski, Y.; Elovici, Y., "BitWhisper: Covert Signaling Channel between Air-Gapped Computers Using Thermal Manipulations," in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 276-289, 13-17 July 2015. doi: 10.1109/CSF.2015.26
Abstract: It has been assumed that the physical separation ('air-gap') of computers provides a reliable level of security, such that should two adjacent computers become compromised, the covert exchange of data between them would be impossible. In this paper, we demonstrate BitWhisper, a method of bridging the air-gap between adjacent compromised computers by using their heat emissions and built-in thermal sensors to create a covert communication channel. Our method is unique in two respects: it supports bidirectional communication, and it requires no additional dedicated peripheral hardware. We provide experimental results based on the implementation of the BitWhisper prototype, and examine the channel's properties and limitations. Our experiments included different layouts, with computers positioned at varying distances from one another, and several sensor types and CPU configurations (e.g., Virtual Machines). We also discuss signal modulation and communication protocols, showing how BitWhisper can be used for the exchange of data between two computers in close proximity (positioned 0-40 cm apart) at an effective rate of 1-8 bits per hour, a rate which makes it possible to infiltrate brief commands and exfiltrate small amounts of data (e.g., passwords) over the covert channel.
Keywords: computer network security; protocols; BitWhisper prototype; CPU configurations; air-gapped computers; bidirectional communication; built-in thermal sensors; communication protocols; computer network; covert communication channel; heat emissions; physical separation; sensor types; signal modulation; signaling channel; thermal manipulations; virtual machines; Central Processing Unit; Computers; Heating; Layout; Temperature sensors; air-gap; bridging; covert channel; exfiltration; infiltration; sensors; temperature (ID#: 15-8313)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243739&isnumber=7243713
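The reported 1-8 bits per hour puts even tiny payloads out of quick reach; a back-of-the-envelope calculation (ours, not the authors') makes the scale concrete.

```python
def exfiltration_hours(payload_bytes, bits_per_hour):
    """Hours needed to move a payload over a channel of the given rate."""
    return payload_bytes * 8 / bits_per_hour

# An 8-character (64-bit) password at BitWhisper's reported rate bounds:
print(exfiltration_hours(8, 8))  # 8.0 hours at the fastest reported rate
print(exfiltration_hours(8, 1))  # 64.0 hours at the slowest
```

This is why the authors frame the channel as suited to brief commands and small secrets rather than bulk data.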
Benedetto, F.; Giunta, G.; Liguori, A.; Wacker, A., "A Novel Method for Securing Critical Infrastructures by Detecting Hidden Flows of Data," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 648-654, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346881
Abstract: This work introduces a novel method for securing critical infrastructures. We propose an innovative hypothesis test for intrusion detection in data communications. In particular, we detect the presence or absence of a covert (i.e. hidden) timing channel. We devised a new testing procedure, namely the Weibullness test, that statistically measures how well the series under investigation (inter-arrival times of the received packets) fits Weibull vs. non-Weibull models. This is equivalent to differentiating between the cases of legitimate and covert data communications. The achieved results show the robustness of this innovative test versus the conventional shape and regularity tests, even in the presence of short-lived covert communications.
Keywords: Conferences; Data communication; Shape; Testing; Timing; Weibull distribution; Yttrium; Covert timing channels; Detection methods; Performance analysis; Regularity test; Shape test (ID#: 15-8314)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346881&isnumber=7346791
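A crude way to get at the "Weibullness" idea (a moment-based sketch under our own simplifying assumptions, not the authors' test): for Weibull-distributed inter-arrival times, the coefficient of variation determines the shape parameter k, with k = 1 recovering the exponential case typical of legitimate traffic; suspiciously regular (covert-looking) timings give a small CV and hence a large k.

```python
import math
import statistics

def weibull_shape_from_cv(sample):
    """Moment-based Weibull shape estimate: solve
    CV(k) = sqrt(gamma(1 + 2/k) / gamma(1 + 1/k)**2 - 1)
    for k by bisection (CV is strictly decreasing in k)."""
    cv = statistics.stdev(sample) / statistics.mean(sample)
    def cv_of(k):
        g1, g2 = math.gamma(1 + 1 / k), math.gamma(1 + 2 / k)
        return math.sqrt(g2 / g1 ** 2 - 1)
    lo, hi = 0.05, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if cv_of(mid) > cv:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Near-constant (suspiciously regular) delays -> large shape k;
# highly dispersed, exponential-like delays -> k near 1 (made-up values):
regular = [1.00, 1.02, 0.98, 1.01, 0.99, 1.00]
bursty = [0.05, 2.10, 0.40, 3.00, 0.10, 1.20]
print(weibull_shape_from_cv(regular) > weibull_shape_from_cv(bursty))  # True
```

The paper's actual procedure is a statistical goodness-of-fit test on the full series, not this two-moment shortcut, but the intuition is the same: legitimate inter-arrival times look Weibull, regularized covert ones do not.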
Hong Zhao; Minxiou Chen, "WLAN Covert Timing Channel Detection," in Wireless Telecommunications Symposium (WTS), 2015, pp. 1-5, 15-17 April 2015. doi: 10.1109/WTS.2015.7117246
Abstract: Wireless LANs are widely used for Internet access, making WLAN security a top priority, especially since a new type of attack, the covert-channel-based attack, surfaced over the past few years. This attack uses the different data rates provided in WLAN to transmit a secret message. Detecting this covert channel can be difficult due to the rate diversity in 802.11 WLAN: multiple transmission data rates are supported to exploit the trade-off between obtaining the highest possible data rate and minimizing the number of communication errors. In this paper, a feature model is proposed to form possible hypotheses, and then statistical hypothesis testing is applied. Simulation results on publicly available WLAN traffic show that the proposed approach can achieve a 100% detection rate.
Keywords: Internet; computer network security; message authentication; signal detection; statistical analysis; telecommunication traffic; wireless LAN; wireless channels; IEEE 802.11 standard; Internet; WLAN covert timing channel detection; WLAN traffic; communication error minimization; secret message transmission; statistic hypothesis testing; wireless local area network security; IEEE 802.11 Standards; Monitoring; Security; Testing; Timing; Wireless LAN; Wireless communication (ID#: 15-8315)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7117246&isnumber=7117237
Rezaei, Fahimeh; Hempel, Michael; Shrestha, Pradhumna Lal; Rakshit, Sushanta Mohan; Sharif, Hamid, "A Novel Covert Timing Channel Detection Approach For Online Network Traffic," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 737-738, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346911
Abstract: In this paper, we propose a novel Covert Timing Channel (CTC) detection method that leverages computationally low-cost statistical measures to precisely detect covert communication, using only minimum network traffic knowledge. The proposed detection approach utilizes three different non-parametric statistical tests to classify overt and covert inter-packet delays.
Keywords: Computers; Delays; History; Image edge detection; Knowledge engineering; Telecommunication traffic (ID#: 15-8316)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346911&isnumber=7346791
Qingfeng Tan; Jinqiao Shi; Binxing Fang; Wentao Zhang; Xuebin Wang, "Stegop2p: Oblivious User-Driven Unobservable Communications," in Communications (ICC), 2015 IEEE International Conference on, pp. 7126-7131, 8-12 June 2015. doi: 10.1109/ICC.2015.7249463
Abstract: With increasing concern about the erosion of privacy, privacy-preserving and censorship-resistance techniques are becoming more and more important. Anonymous communication techniques offer an important defense against Internet surveillance, but these techniques do not conceal themselves when used. In this paper, we propose StegoP2P, an unobservable communication system for Internet users in an overlay network that relies on innocent users' oblivious data downloading. StegoP2P works by deploying end-to-middle proxies, which inspect special steganographic flows from StegoP2P users to innocent-looking destinations and mirror them to the true destination requested by oblivious P2P users. The hidden communication is indistinguishable from normal network communications to any adversary without the private key, hence making the StegoP2P clients unobservable. We have developed a proof-of-concept application based on Vuze and conducted evaluations through experiments.
Keywords: Internet; overlay networks; peer-to-peer computing; steganography; Internet users; StegoP2P; Vuze proof-of-concept application; end-to-middle proxy; hidden communication; innocent users oblivious data downloading; innocent-looking destinations; normal network communications; oblivious user-driven unobservable communications; overlay network; steganography; Censorship; IP networks; Internet; Peer-to-peer computing; Protocols; Security; Servers; Censorship-resistant; Covert channel; Steganography; Unobservable communication (ID#: 15-8317)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249463&isnumber=7248285
Hussain, Rasheed; Kim, Donghyun; Tokuta, Alade O.; Melikyan, Hayk M.; Oh, Heekuck, "Covert Communication Based Privacy Preservation in Mobile Vehicular Networks," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 55-60, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357418
Abstract: Due to the dire consequences of privacy abuse in vehicular ad hoc networks (VANETs), a number of mechanisms have been put forth to conditionally preserve user and location privacy. To date, the multiple-pseudonym approach, where every node uses multiple temporary pseudonyms, is regarded as one of the most effective solutions. However, it has recently been found that even multiple pseudonyms can be linked to each other and to a single node, thereby jeopardizing privacy. Therefore, in this paper, we propose a novel identity exchange-based approach to preserve user privacy in VANETs, where a node exchanges its pseudonyms with its neighbors and randomly uses both its own and its neighbors' pseudonyms. Additionally, the revocation of the immediate user of a pseudonym is made possible through an efficient revocation mechanism. Moreover, the pseudonym exchange is realized through covert communication, where a side channel based on the scheduled beacons is used to establish a covert communication path between the exchanging nodes. Our proposed scheme is secure and robust, and it preserves privacy through the existing beacon infrastructure.
Keywords: Cryptography; Privacy; Standards; Transmission line measurements; Vehicles; Vehicular ad hoc networks; Beacons; Conditional Privacy; Covert Communication; Pseudonyms; VANET (ID#: 15-8318)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357418&isnumber=7357245
Ligong Wang; Wornell, G.W.; Lizhong Zheng, "Limits of Low-Probability-of-Detection Communication Over a Discrete Memoryless Channel," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 2525-2529, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282911
Abstract: This paper considers the problem of communication over a discrete memoryless channel subject to the constraint that the probability that an adversary who observes the channel outputs can detect the communication is low. Specifically, the relative entropy between the output distributions when a codeword is transmitted and when no input is provided to the channel must be sufficiently small. For a channel whose output distribution induced by the zero input symbol is not a mixture of the output distributions induced by other input symbols, it is shown that the maximum number of bits that can be transmitted under this criterion scales like the square root of the blocklength. Exact expressions for the scaling constant are also derived.
Keywords: channel coding; entropy codes; signal detection; steganography; codeword transmission; discrete memoryless channel; entropy; low-probability-of-detection communication limits; scaling constant; steganography; zero input symbol; AWGN channels; Channel capacity; Memoryless systems; Receivers; Reliability theory; Transmitters; Fisher information; Low probability of detection; covert communication; information-theoretic security (ID#: 15-8319)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282911&isnumber=7282397
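The square-root scaling stated in the abstract implies that the covert rate per channel use vanishes as the blocklength grows. A tiny numerical illustration (with an arbitrary placeholder constant c, since the exact scaling constant is channel-dependent and derived in the paper):

```python
import math

C = 1.0  # placeholder scaling constant; the paper derives the exact value per channel

def covert_bits(n):
    """Square-root law: total covert bits grow like C * sqrt(n)
    in the blocklength n, so bits per channel use tend to zero."""
    return C * math.sqrt(n)

for n in (100, 10_000, 1_000_000):
    print(n, covert_bits(n), covert_bits(n) / n)  # rate per use shrinks
```

So total covert throughput grows without bound, but only sub-linearly: there is no positive covert "capacity" in the usual bits-per-use sense.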
Mehic, M.; Slachta, J.; Voznak, M., "Hiding Data in SIP Session," in Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, pp. 1-5, 9-11 July 2015. doi: 10.1109/TSP.2015.7296445
Abstract: Steganography is a method of hiding data inside existing channels of communication. SIP is one of the key protocols used to implement Voice over IP; it is used for establishing, managing and terminating a communication session. During a call, SIP is used for changing parameters of the session as well as for the transfer of DTMF tones or instant messages. We analyzed a scenario where two users (Alice and Bob) want to exchange a hidden message via the SIP protocol. Their call is established over Kamailio, a SIP proxy server. We were interested in the number of SIP messages exchanged during a call with an average duration of 60 seconds. We then used SNORT IDS with hard-coded rules and AD.SNORT (anomaly detection) to detect irregularities while we increased the number of SIP messages. Finally, we calculated the available steganographic bandwidth, the amount of hidden data that can be transferred in these messages. The results obtained from the experiments show that it is possible to create a covert channel over SIP with a bandwidth of several kbps.
Keywords: Internet telephony; protocols; steganography; AD.SNORT; Kamailio; SIP Proxy server; SIP protocol; SIP session; SNORT IDS; Voice over IP; anomaly detection; data hiding; hard coded rules; key protocols; steganography; Bandwidth; Floods; Generators; IP networks; Protocols; Servers; Telecommunication traffic; Anomaly Detection; Kamailio; Proxy; SIP; Steganography; VoIP (ID#: 15-8320)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296445&isnumber=7296206
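The bandwidth arithmetic in the abstract is easy to reproduce; the numbers below are hypothetical placeholders, not the paper's measured values.

```python
def stego_bandwidth_bps(messages, bits_per_message, call_seconds):
    """Steganographic bandwidth: hidden bits carried per second of call."""
    return messages * bits_per_message / call_seconds

# E.g., 100 SIP messages each hiding 1200 bits over a 60-second call:
print(stego_bandwidth_bps(100, 1200, 60))  # 2000.0 bits/s, i.e. 2 kbps
```

Raising the message count is what lets the channel reach "several kbps", and it is also exactly what the SNORT/AD.SNORT monitoring in the experiment is watching for.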
Ummenhofer, M.; Schell, J.; Heckenbach, J.; Kuschel, H.; O'Hagan, D.W., "Doppler Estimation for DVB-T Based Passive Radar Systems on Moving Maritime Platforms," in Radar Conference (RadarCon), 2015 IEEE, pp. 1687-1691, 10-15 May 2015. doi: 10.1109/RADAR.2015.7131270
Abstract: PR (Passive Radar) systems using digital broadcasting services such as DVB-T (Digital Video Broadcasting - Terrestrial) transmissions as illuminators represent surveillance solutions which have rapidly evolved and matured over recent years. PR systems typically use coherent integration times on the order of a few hundred milliseconds to acquire enough dynamic range for the detection of moving objects. This is done under the assumption that both the illuminator of opportunity and the receiver stay static during this time period. However, advances in the miniaturization of high-performing computers and data storage devices have allowed the design of PR systems that are compact enough to be operated on board moving platforms such as cars [1] and airplanes [2], [3]. Deploying PR systems on maritime platforms could enable covert surveillance of small land- or sea-based targets in littoral environments. A receiver mounted on such a platform may be subjected to highly non-linear motions. In this case a reference signal generated under the assumption of a static scenario may de-correlate from the measured Doppler-shifted surveillance channel and consequently degrade the PR system's detection performance. Compensation of these detrimental effects requires highly sampled and accurate measurements of the vessel's Doppler shift with respect to the transmitter. To study these effects, a two-channel PR system for DVB-T broadcast reception was deployed on a small boat to acquire platform motion data in a littoral environment. Based on the data gathered in this trial, a robust method for Doppler estimation was developed, which uses the OFDM (Orthogonal Frequency Division Multiplexing) signal features of the DVB-T standard. The validity of this approach is verified with data gathered simultaneously from the onboard IMU (inertial measurement unit).
Keywords: Doppler radar; OFDM modulation; marine radar; object detection; passive radar; radar receivers; radar transmitters; DVB-T based passive radar systems; Doppler estimation; Doppler shifted surveillance channel; IMU; OFDM; PR systems detection performance; data storage devices; digital broadcasting services; digital versatile broadcasting terrestrial transmission; internal motion units data; littoral environment; maritime platforms; nonlinear motions; object detection; orthogonal frequency division multiplexing; receiver; reference signal generation; signal features; transmitter; Clocks; Digital video broadcasting; Doppler effect; OFDM; Passive radar; Receivers; Transmitters (ID#: 15-8321)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131270&isnumber=7130933
Yichao Jia; Guangjie Liu; Lihua Zhang, "Bionic Camouflage Underwater Acoustic Communication Based on Sea Lion Sounds," in Control, Automation and Information Sciences (ICCAIS), 2015 International Conference on, pp. 332-336, 29-31 Oct. 2015. doi: 10.1109/ICCAIS.2015.7338688
Abstract: In military confrontation, traditional underwater acoustic communication techniques with fixed frequency and modulation schemes are likely to expose the submarine's position. Using sea background noise as the carrier for communication helps enhance the submarine's concealment. In this paper, a novel covert underwater communication method based on sea lion sounds is proposed. Properties of sea lion sounds are investigated first. Based on the analysis, the sea lion click sound is used as the information carrier and whistles are used for synchronization. Information is modulated onto the compressed click using the dual-orthogonal modulation method. To improve the receiving SNR, channel equalization is performed by the passive time-reversal mirror technique, while channel estimation is done through the matching pursuit method under the theory of compressed sensing. The efficiency and feasibility of the proposed method are verified by simulation.
Keywords: biocybernetics; channel estimation; compressed sensing; frequency modulation; military communication; synchronisation; underwater acoustic communication; wireless channels; bionic camouflage underwater acoustic communication; channel equalization; channel estimation; compressed sensing theory; dual-orthogonal modulation method; frequency modulation; matching pursuit method; military confrontation; passive time reversal mirror technique; receiving SNR improvement; sea background noise; sea lion sound; submarine concealment enhancement; submarine position exposure; synchronization; Channel estimation; Correlation; Frequency modulation; Matching pursuit algorithms; Synchronization; Underwater acoustics; bionic; covert; sea lion; underwater acoustic communication (ID#: 15-8322)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338688&isnumber=7338636
Cryptanalysis 2015
Cryptanalysis is a core function for cybersecurity research. The work cited below looks at issues related to the Science of Security including cyber physical systems, composability, resilience, and metrics. These works appeared in 2015.
Kokes, J.; Lorencz, R., "Linear cryptanalysis of Baby Rijndael," in e-Technologies and Networks for Development (ICeND), 2015 Fourth International Conference on, pp. 1-6, 21-23 Sept. 2015. doi: 10.1109/ICeND.2015.7328533
Abstract: We present results of linear cryptanalysis of Baby Rijndael, a reduced-size model of Rijndael. The results were obtained using exhaustive search of all approximations and all keys and show some curious properties of both linear cryptanalysis and Baby Rijndael, particularly the existence of different classes of linear approximations with significantly different success rates of recovery of the cipher's key.
Keywords: approximation theory; cryptography; Baby Rijndael; Rijndael reduced-size model; cipher key recovery; exhaustive search; linear approximation; linear cryptanalysis; Algorithm design and analysis; Approximation algorithms; Ciphers; Linear approximation; Pediatrics; Baby Rijndael; Linear cryptanalysis; key recovery; linear approximations; success rate (ID#: 15-8401)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328533&isnumber=7328528
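The "success rates of different classes of linear approximations" that the abstract mentions hinge on the bias of each approximation. The sketch below computes that bias exhaustively for a 4-bit toy S-box (the classic S-box from Heys' linear-cryptanalysis tutorial, not Baby Rijndael's own S-box):

```python
# Toy 4-bit S-box from Heys' linear cryptanalysis tutorial (not Baby Rijndael's).
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def parity(x):
    return bin(x).count("1") & 1

def bias(in_mask, out_mask):
    """Bias of the linear approximation <in_mask, x> = <out_mask, S(x)>:
    (#matches over all 16 inputs) / 16 - 1/2."""
    hits = sum(parity(x & in_mask) == parity(SBOX[x] & out_mask)
               for x in range(16))
    return hits / 16 - 0.5

# The well-known approximation X2^X3 = Y1^Y3^Y4 holds for 12 of 16 inputs:
print(bias(0b0110, 0b1011))  # 0.25
```

Approximations with larger absolute bias need fewer known plaintexts to recover key bits, which is why different classes of approximations yield the different recovery success rates the paper observes.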
Divya, R.; Muthukumarasamy, S., "An Impervious QR-Based Visual Authentication Protocols to Prevent Black-Bag Cryptanalysis," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-6, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282330
Abstract: Black-bag cryptanalysis is used to acquire cryptographic secrets from target computers and devices through burglary or the covert installation of keylogging and Trojan horse hardware/software. To overcome black-bag cryptanalysis, secure authentication protocols are required. This work mainly focuses on keylogging, where keylogger hardware or software is used to capture the client's keyboard strokes and intercept the password. Various rootkits residing in PCs (personal computers) are considered that observe the client's behavior and breach security. QR codes can be used to design visual authentication protocols that achieve high usability and security. The two authentication protocols are a time-based one-time-password protocol and a password-based authentication protocol. Through accurate analysis, the protocols are proved to be robust against several authentication attacks. Furthermore, by deploying these two protocols in real-world applications, especially online transactions, strict security requirements can be satisfied.
Keywords: QR codes; cryptographic protocols; invasive software; message authentication; QR code; QR-based visual authentication protocol; Trojan horse hardware/software; authentication attack; black-bag cryptanalysis; burglary; covert installation; cryptographic secret; keylogger hardware; keylogger software; keylogging; online transaction; password-based authentication protocol; personnel computer; secure authentication protocol; time based one-time-password protocol; Encryption; Hardware; Keyboards; Personnel; Protocols; Robustness; Android; Attack; Authentication; Black-bag cryptanalysis; Keylogging; Malicious code; Pharming; Phishing; QR code; Session hijacking; visualization (ID#: 15-8402)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282330&isnumber=7282219
Madhusudan, R.; Valiveti, A., "Cryptanalysis of Remote User Authentication Scheme with Key Agreement," in Computer, Communications, and Control Technology (I4CT), 2015 International Conference on, pp. 476-480, 21-23 April 2015. doi: 10.1109/I4CT.2015.7219623
Abstract: Password authentication with a smart card is one of the most convenient and effective two-factor authentication mechanisms for remote systems to assure one communicating party of the legitimacy of the corresponding party by the acquisition of corroborative evidence. This technique has been widely deployed for various kinds of authentication applications, such as remote host login, online banking, e-commerce and e-health. Recently, Kumari et al. presented a dynamic-identity-based user authentication scheme with session key agreement. In this research, we illustrate that Kumari et al.'s scheme violates the purpose of dynamic identity, contrary to the authors' claim. We show that once the smart card of an arbitrary user is lost, the messages of all registered users are at risk. Using information from an arbitrary smart card, an adversary can impersonate any user of the system.
Keywords: cryptography; message authentication; smart cards; corroborative evidence acquisition; cryptanalysis; dynamic-identity-based user authentication scheme; password authentication; remote user authentication scheme; session key agreement; smart card; two-factor authentication mechanisms; Authentication; Bismuth; Nickel; Servers; Silicon; Smart cards; Smartcard; authentication; cryptanalysis; dynamic-id based authentication scheme (ID#: 15-8403)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219623&isnumber=7219513
Kexin Qiao; Lei Hu; Siwei Sun; Xiaoshuang Ma, "Related-Key Rectangle Cryptanalysis of Reduced-Round Block Cipher MIBS," in Application of Information and Communication Technologies (AICT), 2015 9th International Conference on, pp. 216-220, 14-16 Oct. 2015. doi: 10.1109/ICAICT.2015.7338549
Abstract: A related-key rectangle attack treats a block cipher as a cascade of two sub-ciphers to construct distinguishers. In this paper, by introducing related-key differential characteristics with high probability for each sub-cipher, we construct a distinguisher for 13-round MIBS80, a Feistel block cipher with key length of 80 bits, and launch a key-recovery attack on 15-round MIBS80 with time complexity of 267 and data complexity of 249. A similar attack is also launched on 13-round MIBS64, a version of the cipher with 64-bit keys. This is the first and a textbook related-key rectangle cryptanalysis on MIBS block cipher.
Keywords: computational complexity; cryptography; 13-round MIBS80; 15-round MIBS80; 64-bit keys; Feistel block cipher; data complexity; key-recovery attack; reduced-round block cipher MIBS; related-key rectangle cryptanalysis; time complexity; word length 64 bit; word length 80 bit; Ciphers; Computational modeling; Lead; Schedules; Time complexity; MIBS block cipher; rectangle attack; rectangle distinguisher; related-key differential attack (ID#: 15-8404)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338549&isnumber=7338496
Ergun, S., "Cryptanalysis of a Double Scroll Based “True” Random Bit Generator," in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on, pp. 1-4, 2-5 Aug. 2015. doi: 10.1109/MWSCAS.2015.7282066
Abstract: An algebraic cryptanalysis of a “true” random bit generator (RBG) based on a double-scroll attractor is provided. An attack system is proposed to analyze the security weaknesses of the RBG. Convergence of the attack system is proved using synchronization of chaotic systems with unknown parameters, called auto-synchronization. All secret parameters of the RBG are recovered from a scalar time series using auto-synchronization, where the only other information available is the structure of the RBG and the output bit sequence it produces. Simulation and numerical results verifying the feasibility of the attack system are given. The RBG does not pass the NIST-800-22 statistical test suite; the next bit can be predicted, and the output bit stream of the RBG can be reproduced.
Keywords: cryptography; random number generation; synchronisation; RBG; algebraic cryptanalysis; attack system; attack system convergence; autosynchronization; chaotic system synchronization; double-scroll attractor; double-scroll based random bit generator; output bit sequence; output bit stream; scalar time series; secret parameter recovery; security weaknesses analysis; unknown parameters; Chaotic communication; Generators; Oscillators; Random number generation; Synchronization (ID#: 15-8405)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282066&isnumber=7281994
Chun-Ta Li; Cheng-Chi Lee; Hua-Hsuan Chen; Min-Jie Syu; Chun-Cheng Wang, "Cryptanalysis of an Anonymous Multi-Server Authenticated Key Agreement Scheme Using Smart Cards and Biometrics," in Information Networking (ICOIN), 2015 International Conference on, pp. 498-502, 12-14 Jan. 2015. doi: 10.1109/ICOIN.2015.7057955
Abstract: With the growing popularity of network applications, multi-server architectures are becoming an essential part of heterogeneous networks, and numerous security mechanisms have been widely studied in recent years. To protect sensitive information and restrict access to precious services to legal privileged users only, smart card and biometrics based password authentication schemes have been widely utilized for various transaction-oriented environments. In 2014, Chuang and Chen proposed an anonymous multi-server authenticated key agreement scheme based on trust computing using smart cards, passwords, and biometrics. They claimed that their three-factor scheme achieves better efficiency and security than other existing biometrics-based and multi-server schemes. Unfortunately, in this paper, we found that the user anonymity of Chuang and Chen's authentication scheme cannot be protected from an eavesdropping attack during the authentication phase. Moreover, their scheme is vulnerable to smart card loss problems, many logged-in users' attacks and denial-of-service attacks, and is not easily reparable.
Keywords: biometrics (access control); cryptography; message authentication; smart cards; trusted computing; anonymous multiserver authenticated key agreement scheme; biometrics; cryptanalysis; denial-of-service attacks; eavesdropping attack; password authentication; smart card loss problems; trusted computing; user anonymity; Authentication; Biometrics (access control); Computer crime; Cryptography; Servers; Smart cards; Anonymity; Authentication; Biometrics; Cryptanalysis; Multi-server; Password; Smart cards (ID#: 15-8406)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057955&isnumber=7057846
Harikrishnan, T.; Babu, C., "Cryptanalysis of Hummingbird Algorithm with Improved Security and Throughput," in VLSI Systems, Architecture, Technology and Applications (VLSI-SATA), 2015 International Conference on, pp. 1-6, 8-10 Jan. 2015. doi: 10.1109/VLSI-SATA.2015.7050460
Abstract: Hummingbird is a lightweight authenticated cryptographic encryption algorithm suitable for resource-constrained devices such as RFID tags, smart cards, and wireless sensors. The key issue in designing such a cryptographic algorithm is the trade-off among security, cost, and performance, and finding an optimal cost-performance ratio. This paper attempts to find an efficient hardware implementation of the Hummingbird cryptographic algorithm that improves security and throughput by adding hash functions. We have implemented an encryption and decryption core on a Spartan 3E and compared the results with existing lightweight cryptographic algorithms. The experimental results show that this algorithm has higher security and throughput, with improved area, than the existing algorithms.
Keywords: cryptography; telecommunication security; Hash functions; RFID tags; Spartan 3E; decryption core; hummingbird algorithm cryptanalysis; hummingbird cryptographic algorithm; lightweight authenticated cryptographic encryption algorithm; optimal cost-performance ratio; resource constrained devices; security; smart cards; wireless sensors; Authentication; Ciphers; Logic gates; Protocols; Radiofrequency identification; FPGA Implementation; Lightweight Cryptography; Mutual authentication protocol; Security analysis (ID#: 15-8407)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050460&isnumber=7050449
Chia-Mei Chen; Tien-Ho Chang, "The Cryptanalysis of WPA & WPA2 in the Rule-Based Brute Force Attack, an Advanced and Efficient Method," in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, pp. 37-41, 24-26 May 2015. doi: 10.1109/AsiaJCIS.2015.14
Abstract: Mobile devices of all kinds are developing in a nonlinear, rapidly accelerating way, making the security of wireless LANs all the more important. Their main present-day protection is the WPA/WPA2 protocol, a complex and robust algorithm. This exploratory study shows that a security gap remains due to the social human factor of weak passwords. Traditionally, a brute-force password attack works through dictionary files, an aimless and extremely laborious approach. We propose 10 rule-based methods, globally inclusive and culturally exclusive, and demonstrate the insecurity of WPA/WPA2 using 100 real captured WPA/WPA2-encrypted wireless packets. The evidence shows a 68% cracking rate, and we analyze the recovered password patterns as well.
Keywords: computer network security; cryptographic protocols; mobile computing; mobile handsets; wireless LAN;WPA protocol;WPA2 protocol; brute force password attack; complex tough algorithm; cracking rate; cryptanalysis; dictionary files; mobile device; passwords patterns; rule-based brute force attack; rule-based methods; security gap; social human factors; weak passwords; wireless LAN; wireless encrypted packets; Communication system security; Dictionaries; Encryption; Force; Wireless LAN; Wireless communication; brute force attack; cryptanalysis in WPA & WPA2; dictionary attack; rule-based; wireless security (ID#: 15-8408)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153933&isnumber=7153836
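The rule-based candidate generation this abstract contrasts with aimless dictionary traversal can be sketched in a few lines. The three mutation rules below are illustrative assumptions, not the paper's ten rules:

```python
# Toy sketch of rule-based password candidate generation, in the spirit of
# the rule-based brute-force approach described above. The three mutation
# rules here are illustrative assumptions, not the paper's actual rules.

def capitalize(word):          # rule 1: capitalize the first letter
    return word.capitalize()

def append_digits(word):       # rule 2: append common digit suffixes
    return [word + s for s in ("1", "123", "2015")]

def leetspeak(word):           # rule 3: simple character substitutions
    return word.replace("a", "@").replace("o", "0").replace("e", "3")

def candidates(base_words):
    """Expand each base word through all rules, deduplicated."""
    out = set()
    for w in base_words:
        out.add(w)
        out.add(capitalize(w))
        out.add(leetspeak(w))
        out.update(append_digits(w))
    return sorted(out)

print(candidates(["password"]))
```

Each rule encodes one guess about human password habits, so a small base wordlist expands into a focused candidate set instead of an exhaustive dictionary sweep.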
Wicik, R.; Gliwa, R.; Komorowski, P., "Cryptanalysis of Alternating Step Generators," in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, pp. 1-6, 18-19 May 2015. doi: 10.1109/ICMCIS.2015.7158683
Abstract: Alternate clocking of linear feedback shift registers is a popular technique for increasing the linear complexity of binary sequences produced by keystream generators designed for stream ciphers. Analysis of the best known attacks on the alternating step generator led us to add nonlinear filtering functions and a nonlinear scrambler to the construction. In this paper we give the complexities of these attacks when applied to the modified alternating step generator with nonlinear filters and the scrambler. We also suggest minimum register lengths for the original alternating step generator to make it resistant to these attacks.
Keywords: binary sequences; communication complexity; cryptography; function generators; nonlinear filters; shift registers; alternate clocking; alternating step generator; binary sequences; cryptanalysis; keystream generators; linear complexity; linear feedback shift registers; nonlinear filtering functions; nonlinear scrambler; stream cipher; Clocks; Correlation; Generators; Shift registers; Time complexity; feedback shift register; keystream generator; stream cipher (ID#: 15-8409)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158683&isnumber=7158667
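The alternating step generator the abstract analyzes clocks one of two data LFSRs according to a control LFSR and XORs their outputs. A minimal sketch follows; the 5-bit register lengths and tap positions are illustrative toy values, not the paper's recommended parameters:

```python
# Minimal sketch of an alternating step generator (ASG): a control LFSR
# decides at each step which of two data LFSRs is clocked; the keystream
# bit is the XOR of the two data LFSRs' output bits. Register lengths and
# tap positions are illustrative, not the paper's recommendations.

def lfsr(state, taps, nbits):
    """One Fibonacci LFSR step; returns (new_state, output_bit)."""
    out = state & 1
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    state = (state >> 1) | (fb << (nbits - 1))
    return state, out

def asg_keystream(n, c=0b10110, a=0b11010, b=0b01101):
    """Generate n keystream bits from a toy ASG with 5-bit registers."""
    bits = []
    for _ in range(n):
        c, ctrl = lfsr(c, (0, 2), 5)      # control register always steps
        if ctrl:
            a, _ = lfsr(a, (0, 1), 5)     # clock data LFSR A
        else:
            b, _ = lfsr(b, (0, 3), 5)     # clock data LFSR B
        bits.append((a & 1) ^ (b & 1))    # XOR of the two output bits
    return bits

print(asg_keystream(16))
```

The irregular clocking is what raises the linear complexity of the output over that of either LFSR alone, and it is exactly this structure the paper's attacks target.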
Yongming Jin; Hongsong Zhu; Zhiqiang Shi; Xiang Lu; Limin Sun, "Cryptanalysis and Improvement of Two RFID-OT Protocols Based on Quadratic Residues," in Communications (ICC), 2015 IEEE International Conference on, pp. 7234-7239, 8-12 June 2015. doi: 10.1109/ICC.2015.7249481
Abstract: The ownership transfer of an RFID tag means a tagged product changes control over the supply chain. Recently, Doss et al. proposed two secure RFID tag ownership transfer (RFID-OT) protocols based on quadratic residues. However, we find that they are vulnerable to a desynchronization attack. The attack is probabilistic; with the parameters adopted in the protocols, its success probability is 93.75%. We also show that the use of the tag pseudonym h(TID) and the new secret key KTID is not feasible. To solve these problems, we propose improved schemes. Security analysis shows that the new protocols resist the desynchronization attack as well as other attacks. With their performance optimized, the new protocols are more practical and feasible for large-scale deployment of RFID tags.
Keywords: cryptographic protocols; probability; radiofrequency identification; supply chains; RFID-OT protocol improvement; cryptanalysis; desynchronization attack; probability; quadratic residue; radio frequency identification; secure RFID tag ownership transfer protocol; security analysis; supply chain; Cryptography; Information systems; Privacy; Protocols; Radiofrequency identification; Servers; Ownership Transfer; Protocol; Quadratic Residues; RFID; Security (ID#: 15-8410)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249481&isnumber=7248285
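The quadratic-residue foundation these protocols rest on can be illustrated with toy numbers: squaring modulo n = pq is easy for anyone, but extracting square roots requires the secret factors. The tiny primes below are illustrative only:

```python
# Quadratic-residue sketch: the protocols discussed above rest on the fact
# that computing x^2 mod n is easy, while recovering x from x^2 mod n
# (n = p*q, with p and q secret primes) requires knowing the factors.
# Tiny primes are used here for illustration only.

p, q = 11, 19
n = p * q

def square(x):
    return (x * x) % n

def sqrt_mod_prime(a, pr):
    """Square roots of a modulo a prime pr (brute force, toy sizes only)."""
    return [x for x in range(pr) if (x * x) % pr == a % pr]

c = square(42)                 # easy for anyone who knows n
# With p and q, the verifier can invert the square via the CRT;
# without them, no efficient method is known for large n.
roots_p = sqrt_mod_prime(c, p)
roots_q = sqrt_mod_prime(c, q)
assert 42 % p in roots_p and 42 % q in roots_q
```

In the real protocols, n is a large RSA-size modulus, so the brute-force root search above is infeasible for anyone but the party holding p and q.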
Upadhyay, D.; Shah, T.; Sharma, P., "Cryptanalysis of Hardware Based Stream Ciphers and Implementation of GSM Stream Cipher to Propose A Novel Approach for Designing N-Bit LFSR Stream Cipher," in VLSI Design and Test (VDAT), 2015 19th International Symposium on, pp. 1-6, 26-29 June 2015. doi: 10.1109/ISVDAT.2015.7208129
Abstract: With the increasing use of network applications, security has become a major issue, and strong encryption mechanisms are required for securing important data. Such encryption is provided by a strong cipher capable of producing a highly random sequence of pseudo-random numbers. In this paper, we present a detailed study of existing LFSR (Linear Feedback Shift Register) based hardware ciphers and an experimental approach to implementing the A5/1 algorithm on a hardware platform. From this detailed study, a generic cipher compatible with various network applications, such as smart cards, mobile phones, and wireless LANs, is proposed.
Keywords: cellular radio; cryptography; random sequences; shift registers; telecommunication security; A5/1 algorithm; GSM stream cipher; hardware based stream cipher cryptanalysis; linear feedback shift register; n-bit LFSR stream cipher designing; pseudo-random numbers; random sequence; Authentication; Ciphers; Clocks; Encryption; Hardware; Logic gates; A5/1; Cipher; LAN (Local Area Network); LFSR (Linear Feedback Shift Register) (ID#: 15-8411)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208129&isnumber=7208044
Arnaud, B.; Nicolas, B.; Eric, F., "Automatic Search for a Maximum Probability Differential Characteristic in a Substitution-Permutation Network," in System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 5165-5174, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.610
Abstract: The algorithm presented in this paper computes a maximum probability differential characteristic in a Substitution-Permutation Network (SPN). Such characteristics can be used to prove that a cipher is practically secure against differential cryptanalysis or, on the contrary, to build the most effective possible attack. Running in just a few seconds on 64- or 128-bit SPNs, our algorithm is an important tool for both cryptanalysts and designers of SPNs.
Keywords: cryptography; probability; SPN; automatic search; cipher; differential cryptanalysis; maximum probability differential characteristic; substitution-permutation network; Algorithm design and analysis; Ciphers; Complexity theory; Encryption; Optimization; Cryptanalysis; Software security; Substitution-Permutation Network; software assurance (ID#: 15-8412)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070434&isnumber=7069647
Yuanwen Huang; Chattopadhyay, A.; Mishra, P., "Trace Buffer Attack: Security Versus Observability Study in Post-Silicon Debug," in Very Large Scale Integration (VLSI-SoC), 2015 IFIP/IEEE International Conference on, pp. 355-360, 5-7 Oct. 2015. doi: 10.1109/VLSI-SoC.2015.7314443
Abstract: Since the standardization of the AES/Rijndael symmetric-key cipher by NIST in 2001, it has gained widespread acceptance in various protocols and withstood intense scrutiny from theoretical cryptanalysts. From the physical implementation point of view, however, AES remains vulnerable: practical attacks on AES via fault injection, differential power analysis, scan chains, and cache-access timing have been demonstrated so far. Along this line, in this paper, we propose a novel and effective attack, termed the Trace Buffer Attack. Trace buffers are extensively used for post-silicon debug of digital designs. We identify them as a source of information leakage and show that, unless proper countermeasures are taken, the Trace Buffer Attack is capable of partially recovering the secret keys of different AES implementations. We report the detailed process of the trace buffer attack with experimental results, and we propose a countermeasure to avoid such attacks.
Keywords: buffer storage; cryptography; observability; AES cipher; NIST; Rijndael symmetric-key cipher standardization; cache-access timing; differential power analysis; digital design; fault injection; information leakage; observability; post-silicon debug; scan-chain; secret key partial recovery; security countermeasure; theoretical cryptanalysis; trace buffer attack; Ciphers; Encryption; Memory management; Observability; Registers; AES; Cryptanalysis; Cryptography; Post-silicon Debug; Trace Buffer (ID#: 15-8413)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314443&isnumber=7314373
Brodic, D.; Milivojevic, Z.N.; Maluckov, C.A., "Characterization of the Script Using Adjacent Local Binary Patterns," in Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, pp. 1-4, 9-11 July 2015. doi: 10.1109/TSP.2015.7296388
Abstract: This paper proposes an algorithm for script identification using adjacent local binary patterns (ALBP). In the first phase, each letter in the text is modeled with a so-called script type, based on its status in the baseline area. Then, feature extraction is performed with the adjacent local binary pattern. According to the ALBP, the distinctive features of the script are established and stored for further analysis. Because of differences in script characteristics, the analysis shows significant diversity between different scripts, which forms the key point for the decision-making process of script identification. The proposed method is tested on old Slavic printed documents containing Latin and Glagolitic script. The results of the experiments are encouraging.
Keywords: cryptography; decision making; feature extraction; natural language processing; statistical analysis; text analysis; ALBP; Glagolitic script; Latin script; adjacent local binary patterns; cryptanalysis; decision-making process; feature extraction; old Slavic printed documents; script characterization; script identification; Algorithm design and analysis; Ciphers; Databases; Feature extraction; Hafnium; Statistical analysis; adjacent local binary pattern; cryptanalysis; script recognition; statistical analysis (ID#: 15-8414)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296388&isnumber=7296206
Ergun, Salih, "On the Security of a Double-Scroll Based "True" Random Bit Generator," in Signal Processing Conference (EUSIPCO), 2015 23rd European, pp. 2058-2061, Aug. 31 2015-Sept. 4 2015. doi: 10.1109/EUSIPCO.2015.7362746
Abstract: This paper is on the security of a true random bit generator (RBG) based on a double-scroll attractor. A clone system is proposed to analyze the security weaknesses of the RBG, and its convergence is proved using a master-slave synchronization scheme. All secret parameters of the RBG are revealed where the only information available is the structure of the RBG and a scalar time series observed from the double-scroll attractor. Simulation and numerical results verifying the feasibility of the clone system are given, showing that the RBG does not pass the NIST 800-22 statistical test suite and that not only the next bit but also the same output bit stream of the RBG can be reproduced.
Keywords: Chaos; Cloning; Generators; Random number generation; Synchronization; Random number generator; continuous-time chaos; cryptanalysis; synchronization of chaotic systems; truly random (ID#: 15-8415)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7362746&isnumber=7362087
Phuong Ha Nguyen; Sahoo, D.P.; Chakraborty, R.S.; Mukhopadhyay, D., "Efficient Attacks on Robust Ring Oscillator PUF with Enhanced Challenge-Response Set," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, pp. 641-646, 9-13 March 2015. Doi: (not provided)
Abstract: Physically Unclonable Function (PUF) circuits are an important class of hardware security primitives that promise a paradigm shift in applied cryptography. Ring Oscillator PUF (ROPUF) is an important PUF variant, but it suffers from hardware overhead limitations, which in turn restricts the size of its challenge space. To overcome this fundamental shortcoming, improved ROPUF variants based on the subset selection concept have been proposed, which significantly “expand” the challenge space of a ROPUF at acceptable hardware overhead. In this paper, we develop cryptanalytic attacks on a previously proposed low-overhead and robust ROPUF variant. The proposed attacks are practical as they have quadratic time and data complexities in the worst case. We demonstrate the effectiveness of the proposed attack by successfully attacking a public domain dataset acquired from FPGA implementations.
Keywords: copy protection; cryptography; field programmable gate arrays; oscillators; FPGA; PUF circuits; ROPUF; challenge-response set; cryptanalytic attacks; cryptography; data complexities; hardware security primitives; physically unclonable function circuits; public domain dataset; quadratic time; ring oscillator PUF; Algorithm design and analysis; Complexity theory; Cryptography; Hardware; Prediction algorithms; Ring oscillators; Cryptanalysis; hardware-intrinsic security; physically unclonable function (PUF); ring oscillator PUF (ROPUF) (ID#: 15-8416)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092468&isnumber=7092347
Sbiaa, Fatma; Baganne, Adel; Zeghid, Medien; Tourki, Rached, "A New Approach for Encryption System Based on Block Cipher Algorithms and Logistic Function," in Systems, Signals & Devices (SSD), 2015 12th International Multi-Conference on, pp. 1-5, 16-19 March 2015. doi: 10.1109/SSD.2015.7348107
Abstract: In this paper, a new approach for an encryption system based on a block cipher algorithm and a logistic function is proposed. The main goal of the present work is to study the weaknesses of different operating modes in order to propose appropriate modifications. The experimental results show that the proposed modifications can be easily implemented and do not require significant power consumption or hardware area. In addition, the security analysis demonstrated the new algorithms' resistance to statistical and differential attacks and their strong initial key sensitivity.
Keywords: Chaos; Decision support systems; Indexes; Sensitivity; AES; Chaos; Security analysis; Symmetric cryptography; attacks; block cipher; cryptanalysis; operating modes; update function (ID#: 15-8417)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348107&isnumber=7348090
Alabaichi, A.; Salih, A.I., "Enhance Security of Advance Encryption Standard Algorithm Based on Key-Dependent S-Box," in Digital Information Processing and Communications (ICDIPC), 2015 Fifth International Conference on, pp. 44-53, 7-9 Oct. 2015. doi: 10.1109/ICDIPC.2015.7323004
Abstract: Cryptographic algorithms uniquely define the mathematical steps required to encrypt and decrypt messages in a cryptographic system; in short, they protect data from unauthorized access. Encryption is a crucial technique for protecting important electronic information, allowing two parties to communicate while preventing unauthorized parties from accessing the information. The encryption process must be dynamic in nature to ensure protection from the novel and advanced techniques used by cryptanalysts. The substitution box (S-box) is a fundamental component of contemporary symmetric cryptosystems, as it provides nonlinearity and enhances their security. This paper discusses an enhancement of the AES algorithm and describes the generation of dynamic S-boxes for the Advanced Encryption Standard (AES). The generated S-boxes are dynamic and key-dependent, which makes differential and linear cryptanalysis more difficult. NIST randomness tests and correlation coefficient analysis were conducted on the proposed dynamic AES algorithm; the results show that it is superior to the original AES, with its security verified.
Keywords: authorisation; cryptography; data protection; AES algorithm; NIST randomness tests; advance encryption standard algorithm; contemporary symmetric cryptosystems; correlation coefficient; cryptographic algorithms; cryptographic system; data protection; differential cryptanalysis; dynamic S-boxes; electronic information protection; information access; key-dependent S-box; linear cryptanalysis; messages decryption; security; substitution box; unauthorized access; unauthorized parties; Ciphers; Correlation coefficient; Encryption; Heuristic algorithms; Standards; AES;NIST test; S-box; correlation coefficient; dynamic S-box; inverse S-box; permutation (ID#: 15-8418)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7323004&isnumber=7322996
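One common way to build a key-dependent S-box of the kind this abstract describes is to permute the identity mapping 0..255 with a key-seeded shuffle, which preserves bijectivity. The sketch below is an illustrative construction, not the paper's exact generation method:

```python
# Illustrative key-dependent S-box: permute 0..255 with a key-seeded
# Fisher-Yates shuffle. This is one common construction, not the exact
# method proposed in the paper above.
import hashlib

def key_dependent_sbox(key: bytes):
    seed = hashlib.sha256(key).digest()
    # Deterministic byte stream derived from the key (illustrative KDF).
    stream = b"".join(hashlib.sha256(seed + bytes([i])).digest()
                      for i in range(32))          # 1024 bytes, enough for 255 swaps
    sbox = list(range(256))
    pos = 0
    for i in range(255, 0, -1):                    # Fisher-Yates shuffle
        j = stream[pos] % (i + 1)
        pos += 1
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

sbox = key_dependent_sbox(b"secret key")
inv = [0] * 256
for i, v in enumerate(sbox):
    inv[v] = i                                     # inverse S-box for decryption

assert sorted(sbox) == list(range(256))            # still a bijection
```

Because the permutation depends on the key, an attacker cannot precompute the difference-distribution or linear-approximation tables that classical differential and linear cryptanalysis rely on.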
Islam, C.S.; Mollah, M.S.H., "Timing SCA Against HMAC to Investigate from the Execution Time of Algorithm Viewpoint," in Informatics, Electronics & Vision (ICIEV), 2015 International Conference on, pp. 1-6, 15-18 June 2015. doi: 10.1109/ICIEV.2015.7333988
Abstract: Phasor Measurement Units (PMUs), or synchrophasors, are rapidly being deployed in the smart grid with the goal of measuring phasor quantities concurrently from wide-area distribution substations. There are a variety of security attacks on the PMU communications infrastructure; the timing Side Channel Attack (SCA) is one of them. In this paper, the timing side-channel vulnerability arising from the execution time of the HMAC-SHA1 authentication algorithm is considered. Both linear and negative binomial regression are used to model security features of the stored key, e.g., its length and Hamming weight. The goal is to reveal secret-related information based on leakage models; the results would ease the cryptanalysis process for an attacker.
Keywords: phasor measurement; regression analysis; substations; HMAC-SHA1 authentication algorithm; Hamming weight; PMU communications infrastructure; cryptanalysis process; linear binomial regression; negative binomial regression; phasor measurement units; secret-related information; security attacks; synchrophasors; timing SCA; timing side channel attack; timing side channel vulnerability; wide area distribution substations; Authentication; Data models; Hamming weight; Linear regression; Phasor measurement units; Predictive models; Timing; PMU; Phasor; hamming weight; side Channel Attack; smart grid; timing Attack (ID#: 15-8419)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7333988&isnumber=7333967
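The class of timing leak the paper studies arises when secret-dependent work, such as an early-exit comparison, changes execution time. The standard mitigation for MAC verification is a constant-time compare. The key and message below are illustrative, not from the paper's PMU setup:

```python
# Sketch of the timing-leak class studied above, with the standard
# mitigation. The key and message are illustrative values only.
import hmac, hashlib

key = b"shared PMU key"          # illustrative shared secret
msg = b"phasor frame 42"

tag = hmac.new(key, msg, hashlib.sha1).digest()

def verify_leaky(msg, candidate):
    # '==' may exit at the first differing byte, so timing can leak
    # how long a forged tag's matching prefix is.
    return hmac.new(key, msg, hashlib.sha1).digest() == candidate

def verify_constant_time(msg, candidate):
    # compare_digest runs in time independent of where the bytes differ.
    return hmac.compare_digest(hmac.new(key, msg, hashlib.sha1).digest(),
                               candidate)

assert verify_constant_time(msg, tag)
assert not verify_constant_time(msg, b"\x00" * 20)
```

Both verifiers return the same boolean results; only their timing behavior differs, which is exactly the side channel a regression model like the paper's can exploit.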
Ghosh, S.; Chowdhury, D.R., "Preventing Fault Attack on Stream Cipher Using Randomization," in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, pp. 88-91, 5-7 May 2015. doi: 10.1109/HST.2015.7140243
Abstract: Fault attacks are among the most popular side channel attacks and have been mounted successfully on numerous stream ciphers. Almost all the winners of the eSTREAM project have been cryptanalyzed using fault attack techniques, even though they were shown to be secure against algebraic cryptanalysis techniques. Besides, very little research exists in the contemporary literature on preventing fault attacks on stream ciphers, and most of it is attack-specific. This necessitates a generalized fault attack prevention technique for stream ciphers. In this paper, fault attacks on stream ciphers are formalized and a generalized approach to thwarting such attacks is proposed using fault randomization. It is also proved that the proposed countermeasure nullifies the advantage of performing fault analysis. We validate our scheme using Grain-128 as the crypto primitive, along with an FPGA implementation.
Keywords: cryptography; FPGA implementation; algebraic cryptanalysis techniques; eSTREAM project; fault attack techniques; fault randomization; side channel attacks; stream cipher; Boolean functions; Ciphers; Hardware; Probabilistic logic; Silicon; DFA; Fault Randomization; Grain; Infective Countermeasure; Stream Cipher (ID#: 15-8420)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140243&isnumber=7140225
Junhan Yang; Bo Su, "IB-KEM Based Password Authenticated Key Exchange Protocol," in Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, pp. 1-6, 19-22 Sept. 2015. doi: 10.1109/ICSPCC.2015.7338831
Abstract: Through cryptanalysis of the communication-efficient three-party password authenticated key exchange protocol proposed by Chang et al., we found that their protocol easily suffers from password-compromise impersonation and privileged impersonation attacks. In this paper, we introduce a novel three-party password authenticated key exchange protocol based on IB-KEM under the HDH assumption. Security analysis shows that our protocol achieves the following security requirements: (1) forward security; (2) mutual authentication; (3) resistance to off-line/on-line password guessing attacks; (4) resistance to password-compromise impersonation attacks; (5) resistance to privileged impersonation attacks.
Keywords: cryptographic protocols; telecommunication security; IB-KEM based password authenticated key exchange protocol; cryptanalysis; forward security; mutual authentication; password compromise impersonation attack resistance; password guessing attack resistance; privileged impersonation attack resistance; security analysis; three-party password authenticated key exchange protocol; Authentication; Cryptography; Encapsulation; Protocols; Resistance; Servers; 3PAKE; HDH; IB-KEM; password compromise impersonation attack; privileged impersonation attack (ID#: 15-8421)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338831&isnumber=7338753
Bora, S.; Sen, P.; Pradhan, C., "Novel Color Image Encryption Technique Using Blowfish and Cross Chaos Map," in Communications and Signal Processing (ICCSP), 2015 International Conference on, pp. 0879-0883, 2-4 April 2015. doi: 10.1109/ICCSP.2015.7322621
Abstract: Data security requirements have increased due to the transmission of huge volumes of data over communication channels. To address this, we propose a double encryption technique using the Blowfish algorithm and a cross chaos map, chosen for their resistance to cryptanalysis attacks. Parameters such as NPCR (Number of Pixels Changing Rate), UACI (Unified Average Changing Intensity), and CC (Correlation Coefficient) are used to evaluate the effectiveness of the proposed technique. The results show a high level of security.
Keywords: cryptography; image colour analysis; Blowfish chaos map; NPCR; UACI; communication channel; correlation coefficient; cross chaos map; cryptanalysis attacks; data security requirement; double encryption technique; novel color image encryption technique; number of pixels changing rate; unified average changing intensity; Chaos; Communication channels; Encryption; Matrix decomposition; Resistance; Blowfish; Cross Chaos Map; Decryption; Encryption (ID#: 15-8422)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322621&isnumber=7322423
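The two differential metrics this abstract evaluates have standard definitions: NPCR is the percentage of pixel positions that differ between two ciphertext images, and UACI is the mean absolute pixel difference normalized by 255, as a percentage. A direct sketch for grayscale images given as 2-D lists:

```python
# Standard definitions of the two differential metrics used above:
#   NPCR = (number of differing pixel positions / total pixels) * 100%
#   UACI = mean(|C1 - C2| / 255) * 100%
# computed for two equal-size grayscale images given as 2-D lists.

def npcr_uaci(c1, c2):
    total, diff, acc = 0, 0, 0.0
    for row1, row2 in zip(c1, c2):
        for p1, p2 in zip(row1, row2):
            total += 1
            if p1 != p2:
                diff += 1
            acc += abs(p1 - p2) / 255.0
    return 100.0 * diff / total, 100.0 * acc / total

img_a = [[0, 255], [128, 64]]
img_b = [[0,   0], [128, 65]]
npcr, uaci = npcr_uaci(img_a, img_b)
print(npcr, uaci)    # NPCR is 50.0: 2 of the 4 pixel positions differ
```

A strong image cipher drives NPCR toward 100% and UACI toward roughly 33% when a single plaintext pixel is flipped, which is what these metrics are used to check.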
Upadhyaya, Akanksha; Shokeen, Vinod; Srivastava, Garima, "Image Encryption: Using AES, Feature Extraction and Random No. Generation," in Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, pp. 1-4, 2-4 Sept. 2015. doi: 10.1109/ICRITO.2015.7359286
Abstract: During data transmission, data can be transmitted in the form of text, image, audio, and video; hence securing all kinds of data is essential in today's era. Securing image data is a major concern and a complex task. Various visual cryptographic techniques have been developed to ensure the confidentiality, authenticity, and integrity of images during transmission and on receipt. This paper proposes an image encryption technique based on 128-bit AES, feature extraction, and random number generation. Applying AES at two levels and generating the key on the basis of feature extraction makes the system more confidential and secure against cryptanalysis attacks.
Keywords: AES; Digital Image; Image encryption; Least significant bit; Visual Cryptography (ID#: 15-8423)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359286&isnumber=7359191
Ragini, K.; Sivasankar, S., "Security and Performance Analysis of Identity-Based Schemes in Sensor Networks," in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, pp. 1-5, 19-20 March 2015. doi: 10.1109/ICIIECS.2015.7192881
Abstract: Secure and efficient data transmission, free from interference by external attackers, is an open issue in sensor networks. This paper deals with providing assured, efficient data transmission in sensor networks. To meet this requirement, a Hash-based Message Authentication Code (HMAC) and a Message Digest (MD) are employed within an identity-based digital signature scheme (IBS). An identity-based scheme is an encryption scheme in which a secret code is derived from a secret key, protecting the data during transmission against cryptanalysis. To achieve the above requisites, the HMAC and MD5 modalities are simulated to evaluate the functional efficiency and security of data transmission in sensor networks.
Keywords: data communication; data protection; digital signatures; private key cryptography; telecommunication security; wireless sensor networks; HMAC; IBS; MD; data protection; data transmission security; hash based message authentication code; identity based digital signature scheme; message digest; secret key encryption scheme; wireless sensor network security; Authentication; Cryptography; Data communication; Message authentication; Wireless sensor networks; HMAC; Hash algorithm; IBS; MD5; Security (ID#: 15-8424)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7192881&isnumber=7192777
Chandrasekaran, J.; Jayaraman, T.S., "A Fast and Secure Image Encryption Algorithm Using Number Theoretic Transforms and Discrete Logarithms," in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, pp. 1-5, 19-21 Feb. 2015. doi: 10.1109/SPICES.2015.7091491
Abstract: Many Internet applications, such as video conferencing, military image databases, personal online photograph albums, and cable television, require a fast and efficient way of encrypting images for storage and transmission. In this paper, discrete logarithms are used for the generation of random keys, and the Number Theoretic Transform (NTT) is used as a transformation technique prior to encryption. The implementation of the NTT is simple as it uses arithmetic on real sequences. Encryption and decryption involve the simple and reversible XOR operation of image pixels with the random keys based on discrete logarithms, generated independently at the transmitter and receiver. Experimental results with the standard benchmark test images of the USC-SIPI database confirm the enhanced key sensitivity and strong resistance of the algorithm against brute-force attacks and statistical cryptanalysis. The computational complexity of the algorithm, in terms of the number of operations and number of rounds, is very small in comparison with other image encryption algorithms. The randomness of the generated keys has been tested and found to be in accordance with the statistical test suite for security requirements of cryptographic modules recommended by the National Institute of Standards and Technology (NIST).
Keywords: computational complexity; cryptography; image processing; number theory; statistical analysis; transforms; Internet; NTT; USC-SIPI database; brute force attack; computational complexity; cryptographic modules; decryption; discrete logarithms; enhanced key sensitivity; fast image encryption algorithm; image pixels; number theoretic transforms; random keys generation; receiver; reversible XOR operation; secure image encryption algorithm; standard benchmark test images; statistical cryptanalysis; transmitter; Chaotic communication; Ciphers; Correlation; Encryption; Transforms; Discrete Logarithms; Image Encryption; Number Theoretic Transforms (ID#: 15-8425)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091491&isnumber=7091354
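The reversible XOR encryption this abstract describes, with a keystream both ends can derive independently, can be sketched as follows. Using successive powers of a generator modulo a prime as the keystream is a toy stand-in for the paper's discrete-logarithm key generation; the parameters (p, g, seed) are illustrative assumptions:

```python
# Toy sketch of XOR-ing pixels with a keystream derived from modular
# exponentiation, in the spirit of the discrete-logarithm key generation
# described above. The parameters p, g, and seed are illustrative only.

P, G = 257, 3            # small prime and generator (toy values)

def keystream(seed, n):
    """n keystream bytes: successive powers g^(seed+i) mod p, reduced to a byte."""
    return [pow(G, seed + i, P) % 256 for i in range(n)]

def xor_cipher(pixels, seed):
    """Encrypts and decrypts: XOR with the keystream is its own inverse."""
    ks = keystream(seed, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

pixels = [12, 200, 7, 255, 0, 99]
cipher = xor_cipher(pixels, seed=42)
plain = xor_cipher(cipher, seed=42)   # same seed at the receiver
assert plain == pixels
```

Because XOR is self-inverse, the transmitter and receiver only need to agree on the seed; no ciphertext-specific key material has to be exchanged per image.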
Jiageng Chen; Miyaji, A.; Sato, H.; Chunhua Su, "Improved Lightweight Pseudo-Random Number Generators for the Low-Cost RFID Tags," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 17-24, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.352
Abstract: EPC Gen2 tags serve as the international RFID standard for supply chain use worldwide. Such tags are computationally weak devices, unable to perform even basic symmetric-key cryptographic operations. For this reason, implementing robust and secure pseudo-random number generators (PRNGs) is a challenging issue for low-cost Radio-Frequency Identification (RFID) tags. In this paper, we study the security of LFSR-based PRNGs implemented on EPC Gen2 tags and exploit LFSR-based PRNGs to provide better constructions. We present a cryptanalysis of J3Gen, an LFSR-based PRNG proposed by Sugei et al. [1], [2] for EPC Gen2 tags, using a distinguishing attack, and make observations on its input using the NIST randomness test. We also test the PRNG in EPC Gen2 RFID tags using NIST SP800-22. As a countermeasure, we propose two modified models based on the security analysis results. We show that our constructions perform better than J3Gen in terms of computational and statistical properties.
Keywords: cryptography; radiofrequency identification; random number generation; telecommunication security; EPC Gen2 tags; LFSR-based PRNG security; NIST SP800-22; NIST randomness test; cryptanalysis; international RFID standards; lightweight pseudorandom number generators; low-cost RFID tags; radiofrequency identification; security analysis; symmetric-key cryptographic operations; Cryptography; Generators; NIST; Polynomials; RFID tags; EPC Gen2 RFID tag; lightweight PRNG; randomness test (ID#: 15-8426)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345260&isnumber=7345233
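The Chen et al. abstract above centers on testing lightweight tag PRNGs against the NIST SP 800-22 suite. As a hedged sketch (not the authors' code; the LFSR taps, seed, and bit count are assumptions), a 16-bit Fibonacci LFSR and the suite's simplest check, the frequency (monobit) test, look roughly like this:

```python
import math

def lfsr16(seed, taps=(16, 14, 13, 11), n_bits=2000):
    """Fibonacci LFSR over GF(2); these taps correspond to the maximal-length
    polynomial x^16 + x^14 + x^13 + x^11 + 1. Seed must be nonzero."""
    state = seed & 0xFFFF
    out = []
    for _ in range(n_bits):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1  # XOR the tapped bit positions
        state = ((state << 1) | bit) & 0xFFFF
        out.append(bit)
    return out

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test; p >= 0.01 is a pass."""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

bits = lfsr16(0xACE1)
p = monobit_p_value(bits)
```

A maximal LFSR passes this particular test trivially, which is exactly why distinguishing attacks like the one described above go beyond simple statistical batteries.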
Idzikowska, E., "Faults Detection Schemes for PP-2 Cipher," in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, pp. 1-4, 18-19 May 2015. doi: 10.1109/ICMCIS.2015.7158695
Abstract: Hardware implementations of cryptographic systems are becoming more and more popular, due to new market needs and to reduce costs. However, system security may be seriously compromised by implementation attacks, such as side channel analysis or fault analysis. Fault-based side-channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although hardware and time redundancy based Concurrent Error Detection (CED) architectures can be used to thwart such attacks, they entail significant overheads. In this paper we investigate systematic approaches to low-cost CED techniques for symmetric encryption algorithm PP-2, based on inverse relationships that exist between encryption and decryption at algorithm level, round level, and operation level. We show architectures that explore tradeoffs among performance penalty, area overhead, and fault detection latency.
Keywords: cryptography; error detection; fault diagnosis; redundancy; CED architectures; PP-2 cipher; algorithm level decryption; asymmetric encryption algorithms; cryptographic systems; fault analysis; fault detection latency; fault detection schemes; fault-based side-channel cryptanalysis; hardware implementations; implementation attacks; low-cost CED techniques; operation level decryption; round level decryption; side channel analysis; symmetric encryption algorithm; system security; time redundancy based concurrent error detection architectures; Ciphers; Encryption; Fault detection; Hardware; Redundancy; Registers; CED; PP-2; error detection latency; fault detection; hardware redundancy; time redundancy (ID#: 15-8427)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158695&isnumber=7158667
Ogawa, K.; Inoue, T., "Practically Secure Update of Scrambling Scheme," in Broadband Multimedia Systems and Broadcasting (BMSB), 2015 IEEE International Symposium on, pp. 1-7, 17-19 June 2015. doi: 10.1109/BMSB.2015.7177195
Abstract: Content distributed by broadcast and multicast services is often encrypted (scrambled) to protect copyrighted material. When any cryptanalysis of the current cryptographic scheme (scrambling scheme) used in such services is found, the scheme must be updated. However, the scheme cannot be updated suddenly, because many subscribers have receivers that implement the current scheme. We propose two cryptographic scheme updating methods. They present a trade-off between security and transmission bit rate: both use the current and new cryptographic schemes simultaneously, but their transmission bit rates do not need to be doubled. In addition, they are practically secure from the viewpoint of service quality.
Keywords: copy protection; cryptography; multicast communication; quality of service; telecommunication security; television broadcasting; television receivers; broadcast services; copyrighted material; cryptanalysis; cryptographic scheme; multicast services; scrambling scheme; service quality; transmission bit rate; Bit rate; Broadcasting; Encryption; Real-time systems; Receivers (ID#: 15-8428)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177195&isnumber=7177182
Aditya, S.; Mittal, V., "Multi-layered Crypto Cloud Integration of oPass," in Computer Communication and Informatics (ICCCI), 2015 International Conference on, pp. 1-7, 8-10 Jan. 2015. doi: 10.1109/ICCCI.2015.7218114
Abstract: One of the most popular forms of user authentication is the text password, due to its convenience and simplicity. Still, passwords are susceptible to being stolen and compromised under various threats and weaknesses. To overcome these problems, a protocol called oPass was proposed. A cryptanalysis of it was done, and we found four kinds of attacks that could be mounted against it: use of the SMS service, attacks on the oPass communication links, unauthorized intruder access using the master password, and network attacks on an untrusted web browser. One of these was impersonation of the user. To overcome these problems in a cloud environment, a protocol based on oPass is proposed to implement multi-layer crypto-cloud integration with oPass, which can handle this kind of attack.
Keywords: cloud computing; cryptography; SMS service; Short Messaging Service; cloud environment; cryptanalysis; master password; multilayered crypto cloud integration; oPass communication links; oPass protocol; text password; user authentication; user impersonation; Authentication; Cloud computing; Encryption; Protocols; Servers; Cloud; Digital Signature; Impersonation; Network Security; RSA; SMS; oPass (ID#: 15-8429)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218114&isnumber=7218046
Cyber-crime Analysis 2015
As cyber-crime grows, methods for preventing, detecting, and responding to it are growing as well. Research is examining new, faster, more automated methods for dealing with cyber-crime from both technical and behavioral standpoints. The related hard topics in the Science of Security are human behavior, resilience, policy-based governance, and metrics. The work cited here was presented in 2015.
Stoll, J.; Bengez, R.Z., "Visual Structures for Seeing Cyber Policy Strategies," in Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, pp. 135-152, 26-29 May 2015. doi: 10.1109/CYCON.2015.7158474
Abstract: In the pursuit of cyber security for organizations, there are tens of thousands of tools, guidelines, best practices, forensics, platforms, toolkits, diagnostics, and analytics available. However, according to the Verizon 2014 Data Breach Report, “after analyzing 10 years of data... organizations cannot keep up with cyber crime-and the bad guys are winning.” Although billions are expended worldwide on cyber security, organizations struggle with complexity, e.g., the NISTIR 7628 guidelines for cyber-physical systems are over 600 pages of text, and there is a lack of information visibility. Organizations must bridge the gap between technical cyber operations and business/social priorities, since both sides are essential for ensuring cyber security. Identifying visual structures for information synthesis could help reduce this complexity while increasing information visibility within organizations. This paper lays the foundation for investigating such visual structures by first identifying where current visual structures are succeeding or failing. To do this, we examined publicly available analyses related to three types of security issues: 1) an epidemic, 2) cyber attacks on an industrial network, and 3) the threat of terrorist attack. We found that existing visual structures are largely inadequate for reducing complexity and improving information visibility. However, based on our analysis, we identified a range of different visual structures and their possible trade-offs/limitations in framing strategies for cyber policy. These structures form the basis of evolving visualization to support information synthesis for policy actions, which has rarely been done but is promising based on the efficacy of existing visualizations for cyber incident detection, attacks, and situation awareness.
Keywords: data visualisation; security of data; terrorism; Verizon 2014 Data Breach Report; cyber attacks; cyber incident detection; cyber policy strategies; cyber security; information synthesis; information visibility; situation awareness; terrorist attack; visual structures; Complexity theory; Computer security; Data visualization; Organizations; Terrorism; Visualization; cyber security policy; human-computer interaction; organizations; visual structures; visualization (ID#: 15-8450)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158474&isnumber=7158456
Jain, N.; Kalbande, D.R., "Digital Forensic Framework Using Feedback and Case History Keeper," in Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1-6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045670
Abstract: Cyber crime investigation integrates two technologies: theoretical methodology and practical tools. The first is the theoretical digital forensic methodology that encompasses the steps to investigate a cyber crime. The second is the practical development of the digital forensic tool, which sequentially and systematically analyzes digital devices to extract the evidence that proves the crime. This paper explores the development of a digital forensic framework, combining the advantages of twenty-five past forensic models and generating an algorithm to create a new digital forensic model. The proposed model provides the following advantages: a standardized method for investigation, a theory that can be converted directly into a tool, a history lookup facility, cost and time minimization, and applicability to any type of digital crime investigation.
Keywords: computer crime; digital forensics; system monitoring; case history keeper; cyber crime investigation; digital crime investigation; digital forensic framework; feedback; forensic models; history lookup facility; Adaptation models; Computational modeling; Computers; Digital forensics; History; Mathematical model; Digital forensic framework; digital crime; evidence (ID#: 15-8451)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045670&isnumber=7045627
Armin, J.; Thompson, B.; Ariu, D.; Giacinto, G.; Roli, F.; Kijewski, P., "2020 Cybercrime Economic Costs: No Measure No Solution," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 701-710, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.56
Abstract: Governments need reliable data on crime in order both to devise adequate policies and to allocate the correct revenues so that the measures are cost-effective, i.e., the money spent on prevention, detection, and handling of security incidents is balanced by a decrease in losses from offences. Analysis of the actual scenario of government actions in cyber security shows that the availability of multiple contrasting figures on the impact of cyber-attacks is holding back the adoption of policies for cyber space, as their cost-effectiveness cannot be clearly assessed. The most relevant literature on the topic is reviewed to highlight the research gaps and to determine the related future research issues that need addressing to provide a solid ground for future legislative and regulatory actions at national and international levels.
Keywords: government data processing; security of data; cyber security; cyber space; cyber-attacks; cybercrime economic cost; economic costs; Computer crime; Economics; Measurement; Organizations; Reliability; Stakeholders (ID#: 15-8452)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299982&isnumber=7299862
Tosh, D.; Sengupta, S.; Kamhoua, C.; Kwiat, K.; Martin, A., "An Evolutionary Game-Theoretic Framework for Cyber-Threat Information Sharing," in Communications (ICC), 2015 IEEE International Conference on, pp. 7341-7346, 8-12 June 2015. doi: 10.1109/ICC.2015.7249499
Abstract: The initiative to protect against future cyber crimes requires a collaborative effort from all types of agencies spanning industry, academia, federal institutions, and military agencies. Therefore, a Cybersecurity Information Exchange (CYBEX) framework is required to facilitate breach/patch related information sharing among the participants (firms) to combat cyber attacks. In this paper, we formulate a non-cooperative cybersecurity information sharing game that can guide: (i) the firms (players) to independently decide whether to “participate in CYBEX and share” or not; (ii) the CYBEX framework to utilize the participation cost dynamically as an incentive (to attract firms toward self-enforced sharing) and as a charge (to increase revenue). We analyze the game from an evolutionary game-theoretic perspective and determine the conditions under which the players' self-enforced evolutionary stability can be achieved. We present a distributed learning heuristic to attain the evolutionary stable strategy (ESS) under various conditions. We also show how CYBEX can wisely vary its pricing for participation to increase sharing as well as its own revenue, eventually evolving toward a win-win situation.
Keywords: evolutionary computation; game theory; security of data; CYBEX framework; ESS; academia; collaborative effort; combat cyber attacks; cyber crimes; cyber threat information sharing; cybersecurity information exchange; evolutionary game theoretic framework; evolutionary game theoretic strategy; evolutionary stable strategy; federal institutions; military agencies; self-enforced evolutionary stability; spanning industry; Computer security; Games; Information management; Investment; Sociology; Statistics; CYBEX; Cybersecurity; Evolutionary Game Theory; Incentive Model; Information Sharing (ID#: 15-8453)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249499&isnumber=7248285
Namazifard, A.; Tousi, A.; Amiri, B.; Aminilari, M.; Hozhabri, A.A., "Literature Review of Different Contention of E-Commerce Security and the Purview of Cyber Law Factors," in e-Commerce in Developing Countries: With focus on e-Business (ECDC), 2015 9th International Conference on, pp. 1-14, 16-16 April 2015. doi: 10.1109/ECDC.2015.7156333
Abstract: Today, with the widespread use of information technology (IT), e-commerce security and its related legislation are critical issues in information technology and court law. There is a consensus that security matters are a significant foundation of e-commerce, electronic consumers, and firms' privacy. While e-commerce networks need a policy for security and privacy, they should be prepared with a simple, consumer-friendly infrastructure; hence it is necessary to review the theoretical models for revision. In this review, we examine a number of former articles that cover e-commerce security and the legislative ambit at the individual level by assessing five criteria: whether the articles provide an effective strategy for the secure-protection challenges facing e-commerce and e-consumers, and whether the provisions clearly remedy precedents or still need to flourish. This paper focuses on analyzing the prior discussion regarding e-commerce security and existing legislation toward cyber-crime activity in e-commerce; the article also offers recommendations for subsequent research, indicating that through the secure factors of e-commerce we are able to fill the vacuum in its legislation.
Keywords: computer crime; data privacy; electronic commerce; information systems; legislation; IT; cyber law factor; cyber-crime activity; e-commerce security; information technology; legislation; security privacy policy; Business; Electronic commerce; Information technology; Internet; Legislation; Privacy; Security; cyberspace security; e-commerce law; e-consumer protection; jurisdiction (ID#: 15-8454)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7156333&isnumber=7156307
Wazzan, M.A.; Awadh, M.H., "Towards Improving Web Attack Detection: Highlighting the Significant Factors," in IT Convergence and Security (ICITCS), 2015 5th International Conference on, pp. 1-5, 24-27 Aug. 2015. doi: 10.1109/ICITCS.2015.7293028
Abstract: Nowadays, with the rapid development of the Internet, use of the Web is increasing and Web applications have become a substantial part of people's daily life (e.g., E-Government, E-Health, and E-Learning), as they permit seamless access to and management of information. The main security concern for e-business is Web application security. Web applications have many vulnerabilities, such as injection, broken authentication and session management, and cross-site scripting (XSS). Consequently, web applications have become targets of hackers, and many cyber attacks began to emerge in order to block the services of these Web applications (Denial of Service attacks). Developers are not aware of these vulnerabilities and do not have enough time to secure their applications. Therefore, there is a significant need to study and improve attack detection for web applications by determining the most significant factors for detection. To the best of our knowledge, there is no research that summarizes the influential factors in detecting web attacks. In this paper, the author studies state-of-the-art techniques and research related to web attack detection: the author analyzes and compares different methods of web attack detection and summarizes the most important factors for web attack detection, independent of the type of vulnerability. At the end, the author gives recommendations for building a framework for web application protection.
Keywords: Internet; computer crime; data protection; Internet; Web application protection; Web application security; Web application vulnerabilities; Web attack detection; XSS; broken authentication; cross-site scripting; cyber attack; denial of service attack; e-business; hackers; information access; information management; injection; session management; Buffer overflows; Computer crime; IP networks; Intrusion detection; Monitoring; Uniform resource locators (ID#: 15-8455)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293028&isnumber=7292885
Adebayo, Ojeniyi Joseph; Suleiman, Idris; Ade, Abdulmalik Yunusa; Ganiyu, S.O.; Alabi, I.O., "Digital Forensic Analysis for Enhancing Information Security," in Cyberspace (CYBER-Abuja), 2015 International Conference on, pp. 38-44, 4-7 Nov. 2015. doi: 10.1109/CYBER-Abuja.2015.7360517
Abstract: Digital Forensics is an area of Forensic Science that applies the scientific method to crime investigation. The thwarting of forensic evidence is known as anti-forensics, the aim of which is ambiguous in the sense that it could be bad or good. The aim of this project is to simulate digital crime scenarios and carry out forensic and anti-forensic analysis to enhance security. This project uses several forensic and anti-forensic tools and techniques to carry out this work. The data analyzed were obtained from the results of the simulation. The results reveal that although it may be difficult to investigate digital crime, it can be accomplished with the help of sophisticated forensic and anti-forensic tools.
Keywords: Analytical models; Computers; Cyberspace; Digital forensics; Information security; Operating systems; Digital forensic; anti-digital forensic; image acquisition; image integrity; privacy (ID#: 15-8456)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7360517&isnumber=7360499
Zeb, K.; Baig, O.; Asif, M.K., "DDoS Attacks and Countermeasures in Cyberspace," in Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, pp. 1-6, 21-23 March 2015. doi: 10.1109/WSWAN.2015.7210322
Abstract: In cyberspace, availability of resources is a key component of cyber security, along with confidentiality and integrity. The Distributed Denial of Service (DDoS) attack has become one of the major threats to the availability of resources in computer networks, and it is a challenging problem on the Internet. In this paper, we present a detailed study of DDoS attacks on the Internet, specifically the attacks due to protocol vulnerabilities in the TCP/IP model, their countermeasures, and various DDoS attack mechanisms. We thoroughly review DDoS attack defenses and analyze the strengths and weaknesses of the different proposed mechanisms.
Keywords: Internet; computer network security; transport protocols; DDoS attack mechanisms; Internet; TCP-IP model; computer networks; cyber security; cyberspace; distributed denial of service attacks; Computer crime; Filtering; Floods; IP networks; Internet; Protocols; Servers; Cyber security; Cyber-attack; Cyberspace; DDoS Defense; DDoS attack; Mitigation; Vulnerability (ID#: 15-8457)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210322&isnumber=7209078
Gorton, D., "Modeling Fraud Prevention of Online Services Using Incident Response Trees and Value at Risk," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 149-158, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.17
Abstract: Authorities like the Federal Financial Institutions Examination Council in the US and the European Central Bank in Europe have stepped up their expected minimum security requirements for financial institutions, including the requirements for risk analysis. In a previous article, we introduced a visual tool and a systematic way to estimate the probability of a successful incident response process, which we called an incident response tree (IRT). In this article, we present several scenarios using the IRT which could be used in a risk analysis of online financial services concerning fraud prevention. By minimizing the problem of underreporting, we are able to calculate the conditional probabilities of prevention, detection, and response in the incident response process of a financial institution. We also introduce a quantitative model for estimating expected loss from fraud, and conditional fraud value at risk, which enables a direct comparison of risk among online banking channels in a multi-channel environment.
Keywords: Internet; computer crime; estimation theory; financial data processing; fraud; probability; risk analysis; trees (mathematics); IRT; conditional fraud value; cyber criminal; fraud prevention modelling; incident response tree; online financial service; probability estimation; risk analysis; Europe; Online banking; Probability; Trojan horses (ID#: 15-8458)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299908&isnumber=7299862
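Gorton's abstract combines incident-response probabilities with an expected-loss model and a fraud value-at-risk figure. The Monte Carlo sketch below only illustrates that general idea; the prevention and detection probabilities, exponential loss distribution, recovery factor, and attempt counts are all invented parameters, not the paper's numbers:

```python
import random

def simulate_annual_fraud_loss(n_attempts, p_prevent, p_detect, mean_loss, rng):
    """One simulated year: each fraud attempt is prevented (no loss), detected
    and responded to (partial loss), or successful (full loss)."""
    total = 0.0
    for _ in range(n_attempts):
        if rng.random() < p_prevent:
            continue                              # incident response prevents it
        loss = rng.expovariate(1.0 / mean_loss)   # exponential loss, assumed mean
        if rng.random() < p_detect:
            loss *= 0.3                           # detection recovers most of it (assumption)
        total += loss
    return total

rng = random.Random(42)
years = sorted(simulate_annual_fraud_loss(200, 0.90, 0.70, 5000.0, rng)
               for _ in range(10000))
expected_loss = sum(years) / len(years)           # expected annual fraud loss
var_99 = years[int(0.99 * len(years))]            # 99% fraud value at risk
```

Comparing `expected_loss` and `var_99` across channels with different conditional probabilities is the kind of multi-channel comparison the abstract describes.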
Tan Heng Chuan; Jun Zhang; Ma Maode; Chong, P.H.J.; Labiod, H., "Secure Public Key Regime (SPKR) in Vehicular Networks," in Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, pp. 1-7, 5-7 Aug. 2015. doi: 10.1109/SSIC.2015.7245678
Abstract: Public Key Regime (PKR) was proposed as an alternative to certificate based PKI in securing Vehicular Networks (VNs). It eliminates the need for vehicles to append their certificate for verification because the Road Side Units (RSUs) serve as Delegated Trusted Authorities (DTAs) to issue up-to-date public keys to vehicles for communications. If a vehicle's private/public key needs to be revoked, the root TA performs real time updates and disseminates the changes to these RSUs in the network. Therefore, PKR does not need to maintain a huge Certificate Revocation List (CRL), avoids complex certificate verification process and minimizes the high latency. However, the PKR scheme is vulnerable to Denial of Service (DoS) and collusion attacks. In this paper, we study these attacks and propose a pre-authentication mechanism to secure the PKR scheme. Our new scheme is called the Secure Public Key Regime (SPKR). It is based on the Schnorr signature scheme that requires vehicles to expend some amount of CPU resources before RSUs issue the requested public keys to them. This helps to alleviate the risk of DoS attacks. Furthermore, our scheme is secure against collusion attacks. Through numerical analysis, we show that SPKR has a lower authentication delay compared with the Elliptic Curve Digital Signature (ECDSA) scheme and other ECDSA based counterparts.
Keywords: mobile radio; public key cryptography; certificate revocation list; collusion attack; complex certificate verification process; delegated trusted authorities; denial of service attack; lower authentication delay; preauthentication mechanism; road side units; secure public key regime; vehicular networks; Authentication; Computer crime; Digital signatures; Public key; Vehicles; Collusion Attacks; Denial of Service Attacks; Schnorr signature; certificate-less PKI (ID#: 15-8459)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245678&isnumber=7245317
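The SPKR scheme above builds on Schnorr signatures. A minimal sign/verify sketch over a toy subgroup follows; the tiny parameters p = 2039, q = 1019, g = 4 are illustrative assumptions only (real deployments use standardized large groups or elliptic curves), and this is not the paper's protocol, just the underlying primitive:

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(r, msg):
    """Hash the commitment and message into an exponent mod q."""
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key
    return x, pow(g, x, p)             # (x, y = g^x mod p)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1   # fresh nonce per signature
    r = pow(g, k, p)
    e = H(r, msg)
    return e, (k + x * e) % q          # signature (e, s)

def verify(y, msg, sig):
    e, s = sig
    # r_v = g^s * y^(-e) mod p equals g^k when the signature is genuine.
    # pow with a negative exponent (modular inverse) needs Python 3.8+.
    r_v = (pow(g, s, p) * pow(y, -e, p)) % p
    return H(r_v, msg) == e

x, y = keygen()
sig = sign(x, b"request public key")
ok = verify(y, b"request public key", sig)        # True
bad = verify(y, b"tampered message", sig)          # almost surely False (toy q gives ~1/q collision odds)
```

SPKR's pre-authentication idea adds a client-side cost before the RSU answers, which this primitive alone does not capture.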
Bulbul, R.; Chee-Wooi Ten; Lingfeng Wang, "Prioritization Of MTTC-Based Combinatorial Evaluation For Hypothesized Substations Outages," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-5, 26-30 July 2015. doi: 10.1109/PESGM.2015.7286248
Abstract: Exhaustive enumeration of an S-select-k problem for hypothesized substation outages can be practically infeasible due to the exponential growth of combinations as both S and k increase. This enumeration of worst-case substation scenarios from the large set, however, can be improved based on initial selection sets with root nodes and segments. In this paper, previous work on the reverse pyramid model (RPM) is enhanced with prioritization of root nodes and defined segmentations of the substation list based on the mean-time-to-compromise (MTTC) value associated with each substation. Root nodes are selected based on threshold values of the substation ranking on MTTC values and are segmented accordingly from the root node set. Each segment is then enumerated with the S-select-k module to identify worst-case scenarios. The lowest threshold value on the list, e.g., a substation with no assigned MTTC or an extremely low value, is completely eliminated. Simulation shows that this approach yields similar risk indices among all randomly generated MTTC values of the IEEE 30-bus system.
Keywords: IEEE standards; combinatorial mathematics; power generation reliability; risk management; substation protection; IEEE 30-bus system; MTTC-based combinatorial evaluation prioritization; S-select-k problem; hypothesized substation outage; randomly generated mean-time-to-compromise value; risk indices; substation ranking; Computer crime; Indexes; Power system reliability; Reliability; Substations; Topology; Combinatorial verification; cyber-contingency analysis; mean time to compromise (MTTC) (ID#: 15-8460)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286248&isnumber=7285590
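The MTTC-prioritized S-select-k idea in the abstract can be illustrated with a small sketch. The substation names, MTTC values, threshold, and the sum-of-MTTC ordering below are all invented for illustration; the paper's actual risk index and segmentation scheme are not reproduced:

```python
from itertools import combinations

# Hypothetical substations with assumed mean-time-to-compromise (MTTC) values;
# a lower MTTC means the substation is easier to compromise.
mttc = {"S1": 3.2, "S2": 0.0, "S3": 12.5, "S4": 1.4, "S5": 7.8, "S6": 2.1}

def prioritized_k_outages(mttc, k, threshold):
    """Eliminate substations below the MTTC threshold (e.g. no MTTC assigned),
    then enumerate k-subsets ordered most-easily-compromised first
    (ascending total MTTC as a stand-in risk score)."""
    eligible = [s for s, v in mttc.items() if v >= threshold]
    combos = combinations(sorted(eligible), k)
    return sorted(combos, key=lambda c: sum(mttc[s] for s in c))

worst_first = prioritized_k_outages(mttc, k=2, threshold=1.0)
print(worst_first[0])  # → ('S4', 'S6'), the pair with the lowest combined MTTC
```

Pruning `S2` before enumerating keeps the combination count from exploding, which is the point of the prioritization.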
Ansilla, J.D.; Vasudevan, N.; JayachandraBensam, J.; Anunciya, J.D., "Data security in Smart Grid with Hardware Implementation Against DoS Attacks," in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, pp. 1-7, 19-20 March 2015. doi: 10.1109/ICCPCT.2015.7159274
Abstract: The Smart Grid is being cultivated briskly and ingeniously, while the attacks against it breed and sow damage on a massive scale. This state of affairs makes security a sapling that must be incessantly irrigated with research and analysis. In this work, cyber security is endowed with resiliency to the SYN-flooding-induced Denial of Service attack. The proposed secure web server algorithm, embedded in the LPC1768 processor, ensures that the smart resources are protected from the attack.
Keywords: Internet; computer network security; power engineering computing; smart power grids; DoS attacks; LPC1768 processor; SYN flooding; cybersecurity; data security; denial of service attack; secure Web server algorithm; smart grid; smart resources; Computer crime; Computers; Floods; IP networks; Protocols; Servers; ARM Processor; DoS; Hardware Implementation; SYN flooding; Smart Grid (ID#: 15-8461)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159274&isnumber=7159156
Aggarwal, P.; Grover, A.; Singh, S.; Maqbool, Z.; Pammi, V.S.C.; Dutt, V., "Cyber Security: A Game-Theoretic Analysis of Defender and Attacker Strategies in Defacing-Website Games," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-8, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166127
Abstract: The rate at which cyber-attacks are increasing globally portrays a terrifying picture upfront. The main dynamics of such attacks can be studied in terms of the actions of attackers and defenders in a cyber-security game. However, currently little research has taken place to study such interactions. In this paper we use behavioral game theory to investigate the role of certain actions taken by attackers and defenders in a simulated cyber-attack scenario of defacing a website. We choose a Reinforcement Learning (RL) model to represent a simulated attacker and defender in a 2×4 cyber-security game where each of the 2 players could take up to 4 actions. Pairs of model participants were computationally simulated across 1000 simulations, where each pair played at most 30 rounds in the game. The goal of the attacker was to deface the website, and the goal of the defender was to prevent the attacker from doing so. Our results show that the actions taken by both attackers and defenders are a function of the attention these roles pay to their recently obtained outcomes: if the attacker pays more attention to recent outcomes, then he is more likely to perform attack actions. We discuss the implications of our results for the evolution of dynamics between attackers and defenders in cyber-security games.
Keywords: Web sites; computer crime; computer games; game theory; learning (artificial intelligence); RL model; attacker strategies; attack dynamics; behavioral game theory; cyber-attacks; cyber-security game; defacing-website games; defender strategies; game-theoretic analysis; reinforcement learning; Cognitive science; Computational modeling; Computer security; Cost function; Games; Probabilistic logic; attacker; cognitive modeling; cyber security; cyber-attacks; defender; reinforcement-learning model (ID#: 15-8462)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166127&isnumber=7166109
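The 2×4 attacker-defender game with reinforcement-learning players can be sketched minimally as follows. The zero-sum payoff, learning rate, softmax temperature, and counts are assumptions standing in for the paper's calibrated cognitive model:

```python
import math
import random

random.seed(7)
ACTIONS = range(4)

def attacker_payoff(a, d):
    """Assumed zero-sum payoff: the attack succeeds unless the defender
    happens to cover the same action."""
    return 1.0 if a != d else -1.0

def softmax_choice(q, temperature=0.5):
    """Pick an action with probability proportional to exp(value/T)."""
    weights = [math.exp(v / temperature) for v in q]
    return random.choices(list(ACTIONS), weights=weights)[0]

def play(rounds=30, alpha=0.3):
    qa = [0.0] * 4  # attacker action values
    qd = [0.0] * 4  # defender action values
    attacker_score = 0.0
    for _ in range(rounds):
        a, d = softmax_choice(qa), softmax_choice(qd)
        r = attacker_payoff(a, d)
        # Exponential averaging weights recent outcomes more heavily,
        # mirroring the attention-to-recency effect the abstract reports.
        qa[a] += alpha * (r - qa[a])
        qd[d] += alpha * (-r - qd[d])
        attacker_score += r
    return attacker_score

scores = [play() for _ in range(1000)]
mean_score = sum(scores) / len(scores)
```

With 4 defender actions against 4 attacker actions, a random defender covers the attack only a quarter of the time, so the attacker's mean score tends to be positive in this toy setup.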
Kilger, M., "Integrating Human Behavior Into the Development of Future Cyberterrorism Scenarios," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 693-700, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.105
Abstract: The development of future cyber terrorism scenarios is a key component in building a more comprehensive understanding of the cyber threats that are likely to emerge in the near- to mid-term future. While developing concepts of likely new, emerging digital technologies is an important part of this process, this article suggests that understanding the psychological and social forces involved in cyber terrorism is also a key component of the analysis, and that the synergy of these two dimensions may produce more accurate and detailed future cyber threat scenarios than either analytical element alone.
Keywords: computer crime; human factors; terrorism; cyber threats; cyberterrorism scenarios; digital technologies; human behavior; psychological force; social force; Computer crime; Computer hacking; Organizations; Predictive models; Psychology; Terrorism; cyberterrorism; motivation; psychological; scenario; social (ID#: 15-8463)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299981&isnumber=7299862
Ugwoke, F.N.; Okafor, K.C.; Chijindu, V.C., "Security Qos Profiling Against Cyber Terrorism in Airport Network Systems," in Cyberspace (CYBER-Abuja), 2015 International Conference on, pp. 241-251, 4-7 Nov. 2015. doi: 10.1109/CYBER-Abuja.2015.7360516
Abstract: Attacks on airport information network services in the form of Denial of Service (DoS), Distributed DoS (DDoS), and hijacking are the most effective schemes mostly explored by cyber terrorists in the aviation industry running Mission Critical Services (MCSs). This work presents a case for Airport Information Resource Management Systems (AIRMS), a cloud-based platform proposed for the Nigerian aviation industry. Granting that AIRMS is susceptible to DoS attacks, there is a need to develop a robust counter-security network model aimed at pre-empting such attacks and subsequently mitigating the vulnerability in such networks. Existing works in the literature regarding cyber security DoS and other schemes have not explored embedded Stateful Packet Inspection (SPI) based on OpenFlow Application Centric Infrastructure (OACI) for securing critical network assets. As such, SPI-OACI was proposed to address the challenge of Vulnerability Bandwidth Depletion DDoS Attacks (VBDDA). A characterization of the Cisco 9000 router firewall as an embedded network device with support for virtual DDoS protection was carried out in the AIRMS threat mitigation design. Afterwards, the mitigation procedure and the initial phase of the design with Riverbed Modeler software were realized. For the security Quality of Service (QoS) profiling, the system response metrics (i.e., SPI-OACI delay, throughput and utilization) in the cloud-based network were analyzed only for normal traffic flows. The work concludes by offering practical suggestions for securing similar enterprise management systems running on cloud infrastructure against cyber terrorists.
Keywords: Air traffic control; Airports; Atmospheric modeling; Computer crime; Floods; AIRMS; Attacks; Aviation Industry; Cloud Datacenters; DDoS; DoS; Mitigation Techniques; Vulnerabilities (ID#: 15-8464)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7360516&isnumber=7360499
Rashid, A.; Moore, K.; May-Chahal, C.; Chitchyan, R., "Managing Emergent Ethical Concerns for Software Engineering in Society," in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 2, pp. 523-526, 16-24 May 2015. doi: 10.1109/ICSE.2015.187
Abstract: This paper presents an initial framework for managing emergent ethical concerns during software engineering in society projects. We argue that such emergent considerations can neither be framed as absolute rules about how to act in relation to fixed and measurable conditions, nor be addressed by simply framing them as non-functional requirements to be satisficed. Instead, a continuous process is needed that accepts the 'messiness' of social life and social research, seeks to understand complexity (rather than seek clarity), demands collective (not just individual) responsibility, and focuses on dialogue over solutions. The framework has been derived from a retrospective analysis of ethical considerations in four software engineering in society projects in three different domains.
Keywords: ethical aspects; software engineering; software management; emergent ethical concern management; society projects; software engineering; Ethics; Law enforcement; Media; Societies; Software; Software engineering; Stakeholders; citizen science; cyber crime; ethics; software in society (ID#: 15-8465)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7203005&isnumber=7202933
Olabelurin, A.; Veluru, S.; Healing, A.; Rajarajan, M., "Entropy Clustering Approach for Improving Forecasting in DDoS Attacks," in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, pp. 315-320, 9-11 April 2015. doi: 10.1109/ICNSC.2015.7116055
Abstract: Volume anomalies such as distributed denial-of-service (DDoS) attacks have been around for ages but, with advancements in technology, they have become stronger, shorter, and a weapon of choice for attackers. Digital forensic analysis of intrusions using alerts generated by existing intrusion detection systems (IDS) faces major challenges, especially for IDS deployed in large networks. In this paper, the concept of automatically sifting through a huge volume of alerts to distinguish the different stages of a DDoS attack is developed. The proposed novel framework is purpose-built to analyze multiple logs from the network for proactive forecasting and timely detection of DDoS attacks, through a combined approach of the Shannon-entropy concept and a clustering algorithm over relevant feature variables. Experimental studies on a cyber-range simulation dataset from the project's industrial partners show that the technique is able to distinguish precursor alerts for DDoS attacks, as well as the attack itself, with a low false positive rate (FPR) of 22.5%. Application of this technique greatly assists security experts in network analysis to combat DDoS attacks.
Keywords: computer network security; digital forensics; entropy; forecasting theory; pattern clustering; DDoS attacks; FPR; IDS; Shannon-entropy concept; clustering algorithm; cyber-range simulation dataset; digital forensic analysis; distributed denial-of-service; entropy clustering approach; false positive rate; forecasting; intrusion detection system; network analysis; proactive forecast; project industrial partner; volume anomaly; Algorithm design and analysis; Clustering algorithms; Computer crime; Entropy; Feature extraction; Ports (Computers); Shannon entropy; alert management; distributed denial-of-service (DDoS) detection; k-means clustering analysis; network security; online anomaly detection (ID#: 15-8466)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116055&isnumber=7115994
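The entropy-based detection idea in the abstract above can be illustrated with a minimal sketch. The authors' framework combines Shannon entropy with k-means clustering over alert feature variables; the traffic features, window contents, and port values below are hypothetical, chosen only to show how the entropy of a traffic feature collapses when a flood concentrates on one target.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Normalized Shannon entropy of a list of categorical values
    (e.g. destination ports or source IPs seen in one time window)."""
    counts = Counter(values)
    total = len(values)
    if total == 0 or len(counts) == 1:
        return 0.0
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(len(counts))  # normalize to [0, 1]

# During a DDoS flood, destination-port entropy typically collapses
# because traffic converges on a single victim service.
normal_ports = [80, 443, 80, 22, 8080, 443, 53, 80]   # mixed benign traffic
attack_ports = [80] * 50 + [443]                      # flood on one port

assert shannon_entropy(normal_ports) > shannon_entropy(attack_ports)
```

A per-window entropy series like this is the kind of feature that the paper's clustering stage would then group into normal, precursor, and attack phases.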
Dehghanniri, H.; Letier, E.; Borrion, H., "Improving Security Decision under Uncertainty: A Multidisciplinary Approach," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-7, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7166134
Abstract: Security decision-making is a critical task in tackling security threats affecting a system or process. It often involves selecting a suitable resolution action to tackle an identified security risk. To support this selection process, decision-makers should be able to evaluate and compare available decision options. This article introduces a modelling language that can be used to represent the effects of resolution actions on the stakeholders' goals, the crime process, and the attacker. In order to reach this aim, we develop a multidisciplinary framework that combines existing knowledge from the fields of software engineering, crime science, risk assessment, and quantitative decision analysis. The framework is illustrated through an application to a case of identity theft.
Keywords: decision making; risk management; security of data; software engineering; crime science; identity theft; modelling language; quantitative decision analysis; risk assessment; security decision-making; security risk; security threat; software engineering; Companies; Credit cards; Decision making; Risk management; Security; Uncertainty; crime script; decision-making; identity theft; requirements engineering; risk; security; uncertainty (ID#: 15-8467)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166134&isnumber=7166109
Jinoh Kim; Ilhwan Moon; Kyungil Lee; Suh, S.C.; Ikkyun Kim, "Scalable Security Event Aggregation for Situation Analysis," in Big Data Computing Service and Applications (BigDataService), 2015 IEEE First International Conference on, pp. 14-23, March 30 2015-April 2 2015. doi: 10.1109/BigDataService.2015.28
Abstract: Cyber-attacks have evolved to become more sophisticated by employing combinations of attack methodologies with greater impact. For instance, Advanced Persistent Threats (APTs) employ a set of stealthy hacking processes running over a long period of time, making them much harder to detect. With this trend, the importance of big-data security analytics has attracted greater attention, since identifying such latest attacks requires large-scale data processing and analysis. In this paper, we present SEAS-MR (Security Event Aggregation System over MapReduce), which facilitates scalable security event aggregation for comprehensive situation analysis. The introduced system provides the following three core functions: (i) periodic aggregation, (ii) on-demand aggregation, and (iii) query support for effective analysis. We describe our design and implementation of the system over MapReduce and high-level query languages, and report experimental results collected through extensive settings on a Hadoop cluster for performance evaluation and design impacts.
Keywords: Big Data; computer crime; data analysis; parallel processing; pattern clustering; query languages; APT; Hadoop cluster; SEAS-MR; advanced persistent threats; attack methodologies; big-data security analytics; cyber-attacks; high-level query languages; large-scale data analysis; large-scale data processing; on-demand aggregation; performance evaluation; periodic aggregation; query support; scalable security event aggregation; security event aggregation system over MapReduce; situation analysis; stealthy hacking processes; Aggregates; Analytical models; Computers; Data processing; Database languages; Security; Sensors; Security event aggregation; big-data analytics; big-data computing; security analytics (ID#: 15-8468)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184860&isnumber=7184847
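The periodic-aggregation function described in the SEAS-MR abstract can be sketched in miniature without Hadoop. The event schema, window size, and field names below are invented for illustration; the real system applies the same map/reduce pattern at scale over MapReduce and high-level query languages.

```python
from collections import defaultdict

# Toy security events: (timestamp, source_ip, event_type) -- a hypothetical schema.
events = [
    (1000, "10.0.0.1", "scan"),
    (1005, "10.0.0.1", "scan"),
    (1010, "10.0.0.2", "login_fail"),
    (1400, "10.0.0.1", "scan"),
]

def map_phase(events, window=300):
    """Emit ((window_start, source_ip, event_type), 1) pairs,
    bucketing each event into a fixed-size time window."""
    for ts, ip, etype in events:
        yield ((ts // window * window, ip, etype), 1)

def reduce_phase(pairs):
    """Sum counts per key -- the periodic-aggregation step."""
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n
    return dict(counts)

agg = reduce_phase(map_phase(events))
# e.g. agg[(900, "10.0.0.1", "scan")] == 2
```

On-demand aggregation and query support then amount to running the same reduce over a filtered key range instead of every window.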
Masood, A.; Java, J., "Static Analysis for Web Service Security - Tools & Techniques for a Secure Development Life Cycle," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225337
Abstract: In this ubiquitous IoT (Internet of Things) era, web services have become a vital part of today's critical national and public sector infrastructure. With the industry-wide adoption of service-oriented architecture (SOA), web services have become an integral component of the enterprise software ecosystem, resulting in new security challenges. Web services are strategic components used by a wide variety of organizations for information exchange on the internet scale. The public deployment of mission-critical APIs opens up the possibility of software bugs being maliciously exploited. Therefore, vulnerability identification in web services through static as well as dynamic analysis is a thriving and interesting area of research in academia, national security and industry. Using OWASP (Open Web Application Security Project) web services guidelines, this paper discusses the challenges of existing standards, and reviews new techniques and tools to improve services security by detecting vulnerabilities. Recent vulnerabilities like Shellshock and Heartbleed have shifted the focus of risk assessment to the application layer, which for the majority of organizations means public-facing web services and web/mobile applications. RESTful services have now become the normal service development paradigm; therefore SOAP-centric standards such as XML Encryption, XML Signature, WS-Security, and WS-SecureConversation are no longer nearly as relevant. In this paper we provide an overview of the OWASP top 10 vulnerabilities for web services, and discuss the potential static code analysis techniques to discover these vulnerabilities. The paper reviews the security issues targeting web services, software/program verification and the security development lifecycle.
Keywords: Web services; program diagnostics; program verification; security of data; Heartbleed; Internet of Things; Internet scale; OWASP; Open Web Application Security Project; RESTFul services; SOAP centric standards; Shellshock; WS-Secure Conversation; WS-security; Web applications; Web service security; Web services guidelines; XML encryption; XML signature; critical national infrastructure; dynamic analysis; enterprise software ecosystem; information exchange; mission critical API; mobile applications; national security and industry; program verification; public deployments; public sector infrastructure; risk assessment; secure development life cycle; security challenges; service development paradigm; service-oriented architecture; services security; software bugs; software verification; static code analysis; strategic components; ubiquitous IoT; vulnerabilities detection; vulnerability identification; Computer crime; Cryptography; Simple object access protocol; Testing; XML; Cyber Security; Penetration Testing; RESTFul API; SOA; SOAP; Secure Design; Secure Software Development; Security Code Review; Service Oriented Architecture; Source Code Analysis; Static Analysis Tool; Static Code Analysis; Web Application security; Web Services; Web Services Security (ID#: 15-8469)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225337&isnumber=7190491
Wood, P., "A Simulated Criminal Attack," in Cyber Security for Industrial Control Systems, pp. 1-21, 2-3 Feb. 2015. doi: 10.1049/ic.2015.0007
Abstract: Presents a collection of slides covering the following topics: advanced attack; threat analysis; remote information gathering; on-site reconnaissance; spear phishing plan; spear phishing exercise; branch office attack plan; branch office attack exercise; head office attack plan; head office attack exercise.
Keywords: computer crime; firewalls; Red Team exercise; a simulated criminal attack; advanced attack; branch office attack exercise; branch office attack plan; head office attack exercise; head office attack plan; on-site reconnaissance; remote information gathering; spear phishing exercise; spear phishing plan; threat analysis (ID#: 15-8470)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332809&isnumber=7137498
Nirmal, K.; Janet, B.; Kumar, R., "Phishing - The Threat That Still Exists," in Computing and Communications Technologies (ICCCT), 2015 International Conference on, pp. 139-143, 26-27 Feb. 2015. doi: 10.1109/ICCCT2.2015.7292734
Abstract: Phishing is an online security attack in which the hacker aims at harvesting sensitive information such as passwords and credit card details from users by making them believe that what they see is what it is. This threat has been in existence for a decade, and there have been continuous developments in counter-attacking it. However, statistical study reveals that phishing is still a big threat to today's world as the online era booms. In this paper, we look into the art of phishing and make a practical analysis of how state-of-the-art anti-phishing systems fail to prevent phishing. With the loopholes identified in the state-of-the-art systems, we pave the roadmap for the kind of system that will counter this online security threat more effectively.
Keywords: authorisation; computer crime; antiphishing systems; online security attack; online security threat; phishing attack; sensitive information harvesting; statistical analysis; Browsers; Computer hacking; Electronic mail; Google; Radiation detectors; Uniform resource locators; Computer Fraud; Cyber Security; Password theft; Phishing (ID#: 15-8471)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292734&isnumber=7292708
Treseangrat, K.; Kolahi, S.S.; Sarrafpour, B., "Analysis of UDP DDoS cyber flood attack and defense mechanisms on Windows Server 2012 and Linux Ubuntu 13," in Computer, Information and Telecommunication Systems (CITS), 2015 International Conference on, pp. 1-5, 15-17 July 2015. doi: 10.1109/CITS.2015.7297731
Abstract: Distributed Denial of Service (DDoS) attacks are among the major threats and hardest security problems in the Internet world. In this paper, we study the impact of a UDP flood attack on TCP throughput, round-trip time, and CPU utilization on the latest versions of the Windows and Linux platforms, namely Windows Server 2012 and Linux Ubuntu 13. This paper also evaluates several defense mechanisms, including Access Control Lists (ACLs), Threshold Limit, Reverse Path Forwarding (IP Verify), and Network Load Balancing. The Threshold Limit defense gave better results than the other solutions.
Keywords: Internet; Linux; computer network security; file servers; resource allocation; transport protocols; ACL; CPU utilization; IP verify; Internet world; Linux Ubuntu 13; TCP throughputs; UDP DDoS cyber flood attack; Windows Server 2012; access control lists; defense mechanisms; distributed denial of service attacks; network load balancing; reverse path forwarding; round-trip time; security problems; threshold limit; threshold limit defense; Computer crime; Floods; IP networks; Linux; Load management; Servers; Throughput; Cyber Security; UDP DDoS Attack (ID#: 15-8472)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7297731&isnumber=7297712
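The Threshold Limit defense that performed best in the study above is, in essence, a rate cap on matching traffic. As a toy illustration (not the router configuration evaluated in the paper), a token bucket captures the behavior: a sustained flood is clipped to a configured rate while short bursts pass; the rate and burst values below are arbitrary.

```python
import time

class TokenBucket:
    """Toy threshold limiter: admit at most `rate` packets/sec,
    with bursts of up to `burst` packets."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # packet dropped: over the threshold

bucket = TokenBucket(rate=100, burst=10)
accepted = sum(bucket.allow() for _ in range(1000))  # simulated instantaneous flood
# Only roughly the burst allowance survives; the rest is dropped.
```

Legitimate traffic at normal rates is unaffected, which is why this class of defense compared favorably against the flood in the paper's experiments.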
Choejey, P.; Chun Che Fung; Kok Wai Wong; Murray, D.; Sonam, D., "Cybersecurity Challenges for Bhutan," in Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, pp. 1-5, 24-27 June 2015. doi: 10.1109/ECTICon.2015.7206975
Abstract: Information and Communications Technologies (ICTs), especially the Internet, have become a key enabler for government organisations, businesses and individuals. With increasing growth in the adoption and use of ICT devices such as smart phones, personal computers and the Internet, Cybersecurity is one of the key concerns facing modern organisations in both developed and developing countries. This paper presents an overview of cybersecurity challenges in Bhutan, within the context that the nation is emerging as an ICT developing country. This study examines the cybersecurity incidents reported both in national media and government reports, identification and analysis of different types of cyber threats, understanding of the characteristics and motives behind cyber-attacks, and their frequency of occurrence since 1999. A discussion on an ongoing research study to investigate cybersecurity management and practices for Bhutan's government organisations is also highlighted.
Keywords: Internet; government data processing; organisational aspects; security of data; Bhutan government organisations; ICT developing country; Internet; cybersecurity incidents; government organisations; government reports; information and communications technologies; national media; Computer crime; Computers; Government; Internet; Viruses (medical); Cybersecurity; cyber threats; cybersecurity management; hacking; phishing; spamming; viruses (ID#: 15-8473)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7206975&isnumber=7206924
Spring, J.; Kern, S.; Summers, A., "Global Adversarial Capability Modeling," in Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1-21, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120797
Abstract: Intro: Computer network defense has models for attacks and incidents comprised of multiple attacks after the fact. However, we lack an evidence-based model of the likelihood and intensity of attacks and incidents. Purpose: We propose a model of global capability advancement, the adversarial capability chain (ACC), to fill this need. The model enables cyber risk analysis to better understand the costs for an adversary to attack a system, which directly influences the cost to defend it. Method: The model is based on four historical studies of adversarial capabilities: capability to exploit Windows XP, to exploit the Android API, to exploit Apache, and to administer compromised industrial control systems. Result: We propose the ACC with five phases: Discovery, Validation, Escalation, Democratization, and Ubiquity. We use the four case studies as examples of how the ACC can be applied and used to predict attack likelihood and intensity.
Keywords: Android (operating system); application program interfaces; computer network security; risk analysis; ACC; Android API; Apache; Windows XP; adversarial capability chain; attack likelihood prediction; compromised industrial control systems; computer network defense; cyber risk analysis; evidence-based model; global adversarial capability modeling; Analytical models; Androids; Biological system modeling; Computational modeling; Humanoid robots; Integrated circuit modeling; Software systems; CND; computer network defense; cybersecurity; incident response; intelligence; intrusion detection; modeling; security (ID#: 15-8474)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120797&isnumber=7120794
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications.
![]() |
Data Deletion and “Forgetting” 2015 |
A recent court decision has focused attention on the problem of “forgetting,” that is, eliminating links and references used on the Internet to focus on a specific topic or reference. “Forgetting,” essentially a problem in data deletion, has many implications for security and for data structures. The work cited here was presented in 2015.
Ranjan, A.K.; Kumar, V.; Hussain, M., "Security Analysis of Cloud Storage with Access Control and File Assured Deletion (FADE)," in Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, pp. 453-458, 1-2 May 2015. doi: 10.1109/ICACCE.2015.10
Abstract: Today most enterprises are outsourcing their data backups onto online cloud storage, a service offered by third parties. In such an environment, security of offsite data is the most prominent requirement. Tang et al. proposed, designed and implemented FADE, a secure overlay cloud storage system. FADE assures file deletion, making files unrecoverable after their revocation, and it associates outsourced files with fine-grained access policies to avoid unauthorised access to data. In this paper, we have performed a security analysis of FADE and found some design vulnerabilities in it. We also discovered a few attacks and identified their causes. We suggest countermeasures to prevent those attacks and make a few improvements to the FADE system.
Keywords: authorisation; cloud computing; storage management; FADE; access control; design vulnerability; file assured deletion; online cloud storage; secure overlay cloud storage system; security analysis; Access control; Authentication; Cloud computing; Encryption; Silicon; FADE; access policies; assured deletion; attacks; cloud storage; design vulnerabilities (ID#: 15-8201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306728&isnumber=7306547
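The "assured deletion" idea behind FADE, encrypting each file under its own key and revoking the file by destroying the key rather than chasing every cloud replica, can be sketched as follows. This is a toy: the SHA-256 counter-mode keystream stands in for a real cipher and must not be used as production cryptography, and FADE's policy-based key management is elided.

```python
import hashlib
import os

def keystream(key, length):
    """Toy SHA-256-in-counter-mode keystream -- illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, data):
    """XOR with the keystream; the same call decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Only ciphertext ever leaves the client for the untrusted cloud store.
file_key = os.urandom(32)
ciphertext = encrypt(file_key, b"offsite backup record")
assert encrypt(file_key, ciphertext) == b"offsite backup record"  # recoverable while key exists

file_key = None  # policy revoked: delete the small key, not the replicated ciphertext
# Without the key, every surviving cloud copy of the ciphertext is unrecoverable.
```

Deleting a 32-byte key is reliable in a way that deleting gigabytes of replicated cloud data is not, which is the core of the assured-deletion argument.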
Sanatinia, A.; Noubir, G., "OnionBots: Subverting Privacy Infrastructure for Cyber Attacks," in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, pp. 69-80, 22-25 June 2015. doi: 10.1109/DSN.2015.40
Abstract: Over the last decade botnets survived by adopting a sequence of increasingly sophisticated strategies to evade detection and takeovers, and to monetize their infrastructure. At the same time, the success of privacy infrastructures such as Tor opened the door to illegal activities, including botnets, ransomware, and a marketplace for drugs and contraband. We contend that the next waves of botnets will extensively attempt to subvert privacy infrastructure and cryptographic mechanisms. In this work we propose to preemptively investigate the design and mitigation of such botnets. We first introduce OnionBots, which we believe will be the next generation of resilient, stealthy botnets. OnionBots use privacy infrastructures for cyber attacks by completely decoupling their operation from the infected host IP address and by carrying traffic that does not leak information about its source, destination, and nature. Such bots live symbiotically within the privacy infrastructures to evade detection, measurement, scale estimation, observation, and in general all IP-based current mitigation techniques. Furthermore, we show that with an adequate self-healing network maintenance scheme that is simple to implement, OnionBots can achieve a low diameter and a low degree and be robust to partitioning under node deletions. We develop a mitigation technique, called SOAP, that neutralizes the nodes of the basic OnionBots. In light of the potential of such botnets, we believe that the research community should proactively develop detection and mitigation methods to thwart OnionBots, potentially making adjustments to privacy infrastructure.
Keywords: IP networks; computer network management; computer network security; data privacy; fault tolerant computing; telecommunication traffic; Cyber Attacks; IP-based mitigation techniques; OnionBots; SOAP; Tor; botnets; cryptographic mechanisms; destination information; host IP address; illegal activities; information nature; node deletions; privacy infrastructure subversion; resilient-stealthy botnets; self-healing network maintenance scheme; source information; Cryptography; IP networks; Maintenance engineering; Peer-to-peer computing; Privacy; Relays; Servers; Tor; botnet; cyber security; privacy infrastructure; self-healing network (ID#: 15-8202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266839&isnumber=7266818
Askarov, A.; Moore, S.; Dimoulas, C.; Chong, S., "Cryptographic Enforcement of Language-Based Information Erasure," in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 334-348, 13-17 July 2015. doi: 10.1109/CSF.2015.30
Abstract: Information erasure is a formal security requirement that stipulates when sensitive data must be removed from computer systems. In a system that correctly enforces erasure requirements, an attacker who observes the system after sensitive data is required to have been erased cannot deduce anything about the data. Practical obstacles to enforcing information erasure include: (1) correctly determining which data requires erasure, and (2) reliably deleting potentially large volumes of data, despite untrustworthy storage services. In this paper, we present a novel formalization of language-based information erasure that supports cryptographic enforcement of erasure requirements: sensitive data is encrypted before storage, and upon erasure, only a relatively small set of decryption keys needs to be deleted. This cryptographic technique has been used by a number of systems that implement data deletion to allow the use of untrustworthy storage services. However, these systems provide no support to correctly determine which data requires erasure, nor have the formal semantic properties of these systems been explained or proven to hold. We address these shortcomings. Specifically, we study a programming language extended with primitives for public-key cryptography, and demonstrate how information-flow control mechanisms can automatically track data that requires erasure and provably enforce erasure requirements even when programs employ cryptographic techniques for erasure.
Keywords: programming language semantics; public key cryptography; trusted computing; cryptographic enforcement; cryptographic technique; data deletion; decryption key; erasure requirement; formal security requirement; formal semantic property; information-flow control mechanism; language-based information erasure; programming language; public-key cryptography; sensitive data; untrustworthy storage service; Cloud computing; Cryptography; Reactive power; Reliability; Semantics; Standards (ID#: 15-8203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243743&isnumber=7243713
Kavak, P.; Demirci, H., "LargeDEL: A Tool for Identifying Large Deletions in the Whole Genome Sequencing Data," in Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 2015 IEEE Conference on, pp. 1-7, 12-15 Aug. 2015. doi: 10.1109/CIBCB.2015.7300280
Abstract: DNA deletions are one of the main genetic causes of disease. Currently there are many tools capable of detecting structural variations. However, these tools usually require long running times and lack ease of use. It is generally not possible to restrict the search to a region of interest. The programs also yield an excessive number of results, which obstructs further analysis. In this work, we present LargeDEL, a tool which quickly scans aligned paired-end next generation sequencing (NGS) data for large deletions. The program is capable of extracting candidate deletions according to desired criteria. It is a fast, easy-to-use tool for finding large deletions within critical regions in the whole genome.
Keywords: DNA; bioinformatics; diseases; genomics; DNA deletion; LargeDEL; deletion identification; disease; genome sequencing data; next generation sequencing data; structural variation detection; Arrays; Bioinformatics; Biological cells; Diseases; Genomics; Sequential analysis (ID#: 15-8204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300280&isnumber=7300268
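The abstract does not spell out LargeDEL's detection criteria, but the standard read-pair signal it scans for can be sketched: a properly oriented paired-end alignment whose mapped span greatly exceeds the sequencing library's expected insert size suggests that the reference sequence between the mates was deleted in the sample. The insert-size parameters and coordinates below are hypothetical.

```python
def candidate_deletions(read_pairs, expected_insert=500, stddev=50, tolerance=3):
    """Flag paired-end alignments whose mapped span is far beyond the
    expected insert size -- the classic read-pair signal for a large deletion.
    read_pairs: iterable of (chrom, left_pos, right_pos) for oriented pairs."""
    threshold = expected_insert + tolerance * stddev
    for chrom, left, right in read_pairs:
        span = right - left
        if span > threshold:
            # The excess span estimates the size of the putative deletion.
            yield (chrom, left, right, span - expected_insert)

pairs = [
    ("chr1", 1000, 1450),   # span 450: within the expected insert range
    ("chr1", 2000, 9000),   # span 7000: discordant, suggests a ~6.5 kb deletion
    ("chr2", 500, 980),     # span 480: normal
]
hits = list(candidate_deletions(pairs))
# Only the chr1 pair exceeds 500 + 3*50 = 650 and is reported.
```

Restricting `read_pairs` to alignments within a region of interest gives the targeted search the abstract highlights as missing from other tools.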
Peipei Wang; Dean, D.J.; Xiaohui Gu, "Understanding Real World Data Corruptions in Cloud Systems," in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 116-125, 9-13 March 2015. doi: 10.1109/IC2E.2015.41
Abstract: Big data processing is one of the killer applications for cloud systems. MapReduce systems such as Hadoop are the most popular big data processing platforms used in the cloud system. Data corruption is one of the most critical problems in cloud data processing, which not only has serious impact on the integrity of individual application results but also affects the performance and availability of the whole data processing system. In this paper, we present a comprehensive study on 138 real world data corruption incidents reported in Hadoop bug repositories. We characterize those data corruption problems in four aspects: 1) what impact can data corruption have on the application and system? 2) how is data corruption detected? 3) what are the causes of the data corruption? and 4) what problems can occur while attempting to handle data corruption? Our study has made the following findings: 1) the impact of data corruption is not limited to data integrity, 2) existing data corruption detection schemes are quite insufficient: only 25% of data corruption problems are correctly reported, 42% are silent data corruption without any error message, and 21% receive imprecise error report. We also found the detection system raised 12% false alarms, 3) there are various causes of data corruption such as improper runtime checking, race conditions, inconsistent block states, improper network failure handling, and improper node crash handling, and 4) existing data corruption handling mechanisms (i.e., data replication, replica deletion, simple re-execution) make frequent mistakes including replicating corrupted data blocks, deleting uncorrupted data blocks, or causing undesirable resource hogging.
Keywords: cloud computing; data handling; Hadoop; MapReduce systems; big data processing; cloud data processing; cloud systems; data corruption; data corruption problems; data integrity; improper network failure handling; improper node crash handling; inconsistent block states; race conditions; real world data corruptions; runtime checking; Availability; Computer bugs; Data processing; Radiation detectors; Software; Yarn (ID#: 15-8205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092909&isnumber=7092808
Wenji Chen; Yong Guan, "Distinct Element Counting in Distributed Dynamic Data Streams," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 2371-2379, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218625
Abstract: We consider a new type of distinct element counting problem in dynamic data streams, where (1) insertions and deletions of an element can appear not only in the same data stream but also in two or more different streams, (2) a deletion of a distinct element cancels out all the previous insertions of this element, and (3) a distinct element can be re-inserted after it has been deleted. Our goal is to count the number of distinct elements that were inserted but have not been deleted in a continuous data stream. We also solve this new type of distinct element counting problem in a distributed setting. This problem is motivated by several network monitoring and attack detection applications where network traffic can be modelled as single or distributed dynamic streams and the number of distinct elements in the data streams, such as unsuccessful TCP connection setup requests, is calculated to be used as an indicator to detect certain network events such as service outages and DDoS attacks. Although there are known tight bounds for distinct element counting in insertion-only data streams, no good bounds are known for it in dynamic data streams, nor for this new type of problem. None of the existing solutions for distinct element counting can solve our problem. In this paper, we present the first solution to this problem, using a space-bounded data structure with a computation-efficient probabilistic data streaming algorithm to estimate the number of distinct elements in single or distributed dynamic data streams. We have performed both theoretical analysis and experimental evaluation of our algorithm, using synthetic and real data traces, to show its effectiveness.
Keywords: computer network security; transport protocols; DDoS attacks; TCP connection; attack detection applications; continuous data stream; distinct element counting; distributed dynamic data streams; distributed setting; network monitoring; network traffic; probabilistic data streaming algorithm; service outage; space bounded data structure; Computers; Data structures; Distributed databases; Estimation; Heuristic algorithms; Monitoring; Servers (ID#: 15-8206)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218625&isnumber=7218353
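For readers new to the problem, the counting semantics described in the abstract — a deletion cancels all prior insertions of an element, which may later be re-inserted — can be illustrated with an exact, memory-unbounded baseline. This is only a sketch of the problem setting, not the paper's contribution, which is a space-bounded probabilistic sketch:

```python
def count_live_distinct(streams):
    """Exact count of distinct elements inserted but not subsequently
    deleted, over one or more dynamic streams merged in arrival order.

    Each stream is a list of (op, element) pairs with op in {'+', '-'}.
    A '-' cancels ALL earlier '+' of that element; the element may be
    re-inserted afterwards. A real distributed setting would merge
    space-bounded sketches instead of raw streams.
    """
    live = set()
    for stream in streams:
        for op, x in stream:
            if op == '+':
                live.add(x)
            else:               # deletion cancels all prior insertions
                live.discard(x)
    return len(live)
```

For instance, failed TCP connection attempts could be inserted on arrival and deleted on later success; the live count then tracks distinct outstanding failures, the DDoS indicator mentioned in the abstract.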
Xiaokui Shu; Jing Zhang; Danfeng Yao; Wu-chun Feng, "Rapid and Parallel Content Screening for Detecting Transformed Data Exposure," in Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on, pp. 191-196, April 26 2015-May 1 2015. doi: 10.1109/INFCOMW.2015.7179383
Abstract: The leak of sensitive data on computer systems poses a serious threat to organizational security. Organizations need to identify the exposure of sensitive data by screening the content in storage and transmission, i.e., to detect sensitive information being stored or transmitted in the clear. However, detecting the exposure of sensitive information is challenging due to data transformation in the content. Transformations (such as insertion and deletion) result in highly unpredictable leak patterns. Existing automata-based string matching algorithms are impractical for detecting transformed data leaks because of the formidable complexity of modeling the required regular expressions. We design two new algorithms for detecting long and inexact data leaks. Our system achieves high detection accuracy in recognizing transformed leaks compared with state-of-the-art inspection methods. We parallelize our prototype on a graphics processing unit and demonstrate the strong scalability of our data leak detection solution in analyzing big data.
Keywords: Big Data; security of data; Big Data analysis; automata-based string matching algorithms; data leak detection solution; graphics processing unit; organizational security; sensitive data; Accuracy; Algorithm design and analysis; Graphics processing units; Heuristic algorithms; Leak detection; Security; Sensitivity (ID#: 15-8207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179383&isnumber=7179273
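The paper's two algorithms are not detailed in the abstract; as a toy illustration of why transformed leaks defeat exact matching while alignment-based similarity still flags them, content can be scored against a sensitive string with a standard sequence matcher (an illustrative stand-in, not the authors' method):

```python
from difflib import SequenceMatcher

def leak_score(sensitive, content):
    """Similarity in [0, 1] between a sensitive string and observed
    content, tolerant of character insertions and deletions."""
    return SequenceMatcher(None, sensitive, content).ratio()

secret = "4111-1111-1111-1111"
# Transformed leak: separators altered and a character inserted.
leaked = "card: 4111 1111 x1111 1111"
```

Exact matching (`secret in leaked`) fails here, yet the alignment score stays high; scaling such inexact scoring to long content is what the paper parallelizes on GPUs.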
De, D.; Das, S.K., "SREE-Tree: Self-Reorganizing Energy-Efficient Tree Topology Management in Sensor Networks," in Sustainable Internet and ICT for Sustainability (SustainIT), 2015, pp. 1-8, 14-15 April 2015. doi: 10.1109/SustainIT.2015.7101370
Abstract: The evolving applications of Information and Communications Technologies (ICT), such as smart cities, often need sustainable data collection networks. We envision the deployment of heterogeneous sensor networks that will allow dynamic self-reorganization of data collection topology, thus coping with unpredictable network dynamics and node addition/ deletion for changing application needs. However, the self-reorganization must also assure network energy efficiency and load balancing, without affecting ongoing data collection. Most of the existing literature either aim at minimizing the maximum load on a sensor node (hence maximizing network lifetime), or attempt to balance the overall load distribution on the nodes. In this work we propose to design a distributed protocol for self-organizing energy-efficient tree management, called SREE-Tree. Based on the dynamic choice of a design parameter, the in-network self-reorganization of data collection topology can achieve higher network lifetime, yet balancing the loads. In SREE-Tree, starting with an arbitrary tree the nodes periodically apply localized and distributed routines to collaboratively reduce load on the multiple bottleneck nodes (that are likely to deplete energy sooner due to a large amount of carried data flow or low energy availability). The problem of constructing and maintaining optimal data collection tree (Topt) topology that maximizes the network lifetime (L(Topt)) is an NP-Complete problem. We prove that a sensor network running the proposed SREE-Tree protocol is guaranteed to converge to a tree topology (T) with sub-optimal network lifetime. With the help of experiments using standard TinyOS based sensor network simulator TOSSIM, we have validated that SREE-Tree achieves better performance as compared to state-of-the-art solutions, for varying network sizes.
Keywords: communication complexity; distributed processing; energy conservation; power aware computing; protocols; resource allocation; telecommunication network management; telecommunication network topology; trees (mathematics);wireless sensor networks; ICT; NP-complete problem; SREE-Tree protocol; TOSSIM; TinyOS based sensor network simulator; data collection topology; design parameter; distributed protocol; dynamic self-reorganization; energy-efficient tree topology management; heterogeneous sensor networks; in-network self-reorganization; information and communications technologies; load balancing; load distribution; network dynamics; network energy efficiency; network lifetime maximization; network sizes; node addition/ deletion; optimal data collection tree topology; sensor node; smart cities; suboptimal network lifetime; sustainable data collection networks; Data collection; Network topology; Power demand; Protocols; Sensors; Switches; Topology (ID#: 15-8208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7101370&isnumber=7101353
Haibin Zhang; Yan Wang; Jian Yang, "Space Reduction for Contextual Transaction Trust Computation in E-Commerce and E-Service Environments," in Services Computing (SCC), 2015 IEEE International Conference on, pp. 680-687, June 27 2015-July 2 2015. doi: 10.1109/SCC.2015.97
Abstract: In the literature, Contextual Transaction Trust computation (termed as CTT computation) is considered an effective approach to evaluate the trustworthiness of a seller. Specifically, it computes a seller's reputation profile to indicate his/her dynamic trustworthiness in different product categories, price ranges, time periods, and any necessary combination of them. Then, in order to promptly answer a buyer's requests on the results of CTT computation, CMK-tree has been designed to appropriately index the precomputed aggregation results over large-scale ratings and transaction data. Nevertheless, CMK-tree requires additional storage space. In practice, a seller usually has a large volume of transactions. Moreover, with significant increase of historical transaction data (e.g., Over one or two years), the size of storage space consumed by CMK-tree will become much larger. In reducing storage space consumption for CTT computation, the aggregation results that are generated based on the ratings and transaction data from remote history, e.g., "12 months ago" can be deleted, as the ratings from remote history are less important for evaluating a seller's recent behavior. However, to achieve nearly linear and robust query performance, the deletion operations in the CMK-tree become complicated. In this paper, we propose three deletion strategies for CTT computation based on CMK-tree. With our proposed deletion strategies, the additional storage space consumption can be restricted to a limited range, which offers great benefit to trust management with millions of sellers. Finally, we have conducted experiments to illustrate both advantages and disadvantages of the proposed deletion strategies.
Keywords: Web services; electronic commerce; query processing; trees (mathematics) CMK-tree; CTT computation; contextual transaction trust computation; deletion strategies; dynamic trustworthiness; e-commerce environments; e-service environments; historical transaction data; large-scale ratings; price ranges; product categories; query performance; space reduction; storage space; time periods; trust management; Aggregates; Computational modeling; Context; Context modeling; Data structures; Indexes; Robustness; Aggregation Index; Contextual Transaction Trust; Deletion Strategy; E-Commerce; Trust and Reputation (ID#: 15-8209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207415&isnumber=7207317
Tanwir; Hendrantoro, G.; Affandi, A., "Early Result from Adaptive Combination of LRU, LFU and FIFO to Improve Cache Server Performance in Telecommunication Network," in Intelligent Technology and Its Applications (ISITIA), 2015 International Seminar on, pp. 429-432, 20-21 May 2015. doi: 10.1109/ISITIA.2015.7220019
Abstract: A telecommunication network server acts as a multimedia storage medium, and its data-transmission load can be reduced by adding cache servers, which store data temporarily and make it easier for clients to access information. As more clients access information, cache capacity becomes scarce and cached entries must be deleted. We combine the LRU, LFU and FIFO queue algorithms: when the oldest data is due for deletion (FIFO), the other algorithms check whether that data is the most referenced (LFU) or recently used (LRU), so that frequently accessed data remains cached. This reduces delay time and browsing loss while improving throughput.
Keywords: Internet; cache storage; client-server systems; information retrieval; network servers; FIFO queue method; LFU queue method; LRU queue method; cache server performance improvement; caches capacity; clients access; data transmission storage; delay time reduction; load server; loss browsing reduction; multimedia storage medium; telecommunication system network server; throughput reduction; Cache memory; Delays; Multimedia communication; Object recognition; Servers; Telecommunications; Throughput; Algorithms; Cache Server; FIFO; LFU; LRU (ID#: 15-8210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7220019&isnumber=7219932
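The abstract only sketches how the three policies interact; one hedged reading — FIFO nominates the oldest entry for deletion, but the nomination is overridden if that entry is the most referenced (LFU) or most recently used (LRU) — can be written as follows. The class and the exact override rule are our own illustration, not the authors' implementation:

```python
from collections import OrderedDict

class ComboCache:
    """Toy cache combining FIFO, LFU and LRU: FIFO proposes the oldest
    entry for eviction, but a hot entry (most referenced or most recently
    used) is spared and the least-referenced entry is evicted instead."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order = FIFO order
        self.freq = {}              # reference counts (LFU)
        self.last = {}              # last-access times (LRU)
        self.clock = 0

    def get(self, key):
        self.clock += 1
        if key in self.data:
            self.freq[key] += 1
            self.last[key] = self.clock
            return self.data[key]
        return None

    def put(self, key, value):
        self.clock += 1
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict()
        self.data[key] = value
        self.freq.setdefault(key, 0)
        self.last[key] = self.clock

    def _evict(self):
        oldest = next(iter(self.data))                            # FIFO candidate
        most_used = max(self.data, key=lambda k: self.freq[k])    # LFU check
        most_recent = max(self.data, key=lambda k: self.last[k])  # LRU check
        if oldest in (most_used, most_recent):
            # spare the hot entry; evict the coldest one instead
            oldest = min(self.data, key=lambda k: (self.freq[k], self.last[k]))
        for d in (self.data, self.freq, self.last):
            d.pop(oldest)
```

With capacity 2, inserting `a`, referencing it twice, then inserting `b` and `c` evicts the never-referenced `b` rather than the older but frequently accessed `a`.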
Fakcharoenphol, J.; Kumpijit, T.; Putwattana, A., "A Faster Algorithm for the Tree Containment Problem for Binary Nearly Stable Phylogenetic Networks," in Computer Science and Software Engineering (JCSSE), 2015 12th International Joint Conference on, pp. 337-342, 22-24 July 2015. doi: 10.1109/JCSSE.2015.7219820
Abstract: Phylogenetic networks and phylogenetic trees are leaf-labelled graphs used in biology to describe the evolutionary histories of species, whose leaves correspond to a set of taxa in the study. Given a phylogenetic network N and a phylogenetic tree T over the same set of taxa, if one can obtain T from N by edge deletions and contractions, we say that N contains T. A fundamental problem, called the tree containment problem, is to determine if N contains T. In general networks, this problem is NP-complete, but it can be solved in polynomial time when N is a normal network, a binary tree-child network, or a level-k network. Recently, Gambette, Gunawan, Labarre, Vialette and Zhang showed that it is possible to solve the problem for a more general class of networks called binary nearly stable networks. Not only do binary nearly stable networks include normal and tree-child networks; the authors claim that important evolutionary histories also match this generalization. Their algorithm is also more efficient than previous algorithms, as it runs in time O(n^2), where n is the number of taxa. This paper presents a faster O(n log n) algorithm. We obtain this improvement from the simple observation that the iterative algorithm of Gambette et al. performs only very local modifications of the networks. Our algorithm employs elementary data structures to dynamically maintain certain internal data structures used in their algorithm instead of recomputing them at every iteration.
Keywords: computational complexity; network theory (graphs); tree data structures; trees (mathematics); NP-complete problem; binary nearly stable phylogenetic networks; binary tree-child network; biology; edge contractions; edge deletions; elementary data structures; internal data structures; leaf-labelled graphs; level-k network; phylogenetic trees; tree containment problem; Contracts; Data structures; Heuristic algorithms; History; Phylogeny; Standards; Vegetation (ID#: 15-8211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219820&isnumber=7219755
Qisong Hu; Chen Yi; Kliewer, J.; Wei Tang, "Asynchronous Communication for Wireless Sensors Using Ultra Wideband Impulse Radio," in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on, pp. 1-4, 2-5 Aug. 2015. doi: 10.1109/MWSCAS.2015.7282170
Abstract: This paper addresses the simulation and design of an asynchronous integrated ultra wideband impulse radio transmitter and receiver suitable for low-power miniaturized wireless sensors. The paper first presents software simulations of asynchronous transmission over noisy channels using FSK-OOK modulation, which demonstrate that the proposed architecture is capable of communicating reliably at moderate signal-to-noise ratios and that the main errors are due to deletions of received noisy transmit pulses. We then address a hardware chip implementation of the integrated UWB transmitter and receiver, fabricated in an IBM 0.18μm CMOS process. This implementation provides low peak power consumption: 10.8 mW for the transmitter and 5.4 mW for the receiver, respectively. The measured maximum baseband data rate of the proposed radio is 2.3 Mb/s.
Keywords: CMOS integrated circuits; amplitude shift keying; frequency shift keying; power consumption; radio receivers; radio transmitters; telecommunication power management; ultra wideband communication; wireless channels; wireless sensor networks; CMOS process; FSK-OOK modulation; UWB receiver; UWB transmitter; asynchronous communication; hardware chip implementation; signal-to-noise ratio size 0.18 mum;ultra wideband impulse radio receiver; ultra wideband impulse radio transmitter; wireless sensor; Frequency shift keying; Radio transmitters; Receivers; Sensors; Signal to noise ratio; Wireless communication; Wireless sensor networks; Asynchronous Communication; Integrated Circuits; Low Power Wireless Sensors; Ultra Wideband Impulse Radio (ID#: 15-8212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282170&isnumber=7281994
Ritter, M.; Bahr, G.S., "An Exploratory Study to Identify Relevant Cues for the Deletion of Faces for Multimedia Retrieval," in Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on, pp. 1-6, June 29 2015-July 3 2015. doi: 10.1109/ICMEW.2015.7169806
Abstract: Within our approach to big data, we reduce the number of images in video footage by applying shot detection with keyframe extraction of single frames. This can be followed by duplicate removal and face detection, yielding a further data reduction. Nevertheless, additional reduction steps are necessary to make the data manageable (searchable) for the end user in a meaningful way. We therefore investigated human-inspired forgetting as a data reduction tool. We conducted an exploratory study on a subset of the remaining face data to examine patterns in the selection of faces considered most memorable, showing an elimination potential of slightly above 75%. The study identified the quality and the size of the faces as important measures. In these terms, we finally show a connection to characteristics of state-of-the-art face detectors.
Keywords: Big Data; data reduction; face recognition; information retrieval; object detection; video signal processing; big data; data reduction tool; face deletion; face detection process; keyframe extraction; multimedia retrieval; selection process; shot detection; single frame; video footage; Data mining; Detectors; Face detection; Feature extraction; Indexes; Standards; Training; Big Data; Face Detection; Face Sizes; Forgetting; Most Memorable Faces; Shot Detection; Video (ID#: 15-8213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169806&isnumber=7169738
Chaudhari, A.; Phadatare, P.M.; Kudale, P.S.; Mohite, R.B.; Petare, R.P.; Jagdale, Y.P.; Mudiraj, A., "Preprocessing of High Dimensional Dataset for Developing Expert IR System," in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, pp. 417-421, 26-27 Feb. 2015. doi: 10.1109/ICCUBEA.2015.87
Abstract: Nowadays, due to the increased availability of computing facilities, large amounts of data in electronic form are being generated. This data must be analyzed in order to maximize the benefit of intelligent decision making. Text categorization is an important and extensively studied problem in machine learning. The basic preprocessing phases in text categorization include removing stop words from documents and applying TF-IDF weighting, which increases efficiency and deletes irrelevant data from huge datasets. This paper discusses the implications for Information Retrieval systems of text-based data using different clustering approaches. Applying the TF-IDF algorithm to a dataset gives a weight for each word, summarized in a weight matrix.
Keywords: decision making; information retrieval systems; learning (artificial intelligence); text analysis; TF-IDF algorithm; clustering approaches; electronic form; expert IR system; high dimensional dataset processing; information retrieval system; intelligent decision making; machine learning; text categorization; text-based data; weight matrix; Clustering algorithms; Databases; Flowcharts; Frequency measurement; Information retrieval; Text categorization; Information retrieval; TF IDF; stopwords; text based clustering (ID#: 15-8214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155880&isnumber=7155781
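The preprocessing phases named in the abstract (stop-word removal, then TF-IDF weighting into a weight matrix) follow the standard formulas; a minimal sketch, with an illustrative stop-word list of our own choosing:

```python
import math

STOPWORDS = {"the", "a", "is", "of", "and", "to", "in"}  # illustrative list

def tfidf(docs):
    """Weight matrix: one dict of term -> TF-IDF weight per document.
    TF = term count / document length; IDF = log(N / number of
    documents containing the term)."""
    tokenized = [[w for w in d.lower().split() if w not in STOPWORDS]
                 for d in docs]
    n = len(docs)
    df = {}                                  # document frequency per term
    for toks in tokenized:
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    matrix = []
    for toks in tokenized:
        weights = {}
        for t in set(toks):
            tf = toks.count(t) / len(toks)
            weights[t] = tf * math.log(n / df[t])
        matrix.append(weights)
    return matrix
```

A term appearing in every document gets IDF log(1) = 0, which is how the scheme deletes uninformative terms from the weight matrix.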
Gazzah, Sami; Hechkel, Amina; Essoukri Ben Amara, Najoua, "A Hybrid Sampling Method for Imbalanced Data," in Systems, Signals & Devices (SSD), 2015 12th International Multi-Conference on, pp. 1-6, 16-19 March 2015. doi: 10.1109/SSD.2015.7348093
Abstract: With the diversification of applications and the emergence of new trends in challenging applications such as the computer vision domain, classical machine learning systems usually perform poorly when confronting two common problems: training data in which negative examples outnumber the positive ones, and large intra-class variations. These problems lead to a drop in system performance. In this work, we propose to improve classification accuracy in the case of imbalanced training data by equally balancing the training data set using a hybrid approach which consists in over-sampling the minority class using a "SMOTE star topology", and under-sampling the majority class by removing instances that are considered less relevant. The feature vector deletion has been performed with respect to intra-class variations, based on a distribution criterion. The experimental results, achieved on biometric data, show that the proposed approach significantly improves the overall performance measured in terms of true-positive rate.
Keywords: Correlation; Databases; Feature extraction; Principal component analysis; Support vector machines; Training; Training data; Data analysis; Imbalanced data sets; Intra-class variations; One-against-all SVM; Principal component analysis (ID#: 15-8215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348093&isnumber=7348090
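The abstract's hybrid scheme can be sketched under simplifying assumptions: interpolation toward the minority-class mean stands in for the paper's "SMOTE star topology", and distance from the majority-class mean stands in for its distribution-based relevance criterion. Both substitutions are ours:

```python
import random

def hybrid_balance(minority, majority, target, seed=0):
    """Over-sample the minority class by interpolating points toward the
    class mean (a star-topology SMOTE variant), and under-sample the
    majority class by dropping the points farthest from its mean, until
    both classes have `target` samples. Points are tuples of floats."""
    rng = random.Random(seed)

    def mean(pts):
        return tuple(sum(c) / len(pts) for c in zip(*pts))

    center = mean(minority)
    minority = list(minority)
    while len(minority) < target:        # synthetic point on segment p -> center
        p = rng.choice(minority)
        lam = rng.random()
        minority.append(tuple(pi + lam * (ci - pi)
                              for pi, ci in zip(p, center)))

    maj_center = mean(majority)
    majority = sorted(majority,          # keep the points nearest the mean
                      key=lambda p: sum((a - b) ** 2
                                        for a, b in zip(p, maj_center)))
    return minority, majority[:target]
```

Real SMOTE interpolates toward nearest minority neighbours rather than the class mean; the mean is used here only to keep the sketch dependency-free.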
Klomsae, Atcharin; Auephanwiriyakul, Sansanee; Theera-Umpon, Nipon, "A Novel String Grammar Fuzzy C-Medians," in Fuzzy Systems (FUZZ-IEEE), 2015 IEEE International Conference on, pp. 1-5, 2-5 Aug. 2015. doi: 10.1109/FUZZ-IEEE.2015.7338109
Abstract: One of the popular classification problems is the syntactic pattern recognition. A syntactic pattern can be described using string grammar. The string grammar hard C-means is one of the classification algorithms in syntactic pattern recognition. However, it has been proved that fuzzy clustering is better than hard clustering. Hence, in this paper we develop a string grammar fuzzy C-medians algorithm. In particular, the string grammar fuzzy C-medians algorithm is a counterpart of fuzzy C-medians in which a fuzzy median approach is applied for finding fuzzy median string as the center of string data. However, the fuzzy median string may not provide a good clustering result. We then modified a method to compute fuzzy median string with the edition operations (insertion, deletion, and substitution) over each symbol of the string. The fuzzy C-medians with regular fuzzy median and the one with the modified fuzzy median are implemented on 3 real data sets, i.e., Copenhagen chromosomes data set, MNIST database of handwritten digits, and USPS database of handwritten digits. We also compare the results with those from the string grammar hard C-means. The results show that the string grammar fuzzy C-medians is better than the string grammar hard C-means.
Keywords: Biological cells; Clustering algorithms; Grammar; Mathematical model; Prototypes; Syntactics; Training; Levenshtein distance; fuzzy median; string grammar fuzzy c-medians; syntactic pattern recognition (ID#: 15-8216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338109&isnumber=7337796
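The edit operations underlying the fuzzy median (insertion, deletion, substitution, i.e., Levenshtein distance) and the usual set-median starting point — the member string minimizing total distance to the set, before symbol-wise refinement — can be sketched as follows; the fuzzy-membership weighting of the paper is omitted:

```python
def levenshtein(a, b):
    """Edit distance with unit-cost insertion, deletion, substitution,
    computed row by row in O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def set_median(strings):
    """Member of `strings` minimizing the summed edit distance to all
    others — a common starting point for median-string refinement."""
    return min(strings,
               key=lambda s: sum(levenshtein(s, t) for t in strings))
```

The (generalized) median string, which the paper refines symbol by symbol, need not be a member of the set; the set median above is only the standard initialization.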
El Rouayheb, S.; Goparaju, S.; Han Mao Kiah; Milenkovic, O., "Synchronizing Edits in Distributed Storage Networks," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 1472-1476, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282700
Abstract: We consider the problem of synchronizing data in distributed storage networks under edits that include deletions and insertions. We present modifications of codes on distributed storage systems that allow updates in the parity-check values to be performed with one round of communication at low bit rates and a small storage overhead. Our main contributions are novel protocols for synchronizing both frequently updated and semi-static data, and protocols for data deduplication applications, based on intermediary coding using permutation and Vandermonde matrices.
Keywords: matrix algebra; parity check codes; Vandermonde matrices; code modifications; data deduplication applications; distributed storage networks; intermediary coding; parity-check values; permutation; Decision support systems; Distributed databases; Encoding; Maintenance engineering; Protocols; Synchronization; Tensile stress; Distributed storage; Synchronization (ID#: 15-8217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282700&isnumber=7282397
Qiwen Wang; Cadambe, V.; Jaggi, S.; Schwartz, M.; Medard, M., "File Updates Under Random/Arbitrary Insertions and Deletions," in Information Theory Workshop (ITW), 2015 IEEE, pp. 1-5, April 26 2015-May 1 2015. doi: 10.1109/ITW.2015.7133118
Abstract: A client/encoder edits a file, as modeled by an insertion-deletion (InDel) process. An old copy of the file is stored remotely at a data-centre/decoder, and is also available to the client. We consider the problem of throughput- and computationally-efficient communication from the client to the data-centre, to enable the server to update its copy to the newly edited file. We study two models for the source files/edit patterns: the random pre-edit sequence left-to-right random InDel (RPES-LtRRID) process, and the arbitrary pre-edit sequence arbitrary InDel (APES-AID) process. In both models, we consider the regime in which the number of insertions/deletions is a small (but constant) fraction of the original file. For both models we prove information-theoretic lower bounds on the best possible compression rates that enable file updates. Conversely, our compression algorithms use dynamic programming (DP) and entropy coding, and achieve rates that are approximately optimal.
Keywords: file organisation; APES-AID process; DP; RPES-LtRRID process; client/encoder; compression algorithms; compression rates; data-centre/decoder; dynamic programming; edited file; entropy coding; file updates; information-theoretic lower bounds; insertion-deletion process; pre-edit sequence arbitrary InDel process; random/arbitrary insertions; source files/edit patterns; Computational modeling; Decoding; Entropy; Markov processes; Radio access networks; Synchronization (ID#: 15-8218)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133118&isnumber=7133075
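To make the client-to-data-centre update concrete: the client can compute an edit script by dynamic programming and transmit only the script rather than the file. The sketch below uses Python's difflib (a DP-style alignment) and an uncompressed script, whereas the paper couples DP with entropy coding to approach the information-theoretic bounds:

```python
from difflib import SequenceMatcher

def edit_script(old, new):
    """List of edits turning `old` into `new`: ('del', i, j) removes
    old[i:j]; ('ins', i, text) inserts text at old position i.
    A replacement is sent as a delete plus an insert."""
    script = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
        if tag in ('delete', 'replace'):
            script.append(('del', i1, i2))
        if tag in ('insert', 'replace'):
            script.append(('ins', i1, new[j1:j2]))
    return script

def apply_script(old, script):
    """Replay an edit script at the data centre. Edits are applied
    right-to-left (stable sort keeps del before ins at equal positions)
    so earlier offsets remain valid."""
    out = old
    for op in sorted(script, key=lambda e: e[1], reverse=True):
        if op[0] == 'del':
            out = out[:op[1]] + out[op[2]:]
        else:
            out = out[:op[1]] + op[2] + out[op[1]:]
    return out
```

When the number of insertions/deletions is a small fraction of the file, the script is much shorter than the file itself, which is the regime both of the cited papers study.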
Mori, Shohei; Shibata, Fumihisa; Kimura, Asako; Tamura, Hideyuki, "Efficient Use of Textured 3D Model for Pre-observation-based Diminished Reality," in Mixed and Augmented Reality Workshops (ISMARW), 2015 IEEE International Symposium on, pp. 32-39, Sept. 29 2015-Oct. 3 2015. doi: 10.1109/ISMARW.2015.16
Abstract: Diminished reality (DR) deletes or diminishes undesirable objects from the perceived environments. We present a pre-observation-based DR (POB-DR) framework that uses a textured 3D model (T-3DM) of a scene for efficiently deleting undesirable objects. The proposed framework and T-3DM data structure enable geometric and photometric registration that allow the user to move in six degrees-of-freedom (6DoF) under dynamic lighting during the deletion process. To accomplish these tasks, we allow the user to pre-observe backgrounds to be occluded similar to existing POB-DR approaches and preserve hundreds of view-dependent images and triangle fans as a T-3DM. The proposed system effectively uses the T-3DM for all of processes to fill in the target region in the proposed deletion scheme. The results of our experiments demonstrate that the proposed system works in unknown 3D scenes and can handle rapid and drastic 6DoF camera motion and dynamic illumination changes.
Keywords: Cameras; Fans; Image color analysis; Lighting; Real-time systems; Rendering (computer graphics); Three-dimensional displays; Diminished reality; color correction; image-based rendering; mixed/augmented reality; tracking (ID#: 15-8219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344754&isnumber=7344734
Sangari, A.S.; Leo, J.M., "Polynomial Based Light Weight Security in Wireless Body Area Network," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-5, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282331
Abstract: Wireless body area networks (WBANs) have gained increasing attention in healthcare applications. The development of WBANs is essential for telemedicine and mobile healthcare, enabling remote monitoring of patients during their day-to-day activities without restricting their freedom. In a WBAN, body sensors in and around the patient collect patient information and transfer it to a remote server over a wireless medium. Wearable sensors can monitor vital signs such as temperature, pulse, glucose level, and ECG. However, WBANs face many research challenges when deployed. The sensors have limited resources in terms of size, memory, and computational capacity. WBAN operation is closely tied to a patient's sensitive medical information, and unsecured information can lead to wrong diagnosis and treatment. Security is critical in the wireless medium: unauthorized people can easily access patients' data, and attackers can modify it. The creation, deletion, and modification of medical information therefore need a strict security mechanism.
Keywords: body area networks; body sensor networks; health care; patient diagnosis; patient monitoring; telecommunication security; telemedicine; WBAN; mobile healthcare; patient information collection; patient sensitive medical information; polynomial based light weight security; remote patient monitoring; remote server; telemedicine; wearable sensor; wireless body area network; wireless medium; Biomedical monitoring; Body area networks; Monitoring; Reliability; Wireless communication; Zigbee; Electro cardiogram signal; Wireless body area network (ID#: 15-8220)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282331&isnumber=7282219
Dongyao Wang; Lei Zhang; Yinan Qi; Quddus, A.u., "Localized Mobility Management for SDN-Integrated LTE Backhaul Networks," in Vehicular Technology Conference (VTC Spring), 2015 IEEE 81st, pp. 1-6, 11-14 May 2015. doi: 10.1109/VTCSpring.2015.7145916
Abstract: Small cell (SCell) and Software Define Network (SDN) are two key enablers to meet the evolutional requirements of future telecommunication networks, but still on the initial study stage with lots of challenges faced. In this paper, the problem of mobility management in SDN-integrated LTE (Long Term Evolution) mobile backhaul network is investigated. An 802.1ad double tagging scheme is designed for traffic forwarding between Serving Gateway (S-GW) and SCell with QoS (Quality of Service) differentiation support. In addition, a dynamic localized forwarding scheme is proposed for packet delivery of the ongoing traffic session to facilitate the mobility of UE within a dense SCell network. With this proposal, the data packets of an ongoing session can be forwarded from the source SCell to the target SCell instead of switching the whole forwarding path, which can drastically save the path-switch signalling cost in this SDN network. Numerical results show that compared with traditional path switch policy, more than 50% signalling cost can be reduced, even considering the impact on the forwarding path deletion when session ceases. The performance of data delivery is also analysed, which demonstrates the introduced extra delivery cost is acceptable and even negligible in case of short forwarding chain or large backhaul latency.
Keywords: Long Term Evolution; mobility management (mobile radio); quality of service; software defined networking; synchronisation; telecommunication network topology; telecommunication traffic; wireless LAN;IEEE 802.1ad double tagging scheme; LTE mobile backhaul network; Long Term Evolution; QoS; S-GW; SCell network; SDN; backhaul latency; data delivery; delivery cost; dynamic localized forwarding scheme; forwarding chain; forwarding path deletion; localized mobility management; packet delivery; path switch policy; path-switch signalling cost; quality of service; serving gateway; small cell; software defined network; telecommunication networks; traffic forwarding; traffic session; Handover; Mobile computing; Mobile radio mobility management; Switches (ID#: 15-8221)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145916&isnumber=7145573
Da Zhang; Hao Wang; Kaixi Hou; Jing Zhang; Wu-chun Feng, "pDindel: Accelerating Indel Detection on a Multicore CPU Architecture with SIMD," in Computational Advances in Bio and Medical Sciences (ICCABS), 2015 IEEE 5th International Conference on, pp. 1-6, 15-17 Oct. 2015. doi: 10.1109/ICCABS.2015.7344721
Abstract: Small insertions and deletions (indels) of bases in the DNA of an organism can map to functionally important sites in human genes, for example, and in turn influence human traits and diseases. Dindel detects such indels, particularly small indels (< 50 nucleotides), from short-read data by using a Bayesian approach. Due to its high sensitivity in detecting small indels, Dindel has been adopted by many bioinformatics projects, e.g., the 1,000 Genomes Project, despite its pedestrian performance. In this paper, we first analyze and characterize the current version of Dindel to identify performance bottlenecks. We then design, implement, and optimize a parallelized Dindel (pDindel) for a multicore CPU architecture by exploiting thread-level parallelism (TLP) and data-level parallelism (DLP). Our optimized pDindel achieves up to a 37-fold speedup for the computational part of Dindel and a 9-fold speedup in overall execution time over the current version of Dindel.
Keywords: Bayes methods; DNA; Genomics; Multicore processing; Parallel processing; Sensitivity; Sequential analysis; Dindel; OpenMP; indel detection; multithreading; short-read mapping; vectorization (ID#: 15-8222)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344721&isnumber=7344698
Dengfeng Yao; Abulizi, A.; Renkui Hou, "An Improved Algorithm of Materialized View Selection within the Confinement of Space," in Big Data and Cloud Computing (BDCloud), 2015 IEEE Fifth International Conference on, pp. 310-313, 26-28 Aug. 2015. doi: 10.1109/BDCloud.2015.49
Abstract: Data warehouses store large numbers of materialized views to accelerate OLAP server responses to queries. How to return correct query results efficiently and accurately from materialized views within a limited storage space is an important and recognized difficulty in ROLAP server design. This paper presents an improved and effective algorithm for materialized view selection. The algorithm considers the effect on overall space and cost of adding candidate materialized views and of removing views, and optimizes the addition and deletion of candidate views by preferring lower-cost selections. Analysis and tests show that the algorithm achieves good results efficiently.
Keywords: data mining; data warehouses; storage management; OLAP server response; ROLAP server design; data warehouse; materialized view selection; space confinement; storage space; Algorithm design and analysis; Electronics packaging; Greedy algorithms; Indexes; Market research; Servers; Time factors; ROLAP; materialized view; multi-dimensional analysis (ID#: 15-8223)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310763&isnumber=7310694
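The abstract does not reproduce the paper's algorithm; the classic greedy baseline this line of work improves upon, picking views by benefit per unit of space until the storage budget is exhausted, can be sketched as follows (an illustrative sketch; the paper's algorithm additionally reconsiders deletions, which is not modeled here):

```python
def greedy_select(views, budget):
    """Select views maximizing benefit per unit of space within a budget.

    `views` maps name -> (benefit, size). This is the standard greedy
    baseline for materialized view selection under a space constraint.
    """
    chosen, used = [], 0
    remaining = dict(views)
    while remaining:
        # Consider only views that still fit in the remaining space.
        fitting = {n: (b, s) for n, (b, s) in remaining.items()
                   if used + s <= budget}
        if not fitting:
            break
        # Pick the view with the highest benefit density.
        name = max(fitting, key=lambda n: fitting[n][0] / fitting[n][1])
        chosen.append(name)
        used += remaining.pop(name)[1]
    return chosen, used
```

For example, with a budget of 8 and views `{"v1": (10, 5), "v2": (9, 3), "v3": (4, 4)}`, the density ordering selects v2 then v1, filling the budget exactly.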
![]() |
Hard Problems: Policy-based Security Governance 2015 |
Policy-based governance of security is one of the five hard problems in the Science of Security. The work cited here was presented in 2015.
Zia, T.A., "Organisations Capability and Aptitude towards IT Security Governance," in IT Convergence and Security (ICITCS), 2015 5th International Conference on, pp. 1-4, 24-27 Aug. 2015. doi: 10.1109/ICITCS.2015.7293005
Abstract: In today's more digitized world, the notion of Information Technology's (IT) delivery of value to businesses has been stretched to the mitigation of broader organisational risk. This has prompted higher management to provide for IT security at all levels of organisations' governance and decision-making processes. Under such stringent governance, IT security is considered one of the core business processes, with up-to-date policies and procedures to be in place at all levels of governance. This paper provides IT security practitioners' views on how IT security is managed in their organisations. A close look at some of the IT security governance standards and how these standards are applied in organisations gives us astonishing results about organisations' capability levels, with most practitioners believing IT security processes are either not fully implemented or fail to achieve their purpose.
Keywords: organisational aspects; security of data; IT delivery; IT security governance; decision making process; information technology; organisation aptitude; organisation capability; Australia; IEC Standards; ISO Standards; Information security (ID#: 15-8611)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293005&isnumber=7292885
Jorshari, F.Z.; Tawil, R.H., "A High-Level Scheme for an Ontology-Based Compliance Framework in Software Development," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1479-1487, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.300
Abstract: The software development market is currently witnessing an increasing demand for software applications' conformance with the international regime of GRC for Governance, Risk and Compliance. In this paper, we propose a compliance requirement analysis method for early stages of software development based on a semantically-rich model, where a mapping can be established from legal and regulatory requirements relevant to system context to software system business goals and contexts. The proposed semantic model consists of a number of ontologies each corresponding to a knowledge component within the developed framework of our approach. Each ontology is a thesaurus of concepts in the compliance and risk assessment domain related to system development along with relationships and rules between concepts that comprise the domain knowledge. The main contribution of the work presented in this paper is a case study that demonstrates how description-logic reasoning techniques can be used to simulate legal reasoning requirements employed by legal professions against the description of each ontology.
Keywords: data protection; description logic; ontologies (artificial intelligence); professional aspects; risk management; software houses; GRC; compliance requirement analysis method; description-logic reasoning techniques; domain knowledge; governance-risk-and-compliance; high-level scheme; knowledge component; legal professions; legal reasoning requirements; legal requirements; ontologies; ontology-based compliance framework; regulatory requirements; risk assessment domain; semantic model; semantically-rich model; software application conformance; software development; software system business contexts; software system business goals; system development; Cascading style sheets; Conferences; Cyberspace; Embedded software; High performance computing; Safety; Security; Compliance; Data protection; Ontology; Privacy; Requirement Engineeering; Risk; Security; Standard (ID#: 15-8612)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336377&isnumber=7336120
Shaun Shei; Marquez Alcaniz, L.; Mouratidis, H.; Delaney, A.; Rosado, D.G.; Fernandez-Medina, E., "Modelling Secure Cloud Systems Based on System Requirements," in Evolving Security and Privacy Requirements Engineering (ESPRE), 2015 IEEE 2nd Workshop on, pp. 19-24, 25-25 Aug. 2015. doi: 10.1109/ESPRE.2015.7330163
Abstract: We enhance an existing security governance framework for migrating legacy systems to the cloud by holistically modelling the cloud infrastructure. To achieve this we demonstrate how components of the cloud infrastructure can be identified from existing security requirements models. We further extend the modelling language to capture cloud security requirements through a dual layered view of the cloud infrastructure, where the notions are supported through a running example.
Keywords: cloud computing; security of data; software maintenance; specification languages; cloud infrastructure; cloud security requirements; legacy systems; modelling language; secure cloud system modeling; security governance framework; security requirements models; system requirements; Aging; Analytical models; Cloud computing; Computational modeling; Guidelines; Physical layer; Security (ID#: 15-8613)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330163&isnumber=7330155
Piliouras, T.C.; Suss, R.J.; Yu, P.L.; Kachalia, S.V.; Bangera, R.S.; Kalra, R.R.; Maniyar, M.P., "The Rise of Mobile Technology in Healthcare: The Challenge of Securing Teleradiology," in Emerging Technologies for a Smarter World (CEWIT), 2015 12th International Conference & Expo on, pp. 1-6, 19-20 Oct. 2015. doi: 10.1109/CEWIT.2015.7338167
Abstract: There are many potential security risks associated with viewing, accessing, and storage of DICOM files on mobile devices. Digital Imaging and Communications in Medicine (DICOM) is the industry standard for the communication and management of medical imaging. DICOM files contain multidimensional image data and associated meta-data (e.g., patient name, date of birth, etc.) designated as electronic protected health information (e-PHI). The HIPAA (Health Insurance Portability and Accountability Act) Privacy Rule, the HIPAA Security Rule, the ARRA (American Recovery and Reinvestment Act), the Health Information Technology for Economic and Clinical Health Act (HITECH), and applicable state law mandate comprehensive administrative, physical, and technical security safeguards to protect e-PHI, which includes (DICOM) medical images. Implementation of HIPAA security safeguards is difficult and often falls short. Mobile device use is proliferating among healthcare providers, along with associated risks to data confidentiality, integrity, and availability (CIA). Mobile devices and laptops are implicated in widespread data breaches of millions of patients' data. These risks arise in many ways, including: i) inherent vulnerabilities of popular mobile operating systems (e.g., iOS, Android, Windows Phone); ii) sharing of mobile devices by multiple users; iii) lost or stolen devices; iv) transmission of clinical images over public (unsecured) wireless networks; v) lack of adequate password protection; vi) failure to use recommended safety precautions to protect data on a lost device (e.g., data wiping); and vii) use of personal mobile devices while accessing or sharing e-PHI. Analysis of commonly used methods for DICOM image sharing on mobile devices elucidates areas of vulnerability and points to the need for holistic security approaches to ensure HIPAA compliance within and across clinical settings.
Innovative information governance strategies and new security approaches are needed to protect against data breaches, and to aid in the collection and analysis of compliance data. Generally, it is difficult to share DICOM images across different HIPAA compliant Picture Archive and Communication Systems (PACS) and certified electronic health record (EHR) systems - while it is easy to share images using non-FDA approved, personal devices on unsecured networks. End-users in clinical settings must understand and strictly adhere to recommended mobile security precautions, and should be held to greater standards of personal accountability when they fail to do so.
Keywords: DICOM; Medical services; Mobile communication; Mobile handsets; Picture archiving and communication systems; Security; DICOM file sharing; DICOM mobile and cloud solutions; EHRs; HIPAA violation avoidance; PACS; information governance; mobile applications management; mobile device management (ID#: 15-8614)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338167&isnumber=7338153
Sricharan, K.G.; Kisore, N.R., "Mathematical Model to Study Propagation of Computer Worm in a Network," in Advance Computing Conference (IACC), 2015 IEEE International, pp. 772-777, 12-13 June 2015. doi: 10.1109/IADCC.2015.7154812
Abstract: Large-scale digitization of essential services like governance, banking, and public utilities has made the internet an attractive target for worm programmers to launch large-scale cyber attacks with the intention of either stealing information or disrupting services. Large-scale attacks continue to happen in spite of the best efforts to secure networks by adopting new protection mechanisms. Security comes at a significant operational cost, and organizations need to adopt an effective and efficient strategy so that the operational costs do not exceed the combined loss in the event of a widespread attack. The ability to assess damage in the event of a cyber attack and choose an appropriate, cost-effective strategy depends on the ability to successfully model the spread of a cyber attack and thus determine the number of machines that would be affected. Existing models fail to take into account the impact of deployed security techniques on worm propagation when assessing the impact of a worm on the computer network. Further, they consider network links to be homogeneous and lack the granularity to capture the heterogeneity in security risk across the various links in a computer network. In this paper we propose a stochastic model that takes into account the fact that different network paths have different risk levels, and that also captures the impact of security defenses based on memory randomization on worm propagation.
Keywords: Internet; computer network security; invasive software; stochastic processes; Internet; computer network; computer worm propagation; cyber attack;essential service digitization; mathematical model; memory randomization; network security; operational costs; protection mechanisms; security risk; stochastic model; Computational modeling; Computers; Grippers; Internet; Mathematical model; Security; Stochastic processes; Cyber defense; Large-scale cyber attack; Stochastic model (ID#: 15-8615)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154812&isnumber=7154658
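A minimal discrete-time sketch of the kind of model the abstract describes, per-link risk levels plus a defense (e.g., memory randomization) that blocks some exploit attempts, can be written as follows. The paper's exact formulation is not given in the abstract; all parameter names here are illustrative.

```python
import random

def simulate_worm(adj, risk, p_defended, seeds, steps, rng=None):
    """Simulate worm spread over links with heterogeneous risk.

    adj: node -> list of neighbours; risk: (u, v) -> per-step
    compromise probability for that link; p_defended: probability
    that a defense such as memory randomization blocks an otherwise
    successful attempt.
    """
    rng = rng or random.Random()
    infected = set(seeds)
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v in infected or v in new:
                    continue
                # Each link carries its own risk level; a defended
                # host survives an otherwise successful exploit.
                if rng.random() < risk[(u, v)] and rng.random() >= p_defended:
                    new.add(v)
        infected |= new
    return infected
```

Running such a simulation many times with seeded random generators yields the expected number of compromised machines under a given defense deployment, which is the quantity the paper's cost/loss trade-off depends on.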
Wang Li; Liu Fengming; Yang Rongrong; Sun Wenxing, "Research on Spreading Mechanism of Network Rumors Based on Potential Energy," in Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2015 International Conference on, pp. 282-285, 17-19 Sept. 2015. doi: 10.1109/CyberC.2015.62
Abstract: The governance of network rumors is related to social stability, economic development and national security. Research on the spreading mechanism of network rumors is an effective route to governing them. In this paper, a model of rumor spreading based on gravitational potential energy is put forward from two aspects: the rumors and the receivers. Different rumors have different attractions, and different individuals are affected differently by the same rumor. Gravitational potential energy is used to express the appeal of a rumor spreader to receivers. If the appeal exceeds a certain threshold, a receiver becomes a spreader, forming a new gravitational field that attracts its neighbor nodes. Several factors of rumor spreading are considered in the model. Based on real rumor cases, the model is simulated on the NetLogo platform, and the experimental results fit well with real rumor spreading. Accordingly, corresponding strategies and suggestions for rumor governance are proposed.
Keywords: social networking (online); NetLogo platform; economic development; governance rumors; gravitational field; gravitational potential energy; national security; network rumors governance; rumor receivers; rumor spreader; rumor spreading mechanism; rumors spreading; social networks; social stability; Attenuation; Economics; Government; Gravity; Mathematical model; Media; Potential energy; Gravitational Potential Energy; Micro-Network; Rumor (ID#: 15-8616)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7307828&isnumber=7307766
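The threshold mechanism the abstract describes (a receiver becomes a spreader once a spreader's appeal exceeds a threshold, then attracts its own neighbours) is a cascade, and can be sketched as follows. Here `appeal` stands in for the paper's gravitational potential energy and is purely illustrative.

```python
from collections import deque

def rumor_cascade(neighbors, appeal, threshold, seeds):
    """Breadth-first threshold cascade.

    neighbors: node -> list of adjacent nodes; appeal[(u, v)]: how
    strongly spreader u attracts receiver v (stand-in for the paper's
    potential-energy term). A receiver whose appeal exceeds
    `threshold` converts to a spreader and exposes its neighbours.
    """
    spreaders = set(seeds)
    frontier = deque(seeds)
    while frontier:
        u = frontier.popleft()
        for v in neighbors[u]:
            if v not in spreaders and appeal[(u, v)] > threshold:
                spreaders.add(v)
                frontier.append(v)
    return spreaders
```

Raising the threshold (e.g., through debunking that lowers a rumor's attraction) shrinks the final spreader set, which is the lever behind the governance strategies the paper proposes.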
Priyadarshy, S., "Big data, Smart Data, Dark Data and Open Data: eGovernment of the Future," in eDemocracy & eGovernment (ICEDEG), 2015 Second International Conference on, pp. 16-16, 8-10 April 2015. doi: 10.1109/ICEDEG.2015.7114483
Abstract: Summary form only given. The convergence of multiple rEvolutions - Internet, Data, Software, Computing, Hardware, and Personalized attention - is transforming how governments across the world provide services to their citizens and remain relevant. The convergence of these forces enables governments to leverage Big Data, Smart Data and Dark Data through the concept of Big Open Data. Big Open Data provides holistic views of citizens and other entities, real-time delivery of information to protect and serve citizens, and prevention of fraud and abuse of a country's resources. With a focus on innovation, strategy, and better and faster decisions, a government can maximize the benefits of Big Data. While harnessing Big Data has proven value in many enterprises and organizations, there are many pitfalls; what they are and how to avoid them will be presented. Big Data, by virtue of its narrow definition, creates fear about the privacy, security and governance of the data. One of the pillars of Big Data is Virtual, and by taking advantage of this pillar along with the other six pillars of Big Data, one can address the governance, privacy and security aspects of Big Data.
Keywords: Big Data; data privacy; government data processing; Big Open Data; Internet; citizen protection; citizen services; computing analysis; country resource abuse prevention; country resource fraud prevention; dark data; data governance; data privacy; data security; eGovernment; hardware analysis; information delivery; personalized attention factor; revolutions; smart data; software analysis; Big data; Convergence; Data privacy; Government; Internet; Security; Software (ID#: 15-8617)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7114483&isnumber=7114453
Fischer, D.; Spada, M.; Job, J.-F.; Leclerc, T.; Mauny, C.; Thimont, J., "The Weak Point: A Framework to Enhance Operational Mission Data Systems Security," in Aerospace Conference, 2015 IEEE, pp. 1-17, 7-14 March 2015. doi: 10.1109/AERO.2015.7118924
Abstract: ESA and other space agencies operate assets of very high tangible and intangible value. These embed and are operated through a large number of data systems. The security and robustness of these data systems is becoming more and more important. In our paper, we present the results of the Generic Application Security Framework (GASF) study. The GASF enables the efficient development of security enhanced operational mission data systems by introducing a secure software development lifecycle but avoiding unnecessary overhead for developers and project managers. The focus lies on complex aspects of requirements specification, software assurance, certification, and governance.
Keywords: information systems; project management; security of data; software management; GASF; data systems; generic application security framework; information systems; intangible value; operational mission data systems security; project managers; secure software development lifecycle; security enhanced operational mission data systems; space agencies; Biographies; Certification; Europe; Indexes; Security; Software maintenance; Requirements Specification; Risk Assessment; Secure Software Engineering; Software Development Lifecycle (ID#: 15-8618)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118924&isnumber=7118873
Chatterjee, P.; Nath, A., "Biometric Authentication for UID-based Smart and Ubiquitous Services in India," in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, pp. 662-667, 4-6 April 2015. doi: 10.1109/CSNT.2015.195
Abstract: India runs one of the largest governance undertakings in the world, serving a population of 1.2 billion people. Recent years have seen massive initiatives from all sides to use inclusive technology in delivering public services. Several ventures, like 'Digital India', have been undertaken to improve the technology built into different governance systems. But ageing systems and isolated service domains with voluminous structures have made this task herculean. Biometric authentication and UID-based services enter this scenario as an effort to simplify the manner in which services are delivered to citizens. Though the Aadhaar project in India has been functioning in full vigor since its inception, the service domains stay somewhat confined to specific areas. The authors extend this UID service domain to different potential sectors for delivering smart services, covering transport, banking and even voting models. Biometric authentication, in turn, is proposed as an alternative verification and authentication mechanism in these extended sectors. Owing to their robustness and simplicity, biometric authentication techniques could be used to reduce the chances of corruption by providing comprehensive, linked-up security. Such verification and authentication mechanisms, combined with UID-based services, would make the existing systems smart besides laying the foundation for ubiquitous services in India. The authors also conducted a survey to understand the digital preparedness of the public in accepting biometric techniques. The responses were positive, portraying the dire need to implement interlinked ubiquitous services, which could be made more robust, secure and seamless with the use of biometric authentication mechanisms.
Keywords: biometrics (access control); security of data; ubiquitous computing; Aadhaar project; India; UID-based smart service; banking; biometric authentication technique; digital India; interlinked ubiquitous service; transport; unique identification; voting model; Authentication; Biological system modeling; Databases; Face recognition; Fingerprint recognition; Radiation detectors; Robustness; UID; biometric; digital kiosks; electronic fingerprint; governance; iris technology; smart systems; ubiquitous (ID#: 15-8619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280001&isnumber=7279856
Basu, S.S.; Tripathy, S.; Chowdhury, A.R., "Design Challenges and Security Issues in the Internet of Things," in Region 10 Symposium (TENSYMP), 2015 IEEE, pp. 90-93, 13-15 May 2015. doi: 10.1109/TENSYMP.2015.25
Abstract: The world is rapidly getting connected. Commonplace everyday things are providing and consuming software services exposed by other things and service providers. A mashup of such services extends the reach of the current Internet to potentially resource-constrained "Things", constituting what is being referred to as the Internet of Things (IoT). IoT is finding applications in various fields like Smart Cities, Smart Grids, Smart Transportation, e-health and e-governance. The complexity of developing IoT solutions arises from their diversity, right from device capability all the way to business requirements. In this paper we focus primarily on the security issues related to design challenges in IoT applications and present an end-to-end security framework.
Keywords: Internet; Internet of Things; security of data; Internet of Things; IoT; e-governance; e-health; end-to-end security framework; service providers; smart cities; smart grids; smart transportation ;software services; Computer crime; Encryption; Internet of things; Peer-to-peer computing; Protocols; End-to-end (E2E) security; Internet of Things (IoT); Resource constrained devices; Security (ID#: 15-8620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166245&isnumber=7166213
Derhamy, H.; Eliasson, J.; Delsing, J.; Priller, P., "A Survey of Commercial Frameworks for the Internet of Things," in Emerging Technologies & Factory Automation (ETFA), 2015 IEEE 20th Conference on, pp.1-8, 8-11 Sept. 2015. doi: 10.1109/ETFA.2015.7301661
Abstract: In 2011, Ericsson and Cisco estimated 50 billion Internet-connected devices by 2020; encouraged by this, industry is developing application frameworks to scale the Internet of Things. This paper presents a survey of commercial frameworks and platforms designed for developing and running Internet of Things applications. The survey covers frameworks supported by big players in the software and electronics industries. The frameworks are evaluated against criteria such as architectural approach, industry support, standards-based protocols and interoperability, security, hardware requirements, governance, and support for rapid application development. There is a multitude of frameworks available; a total of 17 frameworks and platforms are considered here. The intention of this paper is to present recent developments in commercial IoT frameworks and, furthermore, to identify trends in the current design of frameworks for the Internet of Things, enabling massively connected cyber-physical systems.
Keywords: Internet of Things; Internet of Things; commercial IoT frameworks; Internet of things; Interoperability; Protocols; Security; Servers; Standards (ID#: 15-8621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301661&isnumber=7301399
Kuusk, A.; Jing Gao, "Factors for Successfully Integrating Operational and Information Technologies," in Management of Engineering and Technology (PICMET), 2015 Portland International Conference on, pp. 1513-1523, 2-6 Aug. 2015. doi: 10.1109/PICMET.2015.7273136
Abstract: Technology, organisation and people factors influence the success of technology integration. This paper explores recent research findings on the integration of Operational Technology (OT) and Information Technology (IT) in organisations whose primary function is managing assets. The main difference between the two technologies is that one is attached to assets and governs real-time asset control and performance data, while the other holds static information and is traditionally used to make decisions. Understanding the factors for integrating the technologies is important: if organisations can leverage an understanding of how people, process and technology factors influence the phases of OT/IT integration, they can improve asset performance and control, and thereby influence the consumption, cost, maintenance and the consistent, reliable, secure provision of critical services such as energy and water. Integration theory applicability may be extended to the asset management environment and provide practitioners with a holistic, end-to-end, integrated framework to guide the efficient integration of OT and IT. The paper explores the integration phases, influencing factors and challenges, such as the role of information governance, security and reliability decision rights, identified in survey and case study research with asset management practitioners. The research concludes by suggesting a validated holistic framework for integrating OT and IT in asset management oriented organisations.
Keywords: asset management; business data processing; decision making; organisational aspects; IT; Integration theory applicability; OT; asset management oriented organisations; decision making; information technology; operational technology; performance data; real-time asset control; technology integration; Context; Finance; Information technology; Manufacturing; Object recognition; Reliability (ID#: 15-8622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273136&isnumber=7272950
Jiawei Hao; Yan Zhou; Weiran Xu, "Impact of Venture Investment Shareholders on the Financing Behavior of the Listing Corporation on A-Share Market," in Service Systems and Service Management (ICSSSM), 2015 12th International Conference on, pp. 1-6, 22-24 June 2015. doi: 10.1109/ICSSSM.2015.7170235
Abstract: Venture capital started in the 1980s in China, and its scale has grown rapidly over the past thirty years. Because of the lock-up period on stock rights in China, venture capital continues to affect all aspects of corporate governance of a listed corporation after its successful listing. At present, Chinese listed corporations generally face financing problems, mainly narrow financing channels, high exogenous financing costs, and an obvious preference for equity financing. This paper discusses whether the participation of venture capital affects the financing behavior of listed corporations. Based on the relevant literature, this paper collects data on listed corporations in the A-stock market during 2006-2013 as samples for empirical testing. The results show that the participation of venture capital can effectively increase the debt financing and equity financing of listed corporations.
Keywords: stock markets; venture capital; A-share market; debt financing; equity financing; financing behavior; listing corporation; venture capital; venture investment shareholders; Companies; Indexes; Investment; Security; Venture capital; debt financing; equity financing; listing Corporation; venture capital (ID#: 15-8623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170235&isnumber=7170133
Musarurwa, A.; Jazri, H., "A Proposed Framework to Measure Growth of Critical Information Infrastructure Protection in Africa," in Emerging Trends in Networks and Computer Communications (ETNCC), 2015 International Conference on, pp. 85-90, 17-20 May 2015. doi: 10.1109/ETNCC.2015.7184813
Abstract: Historically, Africa was associated with a very low broadband penetration rate. Since the turn of the millennium, there has been a massive expansion in the penetration of undersea SEACOM cables, resulting in exponential growth in fixed and mobile broadband on the continent. This paper investigates the effect of this exponential broadband growth on Critical Information Infrastructure Protection (CIIP) in Africa and proposes a framework that can be used to measure the progress of CIIP and its impact in Africa.
Keywords: broadband networks; computer network security; Africa; CIIP; broadband penetration rate; critical information infrastructure protection; exponential broadband growth; fixed broadband; mobile broadband; seacom cables; Africa; Broadband communication; Computer security; Education; Internet; Mobile communication; Broadband Development in Africa; Critical Information Infrastructure Protection; Critical Infrastructure Protection in Africa; Cyber Security in Africa; ICT Governance in Africa (ID#: 15-8624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184813&isnumber=7184793
Orji, U.J., "Multilateral Legal Responses to Cyber Security in Africa: Any Hope for Effective International Cooperation?," in Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, pp. 105-118, 26-29 May 2015. doi: 10.1109/CYCON.2015.7158472
Abstract: Within the past decade, Africa has witnessed a phenomenal growth in Internet penetration and the use of Information Communications Technologies (ICTs). However, the spread of ICTs and Internet penetration has also raised concerns about cyber security at regional and sub-regional governance forums. This has led African intergovernmental organizations to develop legal frameworks for cyber security. At the sub-regional level, the Economic Community of West African States (ECOWAS) has adopted a Directive on Cybercrime, while the Common Market for Eastern and Southern Africa (COMESA) and the Southern African Development Community (SADC) have adopted model laws. At the regional level, the African Union (AU) has adopted a Convention on Cyber Security and Personal Data Protection. This paper seeks to examine these legal instruments with a view to determining whether they provide adequate frameworks for mutual assistance and international cooperation on cyber security and cyber crime control. The paper will argue that the AU Convention on Cyber Security and Personal Data Protection does not provide an adequate framework for mutual assistance and international cooperation amongst African States and that this state of affairs may limit and fragment international cooperation and mutual assistance along sub-regional lines or bilateral arrangements. It will recommend the development of international cooperation and mutual assistance mechanisms within the framework of the AU and also make a case for the establishment of a regional Computer Emergency Response Team to enhance cooperation as well as the coordination of responses to cyber security incidents.
Keywords: Internet; data protection; industrial property; security of data; AU; African Union; African intergovernmental organizations; COMESA; Common Market for Eastern and Southern Africa; ECOWAS; Economic Community of West African States; ICTs; Internet penetration; Southern African Development Community; cyber crime control; cyber security; effective international cooperation; information and communication technology; legal instruments; multilateral legal responses; mutual assistance mechanisms; personal data protection; regional governance forums; sub-regional governance forums; Africa; Computer crime; Computers; Gold; Law; African Union; Computer Emergency Response Teams; Mutual Legal Assistance; dual criminality (ID#: 15-8625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158472&isnumber=7158456
De Lange, J.; Von Solms, R.; Gerber, M., "Better Information Security Management In Municipalities," in IST-Africa Conference, 2015, pp. 1-10, 6-8 May 2015. doi: 10.1109/ISTAFRICA.2015.7190529
Abstract: Municipalities handle valuable information in very large quantities on a daily basis. Due to the value, and often confidential nature, of this information, the protection of the information and the related technologies is a key concern for municipalities, especially in South Africa. For this very reason, several official government documents require South African municipalities to implement effective information security management systems. However, according to the Auditor General of South Africa, municipalities are struggling in this regard. This study uses a literature review, document analysis, and argumentation to identify the crucial components of an information security management system. These components are then presented logically in a hierarchical structure that may assist municipalities in improving their individual information security management processes. Addressing these components can also help municipalities across Africa improve information security management.
Keywords: document handling; government data processing; local government; security of data; South Africa; document analysis; hierarchical structure; information protection; information security management systems; literature review; municipalities; official government documents; Best practices; IEC Standards; ISO Standards; Information security; Local government; Governance of Information Security; ISO/IEC 27002 standard; Information Security; Information Security Management; Information Security Policy; Municipal Council; Municipalities (ID#: 15-8626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7190529&isnumber=7190513
Zhang Ying-Hua; Ji Yu-Chen; Huang Zhi-An; Zhao Qian; Gao Yu-Kun, "The Laboratory Studies of Slope Stability in Luming Molybdenum Mine West-I District," in Measuring Technology and Mechatronics Automation (ICMTMA), 2015 Seventh International Conference on, pp. 1224-1227, 13-14 June 2015. doi: 10.1109/ICMTMA.2015.298
Abstract: Slope height increases gradually with mining depth, and the growing probability of slope instability, together with the difficulty of protecting an expanding stope from destruction, exposes a mine to serious safety risks and economic losses. Slope simulation experiments allow the deformation and damage evolution of the object to be observed and recorded directly, and the stress at each stage of deformation can be obtained through stress analysis. Analyzing the stability of the slope in the 2-2 profile of the Luming molybdenum West-I district by similarity simulation offers several advantages: it is intuitive, clear, and low-cost, has a short test cycle, and can account for important factors that are difficult to include in mathematical analysis. First, the ratio for each simulated stratum was obtained from similar-material ratio tests. A similar model was then established to simulate the excavation process, with the displacement of the test model and the stress changes at the strain gauges recorded directly at every step of slope excavation. Analysis of the experimental data leads to the conclusion that stress concentration is most likely to occur at the 180-meter and 240-meter platforms located on the F3 and F4 faults, which should therefore be prioritized in slope monitoring and reinforcement. The study helps protect the safety of Luming molybdenum production.
Keywords: deformation; geotechnical engineering; mechanical stability; mining; molybdenum; strain gauges; stress analysis; Luming molybdenum mine West-I district; Luming molybdenum production; damage evolution; economic losses; experimental data analysis; laboratory studies; material ratio test; mathematical analysis; mining depth; security risk; size 180 m; size 240 m; slope height; slope instability probability; slope simulation experiments; slope stability analysis; strain gauges; stress analysis; stress concentration; stress deformation; the excavation process; Analytical models; Molybdenum; Monitoring; Rocks; Stability analysis; Strain; Stress; similar material; simulation experiment; slope; stability (ID#: 15-8627)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7263794&isnumber=7263490
Hard Problems: Predictive Metrics 2015
One of the hard problems in the Science of Security is the development of predictive metrics. The work on this topic cited here was presented in 2015.
Abraham, S.; Nair, S., "Exploitability Analysis Using Predictive Cybersecurity Framework," in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, pp. 317-323, 24-26 June 2015. doi: 10.1109/CYBConf.2015.7175953
Abstract: Managing security is a complex process, and existing research in the field of cybersecurity metrics provides limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We need a new generation of metrics that can enable enterprises to react even faster in order to properly protect mission-critical systems in the midst of both undiscovered and disclosed vulnerabilities. In this paper, we propose a practical and predictive security model for exploitability analysis in a networking environment using stochastic modeling. Our model is built upon the trusted CVSS Exploitability framework, and we analyze how the atomic attributes that make up the exploitability score, namely Access Complexity, Access Vector, and Authentication, evolve over a specific time period. We formally define a nonhomogeneous Markov model which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate. The daily transition-probability matrices in our study are estimated using a combination of Frei's model and the Alhazmi-Malaiya logistic model. An exploitability analysis is conducted to show the feasibility and effectiveness of our proposed approach. Our approach enables enterprises to apply analytics using a predictive cybersecurity model to improve decision making and reduce risk.
Keywords: Markov processes; authorisation; decision making; risk management; access complexity; access vector; authentication; daily transition-probability matrices; decision making; exploitability analysis; nonhomogeneous Markov model; predictive cybersecurity framework; risk reduction; trusted CVSS exploitability framework; vulnerability age; vulnerability discovery rate; Analytical models; Computer security; Markov processes; Measurement; Predictive models; Attack Graph; CVSS; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecycle Model. (ID#: 15-8566)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175953&isnumber=7175890
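The time-dependent dynamics described in this abstract can be sketched as a small nonhomogeneous Markov chain whose daily transition probabilities change with vulnerability age. Everything below (the two-state simplification, the logistic parameters, and the function names) is an illustrative assumption, not the authors' actual model:

```python
import math

def logistic_discovery_rate(t, a=0.08, b=50.0):
    """Alhazmi-Malaiya-style logistic curve; a and b are illustrative parameters."""
    return 1.0 / (1.0 + b * math.exp(-a * t))

def daily_transition_matrix(t):
    """Two-state chain for one vulnerability: state 0 = not exploited,
    state 1 = exploited (absorbing). The daily exploitation probability
    grows with the vulnerability age t, making the chain nonhomogeneous."""
    p = logistic_discovery_rate(t)
    return [[1.0 - p, p],
            [0.0,     1.0]]

def state_distribution(days):
    """Propagate the initial distribution through the daily matrices P(1)..P(days)."""
    dist = [1.0, 0.0]
    for t in range(1, days + 1):
        P = daily_transition_matrix(t)
        dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
    return dist
```

Because the chain is nonhomogeneous, the probability of compromise depends on when a vulnerability was disclosed, not only on how long it has been exposed, which is what distinguishes this class of model from a stationary Markov analysis.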
Yaming Tang; Fei Zhao; Yibiao Yang; Hongmin Lu; Yuming Zhou; Baowen Xu, "Predicting Vulnerable Components via Text Mining or Software Metrics? An Effort-Aware Perspective," in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, pp. 27-36, 3-5 Aug. 2015. doi: 10.1109/QRS.2015.15
Abstract: In order to identify vulnerable software components, developers can take software metrics as predictors or use text mining techniques to build vulnerability prediction models. A recent study reported that text mining based models have higher recall than software metrics based models. However, this conclusion was drawn without considering the sizes of individual components, which affect the code inspection effort needed to determine whether a component is vulnerable. In this paper, we investigate the predictive power of these two kinds of prediction models in the context of effort-aware vulnerability prediction. To this end, we use the same data sets, containing 223 vulnerabilities found in three web applications, to build vulnerability prediction models. The experimental results show that: (1) in the context of effort-aware ranking scenario, text mining based models only slightly outperform software metrics based models, (2) in the context of effort-aware classification scenario, text mining based models perform similarly to software metrics based models in most cases, and (3) most of the effect sizes (i.e., the magnitude of the differences) between these two kinds of models are trivial. These results suggest that, from the viewpoint of practical application, software metrics based models are comparable to text mining based models. Therefore, for developers, software metrics based models are practical choices for vulnerability prediction, as the cost to build and apply these models is much lower.
Keywords: Internet; data mining; software metrics; text analysis; Web applications; effort-aware perspective; effort-aware ranking scenario; effort-aware vulnerability prediction; software metrics based models; text mining; vulnerability prediction models; vulnerable software components; Context; Context modeling; Predictive models; Software; Software metrics; Text mining; effort-aware; prediction; software metrics; text mining; vulnerability (ID#: 15-8567)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272911&isnumber=7272893
Woody, C.; Ellison, R.; Nichols, W., "Predicting Cybersecurity Using Quality Data," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-5, 14-16 April 2015. doi: 10.1109/THS.2015.7225327
Abstract: Within the process of system development and implementation, programs assemble hundreds of different metrics for tracking and monitoring software, such as budgets, costs and schedules, contracts, and compliance reports. Each contributes, directly or indirectly, toward the cybersecurity assurance of the results. The Software Engineering Institute has detailed size, defect, and process data on over 100 software development projects. The projects include a wide range of application domains. Data from five projects identified as successful safety-critical or security-critical implementations were selected for cybersecurity consideration. Material was analyzed to identify a possible correlation between modeling quality and security and to identify potential predictive cybersecurity modeling characteristics. While not a statistically significant sample, this data indicates the potential for establishing benchmarks for ranges of quality performance (for example, defect injection rates, removal rates, and test yields) that provide a predictive capability for cybersecurity results.
Keywords: safety-critical software; security of data; software quality; system monitoring; Software Engineering Institute; cybersecurity assurance; cybersecurity consideration; predictive capability; predictive cybersecurity modeling characteristics; programs assemble; quality data; quality performance; safety-critical implementation; security-critical implementation; software development project; software monitoring; software tracking; system development; Contracts; Safety; Schedules; Software; Software measurement; Testing; Topology; engineering security; quality modeling; security predictions; software assurance (ID#: 15-8568)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225327&isnumber=7190491
Abraham, S.; Nair, S., "A Novel Architecture for Predictive CyberSecurity Using Non-homogenous Markov Models," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 774-781, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.446
Abstract: Evaluating the security of an enterprise is an important step towards securing its system and resources. However, existing research provides limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We still lack effective techniques to accurately measure the predictive security risk of an enterprise, taking into account the dynamic attributes associated with vulnerabilities, which can change over time. It is therefore critical to establish an effective cyber-security analytics strategy to minimize risk and protect critical infrastructure from external threats before an attack even starts. In this paper we present an integrated view of security for computer networks within an enterprise: understanding threats and vulnerabilities and performing analysis to evaluate the current as well as the future security situation of the enterprise. We formally define a non-homogeneous Markov model for quantitative security evaluation using Attack Graphs, which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate, to help visualize the future security state of the network, leading to actionable knowledge and insight. We present experimental results from applying this model on a sample network to demonstrate the practicality of our approach.
Keywords: Markov processes; computer network security; attack graphs; computer networks; cyber security analytics strategy; dynamic attributes; enterprise security goals; external threats; impact attacks; nonhomogeneous Markov model; nonhomogenous Markov Models; predictive cybersecurity; predictive security risk; quantitative security evaluation; time dependent covariates; Biological system modeling; Computer architecture; Computer security; Markov processes; Measurement; Attack Graph; CVSS; Cyber Situational Awareness; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecycle Model (ID#: 15-8569)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345354&isnumber=7345233
Anger, E.; Yalamanchili, S.; Dechev, D.; Hendry, G.; Wilke, J., "Application Modeling for Scalable Simulation of Massively Parallel Systems," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 238-247, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.286
Abstract: Macro-scale simulation has been advanced as one tool for application -- architecture co-design to express operation of exascale systems. These simulations approximate the behavior of system components, trading off accuracy for increased evaluation speed. Application skeletons serve as the vehicle for these simulations, but they require accurately capturing the execution behavior of computation. The complexity of application codes, the heterogeneity of the platforms, and the increasing importance of simulating multiple performance metrics (e.g., execution time, energy) require new modeling techniques. We propose flexible statistical models to increase the fidelity of application simulation at scale. We present performance model validation for several exascale mini-applications that leverage a variety of parallel programming frameworks targeting heterogeneous architectures for both time and energy performance metrics. When paired with these statistical models, application skeletons were simulated on average 12.5 times faster than the original application incurring only 6.08% error, which is 12.5% faster and 33.7% more accurate than baseline models.
Keywords: parallel architectures; parallel programming; power aware computing; principal component analysis; application-architecture codesign; energy performance metrics; exascale systems; flexible statistical model; heterogeneous architectures; massively parallel systems; parallel programming frameworks; performance metrics; performance model; scalable simulation modeling; statistical model; time performance metrics; Analytical models; Computational modeling; Data models; Hardware; Load modeling; Predictive models; Skeleton (ID#: 15-8570)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336170&isnumber=7336120
Daniel R. Thomas, Alastair R. Beresford, Andrew Rice; “Security Metrics for the Android Ecosystem;” SPSM '15 Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, October 2015, Pages 87-98. doi: 10.1145/2808117.2808118
Abstract: The security of Android depends on the timely delivery of updates to fix critical vulnerabilities. In this paper we map the complex network of players in the Android ecosystem who must collaborate to provide updates, and determine that inaction by some manufacturers and network operators means many handsets are vulnerable to critical vulnerabilities. We define the FUM security metric to rank the performance of device manufacturers and network operators, based on their provision of updates and exposure to critical vulnerabilities. Using a corpus of 20,400 devices we show that there is significant variability in the timely delivery of security updates across different device manufacturers and network operators. This provides a comparison point for purchasers and regulators to determine which device manufacturers and network operators provide security updates and which do not. We find that on average 87.7% of Android devices are exposed to at least one of 11 known critical vulnerabilities and, across the ecosystem as a whole, assign a FUM security score of 2.87 out of 10. In our data, Nexus devices do considerably better than average with a score of 5.17; and LG is the best manufacturer with a score of 3.97.
Keywords: android, ecosystems, metrics, updates, vulnerabilities (ID#: 15-8571)
URL: http://doi.acm.org/10.1145/2808117.2808118
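The FUM score aggregates three quantities: the proportion of devices free from known critical vulnerabilities (f), the proportion running the latest shipped version (u), and the mean number of outstanding unfixed vulnerabilities (m). The weighting below is our reconstruction of the paper's 0 to 10 scale; treat the exact weights as a reading of the paper rather than a definitive restatement:

```python
import math

def fum_score(f, u, m):
    """FUM security score on a 0-10 scale (4/3/3 weighting assumed here).
    f: proportion of devices free from known critical vulnerabilities
    u: proportion of devices running the most recent shipped version
    m: mean number of outstanding vulnerabilities left unfixed"""
    return 4.0 * f + 3.0 * u + 3.0 * (2.0 / (1.0 + math.exp(m)))
```

Under this reconstruction, a perfectly maintained ecosystem (f = 1, u = 1, m = 0) scores 10, while a growing backlog of unfixed vulnerabilities drives the third term toward zero.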
Cormac Herley, Wolter Pieters; “’If You Were Attacked, You'd Be Sorry’: Counterfactuals as Security Arguments;” NSPW '15 Proceedings of the 2015 New Security Paradigms Workshop, September 2015, Pages 112-123. doi: 10.1145/2841113.2841122
Abstract: Counterfactuals (or what-if scenarios) are often employed as security arguments, but the dos and don'ts of their use are poorly understood. They are useful to discuss vulnerability of systems under threats that haven't yet materialized, but they can also be used to justify investment in obscure controls. In this paper, we shed light on the role of counterfactuals in security, and present conditions under which counterfactuals are legitimate arguments, linked to the exclusion or inclusion of the threat environment in security metrics. We provide a new paradigm for security reasoning by deriving essential questions to ask in order to decide on the acceptability of specific counterfactuals as security arguments, which can serve as a basis for further study in this field. We conclude that counterfactuals are a necessary evil in security, which should be carefully controlled.
Keywords: adversarial risk, control strength, counterfactuals, security arguments, security metrics, threat environment (ID#: 15-8572)
URL: http://doi.acm.org/10.1145/2841113.2841122
Yang Liu, Jing Zhang, Armin Sarabi, Mingyan Liu, Manish Karir, Michael Bailey; “Predicting Cyber Security Incidents Using Feature-Based Characterization of Network-Level Malicious Activities;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 3-9. doi: 10.1145/2713579.2713582
Abstract: This study offers a first step toward understanding the extent to which we may be able to predict cyber security incidents (which can be of one of many types) by applying machine learning techniques and using externally observed malicious activities associated with network entities, including spamming, phishing, and scanning, each of which may or may not have direct bearing on a specific attack mechanism or incident type. Our hypothesis is that when viewed collectively, malicious activities originating from a network are indicative of the general cleanness of a network and how well it is run, and that furthermore, collectively they exhibit fairly stable and thus predictive behavior over time. To test this hypothesis, we utilize two datasets in this study: (1) a collection of commonly used IP address-based/host reputation blacklists (RBLs) collected over more than a year, and (2) a set of security incident reports collected over roughly the same period. Specifically, we first aggregate the RBL data at a prefix level and then introduce a set of features that capture the dynamics of this aggregated temporal process. A comparison between the distribution of these feature values taken from the incident dataset and from the general population of prefixes shows distinct differences, suggesting their value in distinguishing between the two while also highlighting the importance of capturing dynamic behavior (second order statistics) in the malicious activities. These features are then used to train a support vector machine (SVM) for prediction. Our preliminary results show that we can achieve reasonably good prediction performance over a forecasting window of a few months.
Keywords: network reputation, network security, prediction, temporal pattern, time-series data (ID#: 15-8573)
URL: http://doi.acm.org/10.1145/2713579.2713582
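The feature extraction step this abstract describes, turning a prefix's aggregated daily blacklist counts into features that capture both magnitude and dynamics, can be sketched as below. The feature names and the example count series are invented; the paper then trains an SVM on such features (scikit-learn's `svm.SVC` would be a natural fit), which is omitted here to keep the sketch dependency-free:

```python
from statistics import mean, pstdev

def rbl_features(daily_counts):
    """Summarize a prefix's daily blacklisted-IP counts into features capturing
    both overall magnitude and temporal dynamics (the paper's emphasis on
    second-order statistics)."""
    deltas = [abs(b - a) for a, b in zip(daily_counts, daily_counts[1:])]
    return {
        "avg_magnitude": mean(daily_counts),                              # overall intensity
        "active_ratio": sum(c > 0 for c in daily_counts) / len(daily_counts),
        "volatility": pstdev(daily_counts),                               # spread over time
        "avg_change": mean(deltas) if deltas else 0.0,                    # day-to-day churn
    }

# Invented example series: a mostly clean prefix vs. a persistently listed one
clean = rbl_features([0, 0, 1, 0, 0, 0, 2, 0])
dirty = rbl_features([40, 55, 30, 80, 60, 75, 20, 90])
```

The hypothesis in the paper is that these aggregate behaviors are stable over time, which is what makes them usable as predictors months ahead of an incident.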
Patrick Morrison, Kim Herzig, Brendan Murphy, Laurie Williams; “Challenges with Applying Vulnerability Prediction Models;” HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security, April 2015, Article No. 4. doi: 10.1145/2746194.2746198
Abstract: Vulnerability prediction models (VPM) are believed to hold promise for providing software engineers guidance on where to prioritize precious verification resources to search for vulnerabilities. However, while Microsoft product teams have adopted defect prediction models, they have not adopted vulnerability prediction models (VPMs). The goal of this research is to measure whether vulnerability prediction models built using standard recommendations perform well enough to provide actionable results for engineering resource allocation. We define 'actionable' in terms of the inspection effort required to evaluate model results. We replicated a VPM for two releases of the Windows Operating System, varying model granularity and statistical learners. We reproduced binary-level prediction precision (~0.75) and recall (~0.2). However, binaries often exceed 1 million lines of code, too large to practically inspect, and engineers expressed preference for source file level predictions. Our source file level models yield precision below 0.5 and recall below 0.2. We suggest that VPMs must be refined to achieve actionable performance, possibly through security-specific metrics.
Keywords: churn, complexity, coverage, dependencies, metrics, prediction, vulnerabilities (ID#: 15-8574)
URL: http://doi.acm.org/10.1145/2746194.2746198
Gargi Saha, T. Pranav Bhat, K. Chandrasekaran; “A Generic Approach to Security Evaluation for Multimedia Data;” ICCCT '15 Proceedings of the Sixth International Conference on Computer and Communication Technology 2015, September 2015, Pages 333-338. doi: 10.1145/2818567.2818669
Abstract: Beginning with a critical analysis of existing multimedia metrics, this paper addresses their drawbacks in streaming media by introducing alternate metrics, backed by analytical proofs of correctness and accuracy and by comparative simulation against the earlier metrics, to justify the improvements made in security judgement techniques.
Keywords: Luminance Similarity Score, Multimedia, Multimedia security metrics, Security Metrics, Security evaluation metrics (ID#: 15-8575)
URL: http://doi.acm.org/10.1145/2818567.2818669
Mohammad Noureddine, Ken Keefe, William H. Sanders, Masooda Bashir; “Quantitative Security Metrics with Human in the Loop;” HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security, April 2015, Article No. 21. doi: 10.1145/2746194.2746215
Abstract: The human factor is often regarded as the weakest link in cybersecurity systems. The investigation of several security breaches reveals an important impact of human errors in exhibiting security vulnerabilities. Although security researchers have long observed the impact of human behavior, few improvements have been made in designing secure systems that are resilient to the uncertainties of the human element. In this work, we summarize the state of the art work in human cybersecurity research, and present the Human-Influenced Task-Oriented (HITOP) formalism for modeling human decisions in security systems. We also provide a roadmap for future research. We aim at developing a simulation tool that allows modeling and analysis of security systems in light of the uncertainties of human behavior.
Keywords: human models, quantitative security metrics, security modeling (ID#: 15-8576)
URL: http://doi.acm.org/10.1145/2746194.2746215
Shouling Ji, Shukun Yang, Ting Wang, Changchang Liu, Wei-Han Lee, Raheem Beyah; “PARS: A Uniform and Open-source Password Analysis and Research System;” ACSAC 2015 Proceedings of the 31st Annual Computer Security Applications Conference, December 2015, Pages 321-330. doi: 10.1145/2818000.2818018
Abstract: In this paper, we introduce an open-source and modular password analysis and research system, PARS, which provides a uniform, comprehensive and scalable research platform for password security. To the best of our knowledge, PARS is the first such system that enables researchers to conduct fair and comparable password security research. PARS contains 12 state-of-the-art cracking algorithms, 15 intra-site and cross-site password strength metrics, 8 academic password meters, and 15 of the 24 commercial password meters from the top-150 websites ranked by Alexa. Also, detailed taxonomies and large-scale evaluations of the PARS modules are presented in the paper.
Keywords: Passwords, cracking, evaluation, measurement, metrics (ID#: 15-8577)
URL: http://doi.acm.org/10.1145/2818000.2818018
Sofia Charalampidou, Apostolos Ampatzoglou, Paris Avgeriou; “Size and Cohesion Metrics as Indicators of the Long Method Bad Smell: An Empirical Study;” PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering; October 2015, Article No. 8. doi: 10.1145/2810146.2810155
Abstract: Source code bad smells are usually resolved through the application of well-defined solutions, i.e., refactoring. In the literature, software metrics are used as indicators of the existence and prioritization of resolving bad smells. In this paper, we focus on the long method smell (i.e. one of the most frequent and persistent bad smells) that can be resolved by the extract method refactoring. Until now, the identification of long methods or extract method opportunities has been performed based on cohesion, size or complexity metrics. However, the empirical validation of these metrics has exhibited relatively low accuracy with regard to their capacity to indicate the existence of long methods or extract method opportunities. Thus, we empirically explore the ability of size and cohesion metrics to predict the existence and the refactoring urgency of long method occurrences, through a case study on java open-source methods. The results of the study suggest that one size and four cohesion metrics are capable of characterizing the need and urgency for resolving the long method bad smell, with a higher accuracy compared to the previous studies. The obtained results are discussed by providing possible interpretations and implications to practitioners and researchers.
Keywords: Long method, case study, cohesion, metrics, size (ID#: 15-8578)
URL: http://doi.acm.org/10.1145/2810146.2810155
Haining Chen, Omar Chowdhury, Jing Chen, Ninghui Li, Robert Proctor; “Towards Quantification of Firewall Policy Complexity;” HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security, April 2015, Article No. 18. doi: 10.1145/2746194.2746212
Abstract: Developing metrics for quantifying the security and usability aspects of a system has been of constant interest to the cybersecurity research community. Such metrics have the potential to provide valuable insight on security and usability of a system and to aid in the design, development, testing, and maintenance of the system. Working towards the overarching goal of such metric development, in this work we lay down the groundwork for developing metrics for quantifying the complexity of firewall policies. We are particularly interested in capturing the human perceived complexity of firewall policies. To this end, we propose a potential workflow that researchers can follow to develop empirically-validated, objective metrics for measuring the complexity of firewall policies. We also propose three hypotheses that capture salient properties of a firewall policy which constitute the complexity of a policy for a human user. We identify two categories of human-perceived policy complexity (i.e., syntactic complexity and semantic complexity), and for each of them propose potential complexity metrics for firewall policies that exploit two of the hypotheses we suggest. The current work can be viewed as a stepping stone for future research on development of such policy complexity metrics.
Keywords: firewall policies, policy complexity metrics (ID#: 15-8579)
URL: http://doi.acm.org/10.1145/2746194.2746212
Niketa Gupta, Deepali Singh, Ashish Sharma; “Identifying Effective Software Metrics for Categorical Defect Prediction Using Structural Equation Modeling;” WCI '15 Proceedings of the Third International Symposium on Women in Computing and Informatics, April 2015, Pages 59-65. doi: 10.1145/2791405.2791484
Abstract: Software defect prediction is a pre-eminent area of software engineering that has attracted substantial attention over the last decades. Identifying defects in the early stages of software development improves the quality of the software system and reduces the effort needed to maintain the quality of the software product. Many research studies have constructed prediction models that consider the CK metrics suite and object-oriented software metrics. In developing such models, considering interactions among the metrics is not common practice. This paper presents an empirical evaluation in which several software metrics were investigated in order to identify, for each defect category, the effective set of metrics that can significantly improve the defect prediction model built for that category. For each metric, the Pearson correlation coefficient with the number of defects in each category was calculated, and a stepwise regression model was subsequently constructed to obtain the reduced metric set for each defect category. We have also proposed a novel approach for modeling the defects using structural equation modeling, which further validates our work: structural models were built for each defect category, and the results were validated.
Keywords: Defect Prediction, Software Metrics, Stepwise regression model, Structural Equation Modeling (ID#: 15-8580)
URL: http://doi.acm.org/10.1145/2791405.2791484
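The first step this abstract describes, correlating each metric with a category's defect counts before building a stepwise regression model, can be sketched as follows. The metric values and defect counts are invented, and the simple |r| ranking is a stand-in for full stepwise regression:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-class values of two CK metrics and one defect category
wmc = [5, 12, 3, 20, 8, 15]    # weighted methods per class
dit = [1, 2, 1, 3, 2, 2]       # depth of inheritance tree
defects = [1, 4, 0, 7, 2, 5]   # defect count for the category

# Rank metrics by the strength of their correlation with this defect category
metrics = {"wmc": wmc, "dit": dit}
ranked = sorted(metrics, key=lambda m: abs(pearson(metrics[m], defects)), reverse=True)
```

In the paper's workflow, the metrics surviving this screening for each category would then enter a stepwise regression, and the resulting reduced sets would be validated with structural equation models.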
Xiaoyuan Jing, Fei Wu, Xiwei Dong, Fumin Qi, Baowen Xu; “Heterogeneous Cross-Company Defect Prediction by Unified Metric Representation and CCA-Based Transfer Learning;” ESEC/FSE 2015 Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, August 2015, Pages 496-507. doi: 10.1145/2786805.2786813
Abstract: Cross-company defect prediction (CCDP) learns a prediction model by using training data from one or multiple projects of a source company and then applies the model to the target company data. Existing CCDP methods are based on the assumption that the data of source and target companies should have the same software metrics. However, for CCDP, the source and target company data is usually heterogeneous; namely, the metrics used and the size of the metric set differ between the two companies' data. We refer to CCDP in this scenario as the heterogeneous CCDP (HCCDP) task. In this paper, we aim to provide an effective solution for HCCDP. We propose a unified metric representation (UMR) for the data of source and target companies. The UMR consists of three types of metrics, i.e., the common metrics of the source and target companies, source-company specific metrics and target-company specific metrics. To construct the UMR for source company data, the target-company specific metrics are set to zeros, while for the UMR of the target company data, the source-company specific metrics are set to zeros. Based on the unified metric representation, we for the first time introduce canonical correlation analysis (CCA), an effective transfer learning method, into CCDP to make the data distributions of source and target companies similar. Experiments on 14 public heterogeneous datasets from four companies indicate that: 1) for HCCDP with partially different metrics, our approach significantly outperforms state-of-the-art CCDP methods; 2) for HCCDP with totally different metrics, our approach obtains comparable prediction performance in contrast with within-project prediction results. The proposed approach is effective for HCCDP.
Keywords: Heterogeneous cross-company defect prediction (HCCDP), canonical correlation analysis (CCA), common metrics, company-specific metrics, unified metric representation (ID#: 15-8581)
URL: http://doi.acm.org/10.1145/2786805.2786813
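The unified metric representation (UMR) described in the abstract above can be illustrated with a small sketch. The metric names here are hypothetical, and the subsequent CCA step is omitted; the sketch shows only the zero-filling construction the abstract describes:

```python
def to_umr(row, common, source_specific, target_specific):
    """Map one module's metrics (a dict) into the unified metric
    representation [common | source-specific | target-specific].
    Metrics the company does not collect are zero-filled, as in
    the UMR construction described in the abstract."""
    order = common + source_specific + target_specific
    return [float(row.get(m, 0.0)) for m in order]

# Hypothetical metric sets: both companies share loc_ratio; the
# source also collects fan_in, the target cyclomatic complexity (cc).
common = ["loc_ratio"]
source_only, target_only = ["fan_in"], ["cc"]

src_row = {"loc_ratio": 0.4, "fan_in": 3.0}   # source data: cc zero-filled
umr = to_umr(src_row, common, source_only, target_only)
print(umr)  # [0.4, 3.0, 0.0]
```

With both companies' rows embedded in this shared space, a transfer-learning step such as CCA can then be applied to align the two distributions.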
Christoffer Rosen, Ben Grawi, Emad Shihab; “Commit Guru: Analytics and Risk Prediction of Software Commits;” ESEC/FSE 2015 Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, August 2015, Pages 966-969. doi: 10.1145/2786805.2803183
Abstract: Software quality is one of the most important research sub-areas of software engineering. Hence, a plethora of research has focused on the prediction of software quality. Much of the software analytics and prediction work has proposed metrics, models and novel approaches that can predict quality with high levels of accuracy. However, adoption of such techniques remains low; one of the reasons for this low adoption of the current analytics and prediction technique is the lack of actionable and publicly available tools. We present Commit Guru, a language agnostic analytics and prediction tool that identifies and predicts risky software commits. Commit Guru is publicly available and is able to mine any GIT SCM repository. Analytics are generated at both, the project and commit levels. In addition, Commit Guru automatically identifies risky (i.e., bug-inducing) commits and builds a prediction model that assess the likelihood of a recent commit introducing a bug in the future. Finally, to facilitate future research in the area, users of Commit Guru can download the data for any project that is processed by Commit Guru with a single click. Several large open source projects have been successfully processed using Commit Guru. Commit Guru is available online at commit.guru. Our source code is also released freely under the MIT license.
Keywords: Risky Software Commits, Software Analytics, Software Metrics, Software Prediction (ID#: 15-8582)
URL: http://doi.acm.org/10.1145/2786805.2803183
Junhyung Moon, Kyoungwoo Lee; “Spatio-Temporal Visual Security Metric for Secure Mobile Video Applications;” MoVid '15 Proceedings of the 7th ACM International Workshop on Mobile Video, March 2015, Pages 9-14. doi: 10.1145/2727040.2727047
Abstract: According to the widespread mobile devices and wearable devices, various mobile video applications are emerging. Some of those applications contain sensitive data such as military information so that they need to be protected from anonymous intruders. Thus, several video encryption techniques have been proposed. Accordingly, it has become essential to evaluate the visual security of encrypted videos. Several techniques have attempted to evaluate the visual security in the spatial domain but failed to capture it in the temporal domain. Thus, we present a temporal visual security metric and consequently propose a spatio-temporal visual security metric by combining ours with an existing metric which evaluates the spatial visual security. Our experimental results demonstrate that our proposed metrics appropriately evaluate temporal distortion as well as spatial distortion of encrypted videos while ensuring high correlation with subjective evaluation scores. Further we examine the tradeoff between the energy consumption for mobile video encryption techniques and visual security of encrypted videos. This tradeoff study is useful in determining a right encryption technique which satisfies the energy budget for secure mobile video applications.
Keywords: metric, spatio-temporal, spatio-temporal metric, video encryption, visual quality, visual security (ID#: 15-8583)
URL: http://doi.acm.org/10.1145/2727040.2727047
Niketa Gupta, Deepali Panwar, Ashish Sharma; “Modeling Structural Model for Defect Categories Based On Software Metrics for Categorical Defect Prediction;” ICCCT '15 Proceedings of the Sixth International Conference on Computer and Communication Technology 2015, September 2015, Pages 46-50. doi: 10.1145/2818567.2818576
Abstract: Software Defect prediction is the pre-eminent area of software engineering which has witnessed huge importance over last decades. The identification of defects in the early stages of software development not only improve the quality of the software system but also reduce the time, cost and effort associated in maintaining the quality of software product. The quality of the software can be best assessed by software metrics. To evaluate the quality of the software, a number of software metrics have been proposed. Many research studies have been conducted to construct the prediction model that considers the CK (Chidamber and Kemerer) metrics suite and object oriented software metrics. For the prediction model development, consideration of interaction among the metrics is not a common practice. This paper presents the empirical evaluation in which several software metrics were investigated in order to identify the effective set of the metrics for each defect category which can significantly improve the defect prediction model made for each defect category. For each of the metrics, Pearson correlation coefficient with the number of defect categories were calculated and subsequently stepwise regression model is constructed for each defect category to predict the set of the metrics that are the good indicator of each defect category. We have proposed a novel approach for modelling the defects using structural equation modeling further which validates our work. Structural models were built for each defect category using structural equation modeling which claims that results are validated.
Keywords: Defect Prediction, Software Metrics, Stepwise regression model, Structural Equation Modeling (ID#: 15-8584)
URL: http://doi.acm.org/10.1145/2818567.2818576
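The first step the abstract above describes, correlating each metric with per-category defect counts, can be sketched directly. The data below is a toy example (not the paper's), and the stepwise-regression and structural-equation-modeling stages are not shown:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: a CK-style metric (e.g. WMC) per module versus
# defect counts for one defect category.
wmc     = [3, 7, 2, 9, 5]
defects = [1, 4, 0, 5, 2]
r = pearson(wmc, defects)
print(round(r, 3))
```

Metrics whose correlation with a defect category is strong would then be candidates for that category's stepwise regression model.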
Meriem Laifa, Samir Akrouf, Ramdane Maamri; “Online Social Trust: an Overview;” IPAC '15 Proceedings of the International Conference on Intelligent Information Processing, Security and Advanced Communication, November 2015, Article No. 9. doi: 10.1145/2816839.2816912
Abstract: There is a wealth of information created every day through computer-mediated communications. Trust is an important component to sustain successful interactions and to filter the overflow of information. The concept of trust is widely used in computer science in various contexts and for different aims. This variety can confuse or mislead new researchers who are interested in trust concept but not familiar enough with it to find relevant related work to their projects. Therefore, we give in this paper an overview of online trust by focusing on its social aspect, and we classify important reviewed work in an attempt to guide new researchers in this domain and facilitate the first steps of their research projects. Based on previous trust surveys, we considered the following criteria: (1) trust dimension and its research purpose, (2) the trusted context and (3) the application domain in which trust is applied.
Keywords: Trust, classification, metrics, online social network (ID#: 15-8585)
URL: http://doi.acm.org/10.1145/2816839.2816912
Ben Stock, Stephan Pfistner, Bernd Kaiser, Sebastian Lekies, Martin Johns; “From Facepalm to Brain Bender: Exploring Client-Side Cross-Site Scripting;” CCS '15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 1419-1430. doi: 10.1145/2810103.2813625
Abstract: Although studies have shown that at least one in ten Web pages contains a client-side XSS vulnerability, the prevalent causes for this class of Cross-Site Scripting have not been studied in depth. Therefore, in this paper, we present a large-scale study to gain insight into these causes. To this end, we analyze a set of 1,273 real-world vulnerabilities contained on the Alexa Top 10k domains using a specifically designed architecture, consisting of an infrastructure which allows us to persist and replay vulnerabilities to ensure a sound analysis. In combination with a taint-aware browsing engine, we can therefore collect important execution trace information for all flaws. Based on the observable characteristics of the vulnerable JavaScript, we derive a set of metrics to measure the complexity of each flaw. We subsequently classify all vulnerabilities in our data set accordingly to enable a more systematic analysis. In doing so, we find that although a large portion of all vulnerabilities have a low complexity rating, several incur a significant level of complexity and are repeatedly caused by vulnerable third-party scripts. In addition, we gain insights into other factors related to the existence of client-side XSS flaws, such as missing knowledge of browser-provided APIs, and find that the root causes for Client-Side Cross-Site Scripting range from unaware developers to incompatible first- and third-party code.
Keywords: analysis, client-side XSS, complexity metrics (ID#: 15-8586)
URL: http://doi.acm.org/10.1145/2810103.2813625
Daniel Vecchiato, Marco Vieira, Eliane Martins; “A Security Configuration Assessment for Android Devices;” SAC '15 Proceedings of the 30th Annual ACM Symposium on Applied Computing, April 2015, Pages 2299-2304. doi: 10.1145/2695664.2695679
Abstract: The wide spreading of mobile devices, such as smartphones and tablets, and their always-advancing capabilities makes them an attractive target for attackers. This, together with the fact that users frequently store critical personal information in such devices and that many organizations currently allow employees to use their personal devices to access the enterprise information infrastructure and applications, makes the assessment of the security of mobile devices a key issue. This paper proposes an approach supported by a tool that allows assessing the security of Android devices based on the user-defined settings, which are known to be a key source of security vulnerabilities. The tool automatically extracts 41 settings from the mobile devices under testing, 14 of which defined and proposed in this work and the remaining adapted from the well-known CIS benchmarks. The paper discusses the settings that are analyzed, describes the overall architecture of the tool, and presents a preliminary evaluation that demonstrates the importance of this type of tools as a foundation towards the assessment of the security of mobile devices.
Keywords: android security, mobile device, security assessment (ID#: 15-8587)
URL: http://doi.acm.org/10.1145/2695664.2695679
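The kind of settings check the tool above performs can be sketched as a comparison of extracted values against a recommended baseline. The setting names and recommended values here are illustrative, not the paper's actual 41-item list or the CIS benchmark wording:

```python
# Illustrative baseline of recommended values (hypothetical items,
# not the actual CIS benchmark entries).
BASELINE = {
    "screen_lock_enabled": True,
    "adb_enabled": False,
    "unknown_sources_allowed": False,
}

def assess(extracted, baseline=BASELINE):
    """Return the settings whose extracted value deviates from the
    recommended baseline value."""
    return {name: extracted.get(name)
            for name, want in baseline.items()
            if extracted.get(name) != want}

device = {"screen_lock_enabled": True,
          "adb_enabled": True,            # deviates from baseline
          "unknown_sources_allowed": False}
print(assess(device))  # {'adb_enabled': True}
```

A real assessment would extract the settings from the device (e.g. over a debug bridge) rather than from a dict, but the deviation report has this shape.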
Hard Problems: Resilient Security Architectures 2015 |
Resilient security architectures are a hard problem in the Science of Security. These scholarly articles about research into resilient security architectures were presented in 2015. A great deal of research useful to resilience is coming from the literature on control theory. In addition to the Science of Security community, much of this work is also relevant to the SURE project.
Serageldin, A.; Krings, A., "A Resilient Real-Time Traffic Control System," in Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on, pp. 2869-2876, 15-18 Sept. 2015. doi: 10.1109/ITSC.2015.461
Abstract: This paper describes a resilient control system operating in a critical infrastructure. The system is a real-time weather responsive system that accesses weather information that provides near-real-time atmospheric and pavement observation data that is used to adapt traffic signal timing to increase safety. Since the system controls part of a safety critical application, survivability and resilience considerations must be an integral part of the system architecture. In order to provide adaptation to system behavior as the result of faults or malicious acts, an architecture is presented that monitors itself and adapts its behavior in real-time. The main theoretical contributions are the combination and extension of approaches introduced in previous work. The theory of certifying executions is extended by three concepts: the detection of dependency violations, exception triggers, and sensor analysis are considered; a dual-bound threshold approach for detecting off-nominal executions is introduced; and profiling is augmented with the concept of behavior sets. Extensive evidence of the effectiveness of the solutions based on a one-year observation of the system in action is presented.
Keywords: control engineering computing; intelligent transportation systems; real-time systems; road traffic control; safety-critical software; software architecture; ITS; critical infrastructure; intelligent transportation system; real-time traffic control system; resilient control system; safety critical application; system architecture; weather information access; weather responsive system; Control systems; Meteorology; Monitoring; Rabbits; Real-time systems; Software; Timing (ID#: 15-8588)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313553&isnumber=7312804
Ali, K.; Nguyen, H.X.; Quoc-Tuan Vien; Shah, P., "Disaster Management Communication Networks: Challenges and Architecture Design," in Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, pp. 537-542, 23-27 March 2015. doi: 10.1109/PERCOMW.2015.7134094
Abstract: In the past decades, serious natural disasters such as earthquakes, tsunamis, floods, and storms have occurred frequently worldwide with catastrophic consequences. They also helped us understand that organising and maintaining effective communication during the disaster are vital for the execution of rescue operations. As communication resources are often entirely or partially damaged by disasters, the demand for information and communication technology (ICT) services explosively increases just after the events. These situations instigate serious network traffic congestions and physical damage of ICT equipments and emergency ICT networks if uprooted as a pre-disaster network system. This article proposes a network architecture design by integrating the existing network infrastructure with the reinforcement of layers based techniques and cloud processing concepts. This resilient network architecture allows the ICT services to be launched within a reasonable short period of time of development. Furthermore, communication in a disaster is sustained by implementing a three-tier fortification of the overall network architecture which would also minimize the physical and logical redundancy for resilient and flexible ICT resources. As cloud processing will work as a parallel reinforced infrastructure, the proposed approach and network design will give new hope for the developing countries to consider cloud computing services for effectiveness and better dependability on the architecture to save ICT and humanitarian network at the time of disaster.
Keywords: cloud computing; emergency management; telecommunication congestion control; telecommunication network management; telecommunication traffic; ICT equipments; ICT services explosively; cloud computing services; cloud processing concepts; communication resources; emergency ICT networks; information and communication technology services; layers based techniques; logical redundancy; natural disasters; network traffic congestions; parallel reinforced infrastructure; physical redundancy; pre-disaster network system; rescue operations; three-tier fortification; Cities and towns; Cloud computing; Computer architecture; Conferences; Earthquakes; Emergency services; Reliability (ID#: 15-8589)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134094&isnumber=7133953
Lecocke, M.B.; Blount, J.; Blount, J., "Use of Formal Modeling to Automatically Generate Correct Fault Detection and Response Methods," in Aerospace Conference, 2015 IEEE, pp. 1-7, 7-14 March 2015. doi: 10.1109/AERO.2015.7119245
Abstract: This paper describes an approach to fault tolerant design and implementation that uses a formal model to automatically generate fault detection and response methods. The approach is designed for resource-constrained embedded systems with high reliability requirements such as manned or critical space assets. The formal model-based approach offers several advantages over a conventional approach based on manual failure mode analysis (FMA). The primary benefits are increased confidence in the fault tolerance of the design and in the corresponding implementation. Increased confidence in the design is achieved because both the system architecture and reliability requirements are precisely described in a single formal model written in Answer Set Prolog (ASP). The readability of ASP facilitates precise communication between system engineers and stakeholders, thus increasing the likelihood that design errors are corrected early in the development cycle. Increased confidence in the implementation is achieved because it is automatically generated using the model and is guaranteed to satisfy the specified reliability requirements. Importantly, the control flow of the resulting implementation is straightforward and readable. Besides increased confidence, our approach is resilient to architecture and requirements changes. In our experience, once the model is updated it takes less than 10 minutes to re-generate the implementation and download to the target.
Keywords: PROLOG; aerospace computing; embedded systems; fault diagnosis; formal specification; logic programming; software architecture; software fault tolerance; Answer Set Prolog; automatic correct fault detection method generation; automatic correct fault response method generation; control flow; critical space assets; design errors; development cycle; fault tolerance; formal modeling; high reliability requirements; manned space assets; manual failure mode analysis; resource-constrained embedded systems; system architecture; Biographies; Biological system modeling; Computers; Manuals (ID#: 15-8590)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119245&isnumber=7118873
Wenchao Li; Gerard, L.; Shankar, N., "Design and Verification of Multi-Rate Distributed Systems," in Formal Methods and Models for Codesign (MEMOCODE), 2015 ACM/IEEE International Conference on, pp. 20-29, 21-23 Sept. 2015. doi: 10.1109/MEMCOD.2015.7340463
Abstract: Multi-rate systems arise naturally in distributed settings where computing units execute periodically according to their local clocks and communicate among themselves via message passing. We present a systematic way of designing and verifying such systems with the assumption of bounded drift for local clocks and bounded communication latency. First, we capture the system model through an architecture definition language (called RADL) that has a precise model of computation and communication. The RADL paradigm is simple, compositional, and resilient against denial-of-service attacks. Our radler build tool takes the architecture definition and individual local functions as inputs and generate executables for the overall system as output. In addition, we present a modular encoding of multi-rate systems using calendar automata and describe how to verify real-time properties of these systems using SMT-based infinite-state bounded model checking. Lastly, we discuss our experiences in applying this methodology to building high-assurance cyber-physical systems.
Keywords: distributed processing; formal verification; specification languages; RADL language; RADL paradigm; SMT-based infinite-state bounded model checking; architecture definition language; bounded communication latency; calendar automata; cyber-physical systems; denial-of-service attacks; multirate distributed systems; radler build tool; Clocks; Computational modeling; Computer architecture; Cyber-physical systems; Real-time systems; Robots; Sensors (ID#: 15-8591)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7340463&isnumber=7340456
Antsaklis, P., "The Quest for Autonomy. Are We There Yet? Are CPS a Way to Build Autonomous Systems?," in American Control Conference (ACC), 2015, pp. 5080-5080, 1-3 July 2015. doi: 10.1109/ACC.2015.7172128
Abstract: Summary form only given. Achieving autonomy has been a dream for many years. The term autonomous system has had different meanings depending on who and when it was used. Attempts to build autonomous vehicles by major corporations and grand challenges by government funding agencies have captured the public's imagination. How much closer are we today to this dream than we were 25 years ago? The issues surrounding autonomy together with the needed properties that make a system autonomous will be discussed and put in context. How do we go about realizing these properties in a safe, secure manner, to obtain a resilient system that keeps performing well over the lifetime of the control system? Could CPS provide an approach towards building autonomous systems? What would autonomous control architectures look like? Solutions to some problems will be proposed. Concrete approaches, that use CPS and energy-like concepts such as passivity/dissipativity to preserve properties, will be briefly discussed.
Keywords: control systems; CPS; autonomous control architecture; autonomous system; autonomous vehicle; autonomy; control system; cyber physical system; dissipativity; government funding agency; passivity; public imagination; Architecture; Concrete; Context; Control systems; Government; Mobile robots (ID#: 15-8592)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7172128&isnumber=7170700
Kemal, M.; Iov, F.; Olsen, R.; Le Fevre, T.; Apostolopoulos, C., "On-Line Configuration of Network Emulator for Intelligent Energy System Testbed Applications," in AFRICON, 2015, pp. 1-4, 14-17 Sept. 2015. doi: 10.1109/AFRCON.2015.7331979
Abstract: Intelligent energy networks (or Smart Grids) provide efficient solutions for a grid integrated with near-real-time communication technologies between various grid assets in power generation, transmission and distribution systems. The design of a communication network associated with intelligent power system involves detailed analysis of its communication requirements, a proposal of the appropriate protocol architecture, the choice of appropriate communication technologies for each case study, and a means to support heterogeneous communication technology management system. This paper discusses a mechanism for on-line configuration and monitoring of heterogeneous communication technologies implemented at the smart energy system testbed of Aalborg university. It proposes a model with three main components, a network emulator used to emulate the communication scenarios using KauNet, graphical user interface for visualizing, configuring and monitoring of the emulated scenarios and a network socket linking the graphic server and network emulation server on-line. Specifically, our focus area is to build a model that gives us ability to look at some of the challenges on implementing inter-operable and resilient Smart Grid networks and how the current state of the art communication technologies are employed for smart control of energy distribution grids.
Keywords: graphical user interfaces; power engineering computing; smart power grids; KauNet; communication technologies; energy distribution grids; graphic server; graphical user interface; heterogeneous communication technology; intelligent energy networks; intelligent energy system testbed applications; near-real-time communication technologies; network emulation; network emulator; network socket linking; on-line configuration; power distribution systems; power generation systems; power transmission systems; smart control; smart grid networks; Emulation; Graphics; Mathematical model; Quality of service; Servers; Smart grids; Smart Control; Smart Grid; interoperability; renewable energy; wireline and wireless communications (ID#: 15-8593)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7331979&isnumber=7331857
Vega, Augusto; Lin, Chung-Ching; Swaminathan, Karthik; Buyuktosunoglu, Alper; Pankanti, Sharathchandra; Bose, Pradip, "Resilient, UAV-Embedded Real-Time Computing," in Computer Design (ICCD), 2015 33rd IEEE International Conference on, pp. 736-739, 18-21 Oct. 2015. doi: 10.1109/ICCD.2015.7357189
Abstract: In this paper, we propose a hierarchical computational system architecture to support the target domain of realtime mobile computing in the context of unmanned aerial vehicles (UAVs). The overall architectural vision includes support for system resilience in the presence of uncertainties in the operational environment of surveillance UAVs. We report measurement-based results that are obtained from a UAV proxy demonstration apparatus. The apparatus consists of a Raspberry Pi (RPi) board that serves as an on-board UAV computer, working with support from a laptop that serves as the on-ground computing infrastructure where an operator "consumes" video information received from the UAV. We quantify the gap between the on-board UAV camera frame rate (input) and the on-ground operator-observed frame rate (output) for a specialized class of computer vision applications germane to the UAV-based aerial surveillance domain. The goal is to keep the frame rate observed by the ground operator as close (or ideally equal) to the on-board UAV camera frame rate (i.e. to preserve the real-time aspect) despite the unstable bandwidth availability in the channel connecting both ends. The proposed hierarchical approach significantly outperforms two considered baselines: one in which computation takes place entirely on the UAV computer and another in which computation takes place entirely on the ground. This improved performance is due to a more balanced resource sharing between the on-board UAV computer and UAV-to-ground communication channel. Later, we show how the observed frame rate improves when the RPi board is replaced with an NVIDIA Jetson TK1 board. Based on the observations gleaned from these "proxy" experiments, we sketch the fundamentals of our ongoing work in model-based predictive analysis of resilient "UAV swarm" computational architectures of the future.
Keywords: Bandwidth; Cameras; Economic indicators; Portable computers; Real-time systems; Streaming media; Surveillance (ID#: 15-8594)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357189&isnumber=7357071
Salato, Maurizio; Vig, Harry; Pauplis, Robert, "Flexible, Modular and Universal Power Conversion for Small Cell Stations in Distributed Systems," in PCIM Europe 2015; International Exhibition and Conference for Power Electronics, Intelligent Motion, Renewable Energy and Energy Management; Proceedings of, pp. 1-7, 19-20 May 2015. doi: (not provided)
Abstract: This article lays out power system architecture for Small Cell and Distributed Antenna Systems applications. The exponential increase in mobile data traffic forces the mobile telecom infrastructure to be distributed within diverse coverage areas ranging from heavily urbanized environments to rural settings. At the same time, high level of availability that is provided by classic landlines is expected from the mobile network, which in turns raise the question of how reliable and resilient is the power system that supplies energy to a large number of small, distributed base-stations. Guidelines and benefit analysis of a modular, power component based, distribution and conversion approach are introduced along with implementation results.
Keywords: (not provided) (ID#: 15-8595)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149029&isnumber=7148817
Douziech, P.-E.; Curtis, B., "Cross-Technology, Cross-Layer Defect Detection in IT Systems -- Challenges and Achievements," in Complex Faults and Failures in Large Software Systems (COUFLESS), 2015 IEEE/ACM 1st International Workshop on, pp. 21-26, 23-23 May 2015. doi: 10.1109/COUFLESS.2015.11
Abstract: Although critical for delivering resilient, secure, efficient, and easily changed IT systems, cross-technology, cross- layer quality defect detection in IT systems still faces hurdles. Two hurdles involve the absence of an absolute target architecture and the difficulty of apprehending multi-component anti-patterns. However, Static analysis and measurement technologies are now able to both consume contextual input and detect system-level anti-patterns. This paper will provide several examples of the information required to detect system-level anti-patterns using examples from the Common Weakness Enumeration repository maintained by MITRE Corp.
Keywords: program diagnostics; program testing; software architecture; software quality; IT systems; MITRE Corp; common weakness enumeration repository; cross-layer quality defect detection; cross-technology defect detection; measurement technologies; multicomponent antipatterns; static analysis; system-level antipattern detection; Computer architecture; Java; Organizations Reliability; Security; Software; Software measurement; CWE; IT systems; software anti-patterns; software architecture; software pattern detection; software quality measures; structural quality (ID#: 15-8596)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181478&isnumber=7181467
Chakraborty, M.; Chaki, N., "An IPv6 Based Hierarchical Address Configuration Scheme for Smart Grid," in Applications and Innovations in Mobile Computing (AIMoC), 2015, pp. 109-116, 12-14 Feb. 2015. doi: 10.1109/AIMOC.2015.7083838
Abstract: Smart Grid (SG) is an intelligent and adaptive energy delivery network that combines the traditional power grid and IT communication network. It aims to provide more efficient, better fault-resilient and reliable energy support. Robust communication architecture is the key that differentiates Smart Grid from the traditional energy delivery system. IP enabled devices are necessary to build such network spread over a large geographic region and connecting devices starting from common household electrical appliances up to power generation units. With the huge number of devices including the smart electrical appliances, increasingly being used in homes, IPv6 become an obvious choice for Smart Grid for its bandwidth. However, one of the main challenges of connecting IPv6 with Smart Grid will be address configuration. In this paper, a new IPv6 address configuration schema for Smart Grid has been proposed. The proposed schema is consistent with the demands of large, dynamic, hierarchical smart grid network. The schema improves accessibility and scalability in terms of configuring a huge number of devices in the smart grid, thereby, fully extracting the potential of 128-bit IPv6 addressing mode.
Keywords: IP networks; power engineering computing; smart power grids; IPv6 based hierarchical address configuration scheme; adaptive energy delivery network; hierarchical topology; intelligent energy delivery network; smart grid; IP networks; Organizations; Routing; Smart grids; Smart meters; Topology; Wireless sensor networks; IPv6 addressing; Smart Grid; address configuration; hierarchical topology (ID#: 15-8597)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7083838&isnumber=7083813
Kumar, S.; Das, N.; Islam, S., "High Performance Communication Redundancy in a Digital Substation Based on IEC 62439-3 with a Station Bus Configuration," in Power Engineering Conference (AUPEC), 2015 Australasian Universities, pp. 1-5, 27-30 Sept. 2015. doi: 10.1109/AUPEC.2015.7324838
Abstract: High speed communication is critical in a digital substation from protection, control and automation perspectives. Although International Electro-technical Commission (IEC) 61850 standard has proven to be a reliable guide for the substation automation and communication systems, yet it has few shortcomings in offering redundancies in the protection architecture, which has been addressed better in IEC 62439-3 standard encompassing Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR). Due to single port failure, data losses and interoperability issues related to multi-vendor equipment, IEC working committee had to look beyond IEC 61850 standard. The enhanced features in a Doubly Attached Node components based on IEC 62439-3 provides redundancy in protection having two active frames circulating data packets in the ring. These frames send out copies in the ring and should one of the frame is lost, the other copy manages to reach the destination node via an alternate path, ensuring flawless data transfer at a significant faster speed using multi-vendor equipment and fault resilient circuits. The PRP and HSR topologies provides higher performance in a digitally protected substation and promise better future over the IEC 61850 standard due to its faster processing capabilities, increased availability and minimum delay in data packet transfer and wireless communication in the network. This paper exhibits the performance of PRP and HSR topologies focusing on the redundancy achievement within the network and at the end nodes of a station bus ring architecture, based on IEC 62439-3.
Keywords: IEC standards; redundancy; substation automation; substation protection; telecommunication networks; HSR topology; IEC 62439-3; International Electrotechnical Commission 61850 standard; PRP topology; data loss; data packet transfer; digital substation; doubly attached node components; fault resilient circuit; high performance communication redundancy; high speed communication; high-availability seamless redundancy; parallel redundancy protocol; single port failure; standard encompassing parallel redundancy protocol; station bus configuration; substation automation and communication systems; wireless communication; IEC Standards; Network topology; Peer-to-peer computing; Redundancy; Substations; Topology; Ethernet; IEC 61850; IEC 62439-3; PRP and HSR (ID#: 15-8598)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324838&isnumber=7324780
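The duplicate-discard logic at a PRP/HSR receiving node can be illustrated with a short sketch. This is not the IEC 62439-3 reference implementation — real Doubly Attached Nodes use a bounded drop window rather than an unbounded set, and the class and method names here are illustrative — but it shows the core idea: keep the first copy of each (source, sequence number) pair and discard the redundant twin arriving over the other path.

```python
class DuplicateDiscard:
    """Accept the first copy of a frame; discard the redundant copy
    that arrives over the second PRP LAN or the other HSR ring path.

    Illustrative sketch only: a real implementation bounds the memory
    with a drop window instead of remembering every pair forever.
    """

    def __init__(self):
        self._seen = set()  # (source, seq_no) pairs already delivered

    def accept(self, source, seq_no):
        """Return True if this frame should be delivered upward,
        False if it is a duplicate of an already-delivered frame."""
        key = (source, seq_no)
        if key in self._seen:
            return False  # second copy from the redundant path
        self._seen.add(key)
        return True
```

Because both copies carry the same sequence number, the receiver delivers exactly one frame per transmission even when one of the two paths fails entirely — which is the "seamless" property the abstract describes.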
Alzahrani, A.; DeMara, R.F., "Hypergraph-Cover Diversity for Maximally-Resilient Reconfigurable Systems," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp.1086-1092, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.294
Abstract: Scaling trends of reconfigurable hardware (RH) and their design flexibility have proliferated their use in dependability-critical embedded applications. Although their reconfigurability can enable significant fault tolerance, due to the complexity of execution time in their design flow, in-field reconfigurability can be infeasible and thus limit such potential. This need is addressed by developing a graph and set theoretic approach, named hypergraph-cover diversity (HCD), as a preemptive design technique to shift the dominant costs of resiliency to design-time. In particular, union-free hypergraphs are exploited to partition the reconfigurable resources pool into highly separable subsets of resources, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Two novel scalable algorithms to construct union-free hypergraphs are proposed and described. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform demonstrates a statistically significant increase in fault tolerance and area efficiency when using proposed work compared to commonly-used modular redundancy approaches.
Keywords: data compression; embedded systems; field programmable gate arrays; graph theory; image coding; motion estimation; reconfigurable architectures; HCD; Motion-JPEG image compression core; RH; Xilinx 7-series-based FPGA hardware platform; area efficiency; dependability-critical embedded applications; design flexibility; execution time; fault tolerance; hypergraph-cover diversity; in-field reconfigurability; maximally-resilient reconfigurable systems; preemptive design technique; reconfigurable hardware; reconfigurable resource partitioning; reconfiguration-based resilience; resiliency costs; routing phases; runtime placement; separable resource subsets; set theoretic approach; statistical analysis; synthesized application netlist; union-free hypergraphs; Circuit faults; Embedded systems; Fault tolerance; Fault tolerant systems; Field programmable gate arrays; Hardware; Runtime; Area Efficiency; Design Diversity; FPGAs; Fault Tolerance; Hypergraphs; Reconfigurable Systems; Reliability (ID#: 15-8599)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336313&isnumber=7336120
Gomez, K.; Hourani, A.; Goratti, L.; Riggio, R.; Kandeepan, S.; Bucaille, I., "Capacity Evaluation of Aerial LTE Base-Stations for Public Safety Communications," in Networks and Communications (EuCNC), 2015 European Conference on, pp. 133-138, June 29 2015-July 2 2015. doi: 10.1109/EuCNC.2015.7194055
Abstract: Aerial-Terrestrial communication networks, able to provide rapidly-deployable and resilient communications capable of offering broadband connectivity, are emerging as a suitable solution for public safety scenarios. During natural disasters or unexpected events, terrestrial infrastructure can be seriously damaged or disrupted due to physical destruction of network components, disruption in subsystem interconnections and/or network congestion. In this context, Aerial-Terrestrial communication networks are intended to provide temporary large coverage with the provision of broadband services at the disaster area. This paper studies the performance of Aerial UMTS Long Term Evolution (LTE) base stations in terms of coverage and capacity. The network model relies on an appropriate channel model; LTE 3GPP specifications and well-known schedulers are used. The results show the effect of temperature, bandwidth, and scheduling discipline on the system capacity, while coverage is investigated in different public safety scenarios.
Keywords: 3G mobile communication; Long Term Evolution; aircraft communication; broadband networks; disasters; telecommunication scheduling; wireless channels; LTE 3GPP specification; aerial LTE base station capacity evaluation; aerial UMTS long term evolution base station; aerial-terrestrial communication network congestion; channel model; natural disaster; public safety communication; Bandwidth; Computer architecture; Indexes; Long Term Evolution; Phase shift keying; Safety; Signal to noise ratio; Aerial network infrastructure; Long Term Evolution (LTE); emergency communications; low altitude platforms (ID#: 15-8600)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194055&isnumber=7194024
Hoefling, M.; Heimgaertner, F.; Menth, M.; Katsaros, K.V.; Romano, P.; Zanni, L.; Kamel, G., "Enabling Resilient Smart Grid Communication over the Information-Centric C-DAX Middleware," in Networked Systems (NetSys), 2015 International Conference and Workshops on, pp. 1-8, 9-12 March 2015. doi: 10.1109/NetSys.2015.7089080
Abstract: Limited scalability, reliability, and security of today’s utility communication infrastructures are main obstacles to the deployment of smart grid applications. The C-DAX project aims at providing and investigating a communication middleware for smart grids to address these problems, applying the information-centric networking and publish/subscribe paradigm. We briefly describe the C-DAX architecture, and extend it with a flexible resilience concept, based on resilient data forwarding and data redundancy. Different levels of resilience support are defined, and their underlying mechanisms are described. Experiments show fast and reliable performance of the resilience mechanism.
Keywords: middleware; power engineering computing; smart power grids; communication middleware; data redundancy; flexible resilience concept; information-centric C-DAX middleware; information-centric networking; publish/subscribe paradigm; resilient data forwarding; resilient smart grid communication; smart grids; utility communication infrastructures; Delays; Monitoring; Reliability; Resilience; Security; Subscriptions; Synchronization (ID#: 15-8601)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7089080&isnumber=7089054
Viguier, R.; Lin, C.-C.; Swaminathan, K.; Vega, A.; Buyuktosunoglu, A.; Pankanti, S.; Bose, P.; Akbarpour, H.; Bunyak, F.; Palaniappan, K.; Seetharaman, G., "Resilient Mobile Cognition: Algorithms, Innovations, and Architectures," in Computer Design (ICCD), 2015 33rd IEEE International Conference on, pp.728-731, 18-21 Oct. 2015. doi: 10.1109/ICCD.2015.7357187
Abstract: The importance of the internet-of-things (IOT) is now an established reality. With that backdrop, the phenomenal emergence of cameras/sensors mounted on unmanned aerial, ground and marine vehicles (UAVs, UGVs, UMVs) and body worn cameras is a notable new development. The swarms of cameras and real-time computing thereof are at the heart of new technologies like connected cars, drone-based city-wide surveillance and precision agriculture, etc. Smart computer vision algorithms (with or without dynamic learning) that enable object recognition and tracking, supported by baseline video content summarization or 2D/3D image reconstruction of the scanned environment are at the heart of such new applications. In this article, we summarize our recent innovations in this space. We focus primarily on algorithms and architectural design considerations for video summarization systems.
Keywords: Cameras; Computer architecture; Image segmentation; Metadata; Motion estimation; Streaming media; Tensile stress (ID#: 15-8602)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357187&isnumber=7357071
Sommer, Matthias; Tomforde, Sven; Haehner, Joerg, "A Systematic Study on Forecasting of Traffic Flows with Artificial Neural Networks," in Architecture of Computing Systems. Proceedings, ARCS 2015 - The 28th International Conference on, vol., no., pp. 1-8, 24-27 March 2015. doi: (not provided)
Abstract: Traffic flow is highly dynamic and complex to foresee; it therefore offers an interesting application domain for Organic Computing. Most traffic management systems try to adapt their traffic signalisation to the current traffic flow patterns, but for an optimal and fast adaptation, traffic flow forecasts are needed. A resilient traffic management system needs the ability to forecast traffic flows in order to pro-actively adapt the signalisation with the goal of decreasing or even preventing negative impacts on the traffic network. Artificial Neural Networks have been shown to be a powerful tool for forecasting traffic flows. This paper presents a systematic study of Artificial Neural Networks and shows which variants and parameter settings are most profitable in which situations.
Keywords: (not provided) (ID#: 15-8603)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107101&isnumber=7107092
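The kind of neural forecaster studied above can be sketched in a few dozen lines of plain Python. The architecture and hyper-parameters below (one tanh hidden layer, a sliding input window, per-sample gradient descent) are illustrative choices, not the configurations the paper evaluates; its contribution is precisely the systematic comparison of such variants and settings.

```python
import math
import random

def train_forecaster(series, window=3, hidden=4, epochs=200, lr=0.05):
    """Train a tiny one-hidden-layer network to predict series[t]
    from the previous `window` values. Returns a predict(x) closure.

    Sketch only: a fixed toy architecture, no regularisation, no
    validation split — the knobs the paper studies systematically.
    """
    random.seed(0)  # deterministic initial weights for the sketch
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(window)]
          for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    samples = [(series[t - window:t], series[t])
               for t in range(window, len(series))]
    for _ in range(epochs):
        for x, y in samples:
            # forward pass
            h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
                 for row, b in zip(w1, b1)]
            out = sum(wo * hi for wo, hi in zip(w2, h)) + b2
            err = out - y
            # backward pass (stochastic gradient descent)
            for i in range(hidden):
                grad_h = err * w2[i] * (1.0 - h[i] ** 2)
                w2[i] -= lr * err * h[i]
                for j in range(window):
                    w1[i][j] -= lr * grad_h * x[j]
                b1[i] -= lr * grad_h
            b2 -= lr * err

    def predict(x):
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return sum(wo * hi for wo, hi in zip(w2, h)) + b2

    return predict
```

Trained on a short synthetic flow series, one-step-ahead predictions from this sketch should comfortably beat a trivial constant predictor, which is the minimum bar any forecasting variant in such a study has to clear.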
Januario, F.; Santos, A.; Palma, L.; Cardoso, A.; Gil, P., "A Distributed Multi-Agent Approach for Resilient Supervision over a IPv6 WSAN Infrastructure," in Industrial Technology (ICIT), 2015 IEEE International Conference on, pp. 1802-1807, 17-19 March 2015. doi: 10.1109/ICIT.2015.7125358
Abstract: Wireless Sensor and Actuator Networks have become an important area of research. They provide flexibility and low operational and maintenance costs, and they are inherently scalable. In the realm of the Internet of Things, the majority of devices are able to communicate with one another, and in some cases they can be deployed with an IP address. This feature is undoubtedly very beneficial in wireless sensor and actuator network applications, such as monitoring and control systems. However, this kind of communication infrastructure is rather challenging, as it can compromise the overall system performance due to several factors, namely outliers, intermittent communication breakdowns or security issues. In order to improve the overall resilience of the system, this work proposes a distributed hierarchical multi-agent architecture implemented over an IPv6 communication infrastructure. The Contiki Operating System and the RPL routing protocol were used together to provide IPv6-based communication between nodes and an external network. Experimental results collected from a laboratory IPv6-based WSAN test-bed show the relevance and benefits of the proposed methodology in coping with communication loss between nodes and the server.
Keywords: Internet of Things; multi-agent systems; routing protocols; wireless sensor networks; Contiki operating system; IP address; IPv6 WSAN infrastructure; IPv6 communication infrastructure; Internet of Things; RPL routing protocol; distributed hierarchical multiagent architecture; distributed multiagent approach; external network; intermittent communication; resilient supervision; wireless sensor and actuator networks; Actuators; Electric breakdown; Monitoring; Peer-to-peer computing; Routing protocols; Security (ID#: 15-8604)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125358&isnumber=7125066
Heimgaertner, F.; Hoefling, M.; Vieira, B.; Poll, E.; Menth, M., "A Security Architecture for the Publish/Subscribe C-DAX Middleware," in Communication Workshop (ICCW), 2015 IEEE International Conference on, pp. 2616-2621, 8-12 June 2015. doi: 10.1109/ICCW.2015.7247573
Abstract: The limited scalability, reliability, and security of today's utility communication infrastructures are main obstacles for the deployment of smart grid applications. The C-DAX project aims at providing a cyber-secure publish/subscribe middleware tailored to the needs of smart grids. C-DAX provides end-to-end security, and scalable and resilient communication among participants in a smart grid. This work presents the C-DAX security architecture, and proposes different key distribution mechanisms. Security properties are defined for control plane and data plane communication, and their underlying mechanisms are explained. The presented work is partially implemented in the C-DAX prototype and will be deployed in a field trial.
Keywords: middleware; power engineering computing; power system security; security of data; smart power grids; software architecture; C-DAX project; control plane communication; cyber-secure publish/subscribe middleware; data plane communication; end-to-end security; key distribution mechanisms; publish/subscribe C-DAX middleware; reliability; resilient communication; scalability; scalable communication; security architecture; security properties; smart grid applications; utility communication infrastructures; Authentication; Encryption; Middleware; Public key; Smart grids (ID#: 15-8605)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247573&isnumber=7247062
Spalla, E.S.; Mafioletti, D.R.; Liberato, A.B.; Rothenberg, C.; Camargos, L.; da S Villaca, R.; Martinello, M., "Resilient Strategies to SDN: An Approach Focused on Actively Replicated Controllers," in Computer Networks and Distributed Systems (SBRC), 2015 XXXIII Brazilian Symposium on, pp. 246-259, 18-22 May 2015. doi: 10.1109/SBRC.2015.37
Abstract: Software Defined Networking (SDN) is based on the separation of control and data planes. The SDN controller, although logically centralized, should be effectively distributed for high availability. Since the specification of OpenFlow 1.2, there are new features that allow the switches to communicate with multiple controllers that can play different roles -- master, slave, and equal. However, these roles alone are not sufficient to guarantee a resilient control plane, and the actual implementation remains an open challenge for SDN designers. In this paper, we explore the OpenFlow roles for the design of resilient SDN architectures relying on multiple controllers. As a proof of concept, a strategy of active replication was implemented in the Ryu controller, using the OpenReplica service to ensure consistent state among the distributed controllers. The prototype was tested with commodity RouterBoard/MikroTik switches and evaluated for latency in failure recovery and switch migration under different workloads. We observe a set of trade-offs in real experiments with varying workloads at both the data and control planes.
Keywords: distributed control; formal specification; software defined networking; OpenFlow 1.2 specification; OpenReplica service; Ryu controller; SDN architectures; SDN controller; active replication; actively replicated controllers; commodity MikroTik switch; commodity RouterBoard switch; distributed controllers; failure recovery; multicontrollers; resilient strategies; software defined networking; switch migration; Computer architecture; Computer networks; Control systems; Process control; Prototypes; Routing protocols; Software; Network; OpenFlow; Resilient; SDN (ID#: 15-8606)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7320532&isnumber=7320494
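The master/slave role handling the paper builds on can be sketched abstractly. The election rule below (lowest live controller id wins) is a hypothetical placeholder for the consensus-backed coordination that OpenReplica actually provides; it only illustrates how OpenFlow roles get reassigned when the current master fails, so that exactly one live controller is master at a time.

```python
class Controller:
    """Minimal stand-in for a replicated SDN controller instance.
    Roles mirror OpenFlow 1.2+: 'master' writes to the switch,
    'slave' only listens. (Names here are illustrative.)"""

    def __init__(self, cid):
        self.cid = cid
        self.role = "slave"
        self.alive = True

def elect_master(controllers):
    """Promote the lowest-id live controller to master and demote
    the rest to slave; return the new master (None if all dead).

    A real deployment replaces this with a consensus service so all
    replicas agree on the outcome despite partitions."""
    live = [c for c in controllers if c.alive]
    if not live:
        return None
    master = min(live, key=lambda c: c.cid)
    for c in live:
        c.role = "master" if c is master else "slave"
    return master
```

The failure-recovery latency the paper measures is essentially the time between the master's failure being detected and an election like this one taking effect on the switches.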
Xinxin Jin; Soyeon Park; Tianwei Sheng; Rishan Chen; Zhiyong Shan; Yuanyuan Zhou, "FTXen: Making Hypervisor Resilient to Hardware Faults on Relaxed Cores," in High Performance Computer Architecture (HPCA), 2015 IEEE 21st International Symposium on, pp. 451-462, 7-11 Feb. 2015. doi: 10.1109/HPCA.2015.7056054
Abstract: As CMOS technology scales, the increasingly smaller transistor components are susceptible to a variety of in-field hardware errors. Traditional redundancy techniques to deal with the increasing error rates are expensive and energy inefficient. To address this emerging challenge, many researchers have recently proposed the idea of relaxed hardware design that exposes errors to software. For such relaxed hardware to become a reality, it is crucially important for system software, such as the virtual machine hypervisor, to be resilient to hardware faults. To address this fundamental software challenge in enabling relaxed hardware design, we are making a major effort to restructure an important part of system software, namely the virtual machine hypervisor, to be resilient to faulty cores. A fault in a relaxed core can only affect those virtual machines (and applications) running on that core, while the hypervisor and other virtual machines remain intact and continue providing services. We have redesigned every component of Xen, a large, popular virtual machine hypervisor, to achieve such error resiliency. This paper presents our design and implementation of the restructured Xen (which we refer to as FTXen). Our experimental evaluation on real systems shows that FTXen adds minimal application overhead and scales well to different ratios of reliable and relaxed cores. Our results with random fault injection show that FTXen successfully survives all injected hardware faults.
Keywords: fault tolerant computing; virtual machines; CMOS technology; FTXen; error resiliency; faulty cores; hardware faults; in-field hardware errors; random fault injection; relaxed cores; relaxed hardware design; system software; transistor components; virtual machine hypervisor; Data structures; Hardware; Reliability; System software; Virtual machine monitors; Virtual machining (ID#: 15-8607)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056054&isnumber=7056013
Oboril, F.; Ebrahimi, M.; Kiamehr, S.; Tahoori, M.B., "Cross-Layer Resilient System Design Flow," in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, pp. 2457-2460, 24-27 May 2015. doi: 10.1109/ISCAS.2015.7169182
Abstract: Accelerated transistor aging is one of the major unreliability sources at nano-scale technology nodes. Aging causes the circuit delay to increase and eventually leads to timing failures. Since aging is dependent on various factors such as temperature and workload, the aging rates of different components of the circuit are non-uniform. However, timing failures start to occur once the most-aged part fails to meet the timing constraint. In this paper, we present a cross-layer aging mitigation methodology from device level up to architecture level by balancing the delays of different parts of the design at the desired lifetime rather than at design time. Our results show that the proposed approach can efficiently prolong the system lifetime with a negligible impact on area and power.
Keywords: delay circuits; failure analysis; integrated circuit design; timing circuits; accelerated transistor aging; architecture level; circuit delay; cross-layer aging mitigation methodology; cross-layer resilient system design flow; device level; nanoscale technology nodes; nonuniform circuit; timing constraint; timing failures; Aging; Delays; Logic gates; Microprocessors; Pipelines; Transistors (ID#: 15-8608)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169182&isnumber=7168553
Ghazarian, A., "A Theory of Software Complexity," in General Theory of Software Engineering (GTSE), 2015 IEEE/ACM 4th SEMAT Workshop on a, pp. 29-32, 18-18 May 2015. doi: 10.1109/GTSE.2015.11
Abstract: The need for a theory of software complexity to serve as a rigorous, scientific foundation for software engineering has long been recognized. However, unfortunately, the complexity measures proposed thus far have only resulted in rough heuristics and rules of thumb. In this paper, we propose a new information theoretic measure of software complexity that, unlike previous measures, captures the volume of design information in software modules. By providing proof outlines for a number of theorems that collectively represent our current understanding and intuitions about software complexity, we demonstrate that this new, information-based formulation of software complexity is not only capable of explaining our current understanding of software complexity, but also is resilient to the factors that cause inaccuracies in previous measures.
Keywords: information theory; software architecture; software metrics; design information; information theoretic measure; scientific foundation; software complexity; software engineering; software modules; Complexity theory; Current measurement; Software measurement; Software systems; Volume measurement; Design Decisions; Information Volume; Metrics; Software Complexity; Software Design; Theory (ID#: 15-8609)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169392&isnumber=7169380
Ukwandu, E.; Buchanan, W.J.; Fan, L.; Russell, G.; Lo, O., "RESCUE: Resilient Secret Sharing Cloud-Based Architecture," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 872-879, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.459
Abstract: This paper presents an architecture (RESCUE) of a system that is capable of implementing: a keyless encryption method, self-destruction of data within a time frame without user intervention, and break-glass data recovery, with built-in failover protection. It aims to overcome many of the current problems within Cloud-based infrastructures, such as the loss of private keys, and inherent failover protection. The architecture uses a secret share method with: an Application Platform, Proxy Servers with Routers, and a Metadata Server. These interact within a multi-cloud environment to provide a robust, secure and dependable system, and showcase a new direction in an improved cloud computing environment. It aims to ensure user privacy and reduce the potential for data loss, as well as reducing denial-of-service outages within the cloud, with failover protection for stored data. In order to assess the best secret sharing method for the architecture, the paper outlines a range of experiments on the performance footprint of the most relevant secret sharing schemes.
Keywords: cloud computing; cryptography; RESCUE; application platform; denial-of-service outages; improved cloud computing environment; keyless encryption method; metadata server; proxy servers; resilient secret sharing cloud-based architecture; secret sharing method; Cloud computing; Computer architecture; Electronic voting; Encryption; Servers; break-glass data recovery; failover protection; multi-cloud; secret shares; self-destruct and keyless encryption (ID#: 15-8610)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345367&isnumber=7345233
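The abstract does not name the specific secret-share methods benchmarked, but the canonical example of such a scheme is Shamir's threshold sharing, sketched below over a prime field. The field size, function names, and interface are illustrative, not RESCUE's implementation: a random degree-(k-1) polynomial hides the secret in its constant term, any k of the n shares reconstruct it via Lagrange interpolation, and fewer than k reveal nothing.

```python
import random

PRIME = 2 ** 127 - 1  # a Mersenne prime; all arithmetic is mod PRIME

def split(secret, n, k):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def f(x):  # evaluate the polynomial by Horner's rule
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret = f(0) by Lagrange interpolation over
    any k distinct shares (x_i, y_i)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Schemes in this family differ in exactly the dimensions the paper's experiments measure — share size, split/reconstruct cost, and whether shares alone suffice for recovery — which is what makes the performance footprint comparison meaningful.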
![]() |
Hard Problems: Scalability and Composability 2015 |
This bibliographical collection on scalability and composability is part of a series on the five identified hard problems in the Science of Security. The works cited here were published or presented in 2015. All are early access articles.
Genge, B.; Haller, P.; Kiss, I., "Cyber-Security-Aware Network Design of Industrial Control Systems," in Systems Journal, IEEE, vol. PP, no. 99, pp. 1-12, August 2015, doi: 10.1109/JSYST.2015.2462715
Abstract: The pervasive adoption of traditional information and communication technologies hardware and software in industrial control systems (ICS) has given birth to a unique technological ecosystem encapsulating a variety of objects ranging from sensors and actuators to video surveillance cameras and generic PCs. Despite their invaluable advantages, these advanced ICS create new design challenges, which expose them to significant cyber threats. To address these challenges, an innovative ICS network design technique is proposed in this paper to harmonize the traditional ICS design requirements pertaining to strong architectural determinism and real-time data transfer with security recommendations outlined in the ISA-62443.03.02 standard. The proposed technique accommodates security requirements by partitioning the network into security zones and by provisioning critical communication channels, known as security conduits, between two or more security zones. The ICS network design is formulated as an integer linear programming (ILP) problem that minimizes the cost of the installation. Real-time data transfer limitations and security requirements are included as constraints imposing the selection of specific traffic paths, the selection of routing nodes, and the provisioning of security zones and conduits. The security requirements of cyber assets denoted by traffic and communication endpoints are determined by a cyber attack impact assessment technique proposed in this paper. The sensitivity of the proposed techniques to different parameters is evaluated in a first scenario involving the IEEE 14-bus model and in a second scenario involving a large network topology based on generated data. Experimental results demonstrate the efficiency and scalability of the ILP model.
Keywords: Bandwidth; Cascading style sheets; Hardware; Process control; Real-time systems; Security; Sensors; ISA-62443; Industrial control systems (ICS); network design; security conduit; security zone (ID#: 15-8628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210183&isnumber=4357939
Hong, J.B.; Kim, D.S., "Assessing the Effectiveness of Moving Target Defenses using Security Models," in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 11 June 2015. doi: 10.1109/TDSC.2015.2443790
Abstract: Cyber crime is a developing concern, where criminals are targeting valuable assets and critical infrastructures within networked systems, causing a severe socio-economic impact on enterprises and individuals. Adopting Moving Target Defense (MTD) helps thwart cyber attacks by continuously changing the attack surface. There are numerous MTD techniques proposed in various domains (e.g., virtualized network, wireless sensor network), but there is still a lack of methods to assess and compare the effectiveness of them. Security models, such as an Attack Graph (AG), provide a formal method of analyzing the security, but incorporating MTD techniques in those security models has not been studied. In this paper, we incorporate MTD techniques into a security model, namely a Hierarchical Attack Representation Model (HARM), to assess the effectiveness of them. In addition, we use importance measures (IMs) for deploying MTD techniques to enhance the scalability. Finally, we compare the scalability of AG and HARM when deploying MTD techniques, as well as changes in performance and security in our experiments.
Keywords: Analytical models; Computational modeling; Internet; Linux; Redundancy; Scalability; Security; Attack Graph; Attack Tree; Importance Measures; Moving Target Defense; Security Analysis (ID#: 15-8629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122306&isnumber=4358699
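A toy version of the kind of model the paper works with: an attack graph encoded as an adjacency dict, with MTD effectiveness judged by how many attack paths survive a change to the network. The graph, the path-enumeration routine, and the "fraction of paths removed" metric are all deliberate simplifications of AG/HARM analysis, for illustration only.

```python
def attack_paths(graph, src, dst, _path=None):
    """Enumerate all simple paths from the attacker's entry point
    `src` to the target asset `dst` in a directed attack graph.
    `graph` maps each host to the hosts reachable/exploitable from it."""
    path = (_path or []) + [src]
    if src == dst:
        return [path]
    found = []
    for nxt in graph.get(src, []):
        if nxt not in path:  # keep paths simple (no revisits)
            found.extend(attack_paths(graph, nxt, dst, path))
    return found

def mtd_effectiveness(before, after, src, dst):
    """Crude effectiveness metric: fraction of attack paths that an
    MTD move (e.g., shuffling a connection) eliminates."""
    b = len(attack_paths(before, src, dst))
    a = len(attack_paths(after, src, dst))
    return 1.0 if b == 0 else (b - a) / b
```

Exhaustive path enumeration is exponential in general, which is exactly why the paper layers the model hierarchically (HARM) and uses importance measures to decide where to deploy MTD first.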
Pak, W.; Choi, Y., "High Performance and High Scalable Packet Classification Algorithm for Network Security Systems," in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, June 2015. doi: 10.1109/TDSC.2015.2443773
Abstract: Packet classification is a core function in network and security systems; hence, hardware-based solutions, such as packet classification accelerator chips or T-CAM (Ternary Content Addressable Memory), have been widely adopted for high-performance systems. With the rapid improvement of general hardware architectures and the growing popularity of multi-core multi-threaded processors, software-based packet classification algorithms are attracting considerable attention, owing to their high flexibility in satisfying various industrial requirements for security and network systems. For high classification speed, these algorithms internally use large tables, whose size increases exponentially with the ruleset size; consequently, they cannot be used with large rulesets. To overcome this problem, we propose a new software-based packet classification algorithm that simultaneously supports high scalability and fast classification performance by merging partition decision trees in a search table. While most partitioning-based packet classification algorithms show good scalability at the cost of low classification speed, our algorithm shows very high classification speed, irrespective of the number of rules, with small tables and short table building time. Our test results confirm that the proposed algorithm enables network and security systems to support heavy traffic in the most effective manner.
Keywords: Buildings; Classification algorithms; Decision trees; Heuristic algorithms; Partitioning algorithms; Scalability; Security; Packet classification; cache-aware table structure; integrated inter- and intra-table search; partitioning (ID#: 15-8628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120939&isnumber=4358699
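The baseline that table- and tree-based classifiers improve on is a first-match linear scan of the ruleset. A minimal sketch follows; the field set and rule encoding are simplified placeholders (real classifiers match full 5-tuples with binary prefixes and ranges), and the paper's contribution is replacing this O(rules) scan with merged partition decision trees in a search table.

```python
def matches(rule, pkt):
    """A rule matches if the packet's addresses fall under the rule's
    address prefixes and its destination port lies in the rule's range.
    (String prefixes stand in for real binary IP prefixes.)"""
    lo, hi = rule["dport"]
    return (pkt["src"].startswith(rule["src_prefix"])
            and pkt["dst"].startswith(rule["dst_prefix"])
            and lo <= pkt["dport"] <= hi)

def classify(rules, pkt):
    """First-match linear search: cost grows with the ruleset size,
    which is exactly what decision-tree classifiers avoid."""
    for rule in rules:
        if matches(rule, pkt):
            return rule["action"]
    return "deny"  # default action when no rule matches
```

Because rule order encodes priority, any faster data structure must return the same first-matching rule the scan would — the correctness condition every table-based scheme has to preserve.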
Nick, M.; Alizadeh-Mousavi, O.; Cherkaoui, R.; Paolone, M., "Security Constrained Unit Commitment With Dynamic Thermal Line Rating," in Power Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1-12, July 2015. doi: 10.1109/TPWRS.2015.2445826
Abstract: The integration of the dynamic line rating (DLR) of overhead transmission lines (OTLs) in power systems security constrained unit commitment (SCUC) potentially enhances the overall system security as well as its technical/economic performance. This paper proposes a scalable and computationally efficient approach aimed at integrating the DLR in the SCUC problem. The paper analyzes the case of the SCUC with AC load flow constraints. The AC-optimal power flow (AC-OPF) is linearized and incorporated into the problem. The proposed multi-period formulation takes into account a realistic model to represent the different terms appearing in the Heat-Balance Equation (HBE) of the OTL conductors. In order to include the HBE in the OPF, a relaxation is proposed for the heat gain associated with resistive losses, while the inclusion of linear approximations is investigated for both convection and radiation heat losses. A decomposition process relying on the Benders decomposition is used in order to break down the problem and incorporate a set of contingencies representing both generator and line outages. The effects of different linearizations, as well as the time step discretization of the HBE, are investigated. The scalability of the proposed method is verified using the IEEE 118-bus test system.
Keywords: Conductors; Heating; Mathematical model; Reactive power; Security; Wind speed; AC optimal power flow; Benders decomposition; Heat Balance Equation (HBE); convex formulation (ID#: 15-8629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160786&isnumber=4374138
Unal, E.; Savas, E., "On Acceleration and Scalability of Number Theoretic Private Information Retrieval," in Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, July 2015. doi: 10.1109/TPDS.2015.2456021
Abstract: We present scalable and parallel versions of Lipmaa's computationally-private information retrieval (CPIR) scheme [20], which provides log-squared communication complexity. In the proposed schemes, instead of the binary decision diagrams utilized in the original CPIR, we employ an octal-tree-based approach, in which non-sink nodes have eight child nodes. Using octal trees offers two advantages: i) a serial implementation of the proposed scheme in software is faster than the original scheme, and ii) its bandwidth usage becomes less than the original scheme when the number of items in the data set is moderately high (e.g., 4,096 for an 80-bit security level using the Damgård-Jurik cryptosystem). In addition, we present a highly optimized parallel algorithm for shared-memory multi-core/processor architectures, which minimizes the number of synchronization points between the cores. We show that the parallel implementation is about 50 times faster than the serial implementation for a data set with 4,096 items on an eight-core machine. Finally, we propose a hybrid algorithm that scales the CPIR scheme to larger data sets with small overhead in bandwidth complexity. We demonstrate that the hybrid scheme based on octal trees can lead to more than two orders of magnitude faster parallel implementations than serial implementations based on binary trees. Comparison with the original as well as the other schemes in the literature reveals that our scheme is the best in terms of bandwidth requirement.
Keywords: Bandwidth; Complexity theory; Databases; Encryption; Protocols; Servers; Number Theoretic Private Information Retrieval; Parallel Algorithms; Privacy; Security (ID#: 15-8630)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155582&isnumber=4359390
Rabieh, K.; Mahmoud, M.; Akkaya, K.; Tonyali, S., "Scalable Certificate Revocation Schemes for Smart Grid AMI Networks Using Bloom Filters," in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, August 2015. doi: 10.1109/TDSC.2015.2467385
Abstract: Given the scalability of the Advanced Metering Infrastructure (AMI) networks, maintenance and access of certificate revocation lists (CRLs) pose new challenges. It is inefficient to create one large CRL for all the smart meters (SMs) or create a customized CRL for each SM since too many CRLs will be required. In order to tackle the scalability of the AMI network, we divide the network into clusters of SMs, but there is a tradeoff between the overhead at the certificate authority (CA) and the overhead at the clusters. We use Bloom filters to reduce the size of the CRLs in order to alleviate this tradeoff by increasing the clusters' size with acceptable overhead. However, since Bloom filters suffer from false positives, there is a need to handle this problem so that SMs will not discard important messages due to falsely identifying the certificate of a sender as invalid. To this end, we propose two certificate revocation schemes that can identify and nullify the false positives. While the first scheme requires contacting the gateway to resolve them, the second scheme requires the CA to additionally distribute the list of certificates that trigger false positives. Using mathematical models, we have demonstrated that the probability of contacting the gateway in the first scheme and the overhead of the second scheme can be very low by properly designing the Bloom filters. In order to assess the scalability and validate the mathematical formulas, we have implemented the proposed schemes using Visual C. The results indicate that our schemes are much more scalable than the conventional CRL and the mathematical and simulation results are almost identical. Moreover, we simulated the distribution of the CRLs in a wireless mesh-based AMI network using the ns-3 network simulator and assessed its distribution overhead.
Keywords: Companies; Logic gates; Public key; Relays; Scalability; Smart grids; AMI; Certificate revocation; Public key infrastructure; public key cryptography; smart grid security (ID#: 15-8631)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7192615&isnumber=4358699
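The Bloom-filter tradeoff at the heart of this abstract, a compact set representation that admits false positives but never false negatives, can be illustrated with a minimal sketch (the class, parameters, and certificate serial numbers below are illustrative, not taken from the paper):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _indices(self, item):
        # Derive k independent indices by salting a cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[idx] for idx in self._indices(item))

# A compact stand-in for a CRL: revoked certificate serial numbers.
revoked = BloomFilter()
for serial in ("cert-1001", "cert-2002", "cert-3003"):
    revoked.add(serial)

assert revoked.might_contain("cert-2002")  # revoked certs are always flagged
# A valid certificate is usually reported clean, but a false positive is
# possible; the paper's two schemes exist to identify and nullify exactly
# those cases.
print(revoked.might_contain("cert-9999"))
```

Sizing m and k against the expected number of revoked certificates is what controls the false-positive rate, and hence how often the gateway (first scheme) or the CA's extra list (second scheme) is needed.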
Hu, K.; Chandrikakutty, H.; Goodman, Z.; Tessier, R.; Wolf, T., "Dynamic Hardware Monitors for Network Processor Protection," in Computers, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, May 2015. doi: 10.1109/TC.2015.2435750
Abstract: The importance of the Internet for society is increasing. To ensure a functional Internet, its routers need to operate correctly. However, the need for router flexibility has led to the use of software-programmable network processors in routers, which exposes these systems to data plane attacks. Recently, hardware monitors have been introduced into network processors to verify the expected behavior of processor cores at run time. If instruction-level execution deviates from the expected sequence, an attack is identified, triggering processor core recovery efforts. In this manuscript, we describe a scalable network processor monitoring system that supports the reallocation of hardware monitors to processor cores in response to workload changes. The scalability of our monitoring architecture is demonstrated using theoretical models, simulation, and router system-level experiments implemented on an FPGA-based hardware platform. For a system with four processor cores and six monitors, the monitors result in a 6% logic and 38% memory bit overhead versus the processor's core logic and instruction storage. No slowdown of system throughput due to monitoring is reported.
Keywords: Hardware; Internet; Monitoring; Multicore processing; Process control; Runtime; FPGA; data plane attack; hardware monitor; multicore processor; network infrastructure; network security (ID#: 15-8632)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7110561&isnumber=4358213
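The run-time check such monitors perform, comparing instruction-level execution against an expected sequence, can be modeled in a few lines of software (the instruction names and the expected-flow graph below are hypothetical; the paper's monitors operate in hardware on real opcode streams):

```python
def build_monitor_graph(transitions):
    """Expected control flow as a map: instruction -> set of valid successors."""
    graph = {}
    for src, dst in transitions:
        graph.setdefault(src, set()).add(dst)
    return graph

def check_trace(graph, trace):
    """Return the index of the first deviation from the expected
    instruction sequence, or None if the whole trace is valid."""
    for i in range(len(trace) - 1):
        if trace[i + 1] not in graph.get(trace[i], set()):
            return i + 1
    return None

# Expected behavior of a (hypothetical) packet-processing routine.
graph = build_monitor_graph([
    ("load_hdr", "check_ttl"), ("check_ttl", "lookup"),
    ("check_ttl", "drop"), ("lookup", "forward"),
])

assert check_trace(graph, ["load_hdr", "check_ttl", "lookup", "forward"]) is None
# An injected jump straight to 'forward' (skipping the TTL check) is caught:
print(check_trace(graph, ["load_hdr", "forward"]))
```

A deviation at any index would, in the hardware setting, trigger the core recovery described in the abstract.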
Zhang, Y.; Li, D.; Sun, Z.; Zhao, F.; Su, J.; Lu, X., "CSR: Classified Source Routing in DHT-Based Networks," in Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, June 2015. doi: 10.1109/TCC.2015.2440242
Abstract: In recent years, cloud computing has provided a new way to address the constraints of limited energy, capabilities, and resources. Distributed hash table (DHT) based networks have become increasingly important for efficient communication in large-scale cloud systems. Previous studies mainly focus on improving performance aspects such as latency, scalability, and robustness, but seldom consider the security demands on the routing paths, for example, bypassing untrusted intermediate nodes. Inspired by Internet source routing, in which the source nodes specify the routing paths taken by their packets, this paper presents CSR, a tag-based, Classified Source Routing scheme in DHT-based cloud networks to satisfy the security demands on the routing paths. Different from Internet source routing, which requires some map of the overall network, CSR operates in a distributed manner where nodes with a certain security level are tagged with a label and routing messages requiring that level of security are forwarded only to the qualified next-hops. We show how this can be achieved efficiently, by simple extensions of the traditional routing structures, and safely, so that the routing is uniformly convergent. The effectiveness of our proposals is demonstrated through theoretical analysis and extensive simulations.
Keywords: Cloud computing; Robustness; Routing; Security; Servers; Topology; CSR (classified source routing); DLG-de Bruijn (DdB); distributed hash table (DHT); path diversity; tag (ID#: 15-8633)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116526&isnumber=6562694
Ding, J.; Bouabdallah, A.; Tarokh, V., "Key Pre-Distributions From Graph-Based Block Designs," in Sensors Journal, IEEE, vol. PP, no. 99, pp. 1-1, June 2015. doi: 10.1109/JSEN.2015.2501429
Abstract: With the development of wireless communication technologies, which has considerably contributed to the development of wireless sensor networks (WSNs), we have witnessed an ever-increasing number of WSN-based applications, which has induced a host of research activities in both academia and industry. Since most of the target WSN applications are very sensitive, security is one of the major challenges in the deployment of WSNs. One of the important building blocks in securing WSNs is key management. Traditional key management solutions developed for other networks are not suitable for WSNs, since these networks are resource (e.g., memory, computation, energy) limited. Key pre-distribution algorithms have recently evolved as efficient alternatives for key management in these networks. Secure communication is achieved between a pair of nodes either by the existence of a key allowing for direct communication or by a chain of keys forming a key-path between the pair. In this paper, we consider prior knowledge of network characteristics and application constraints in terms of communication needs between sensor nodes, and we propose methods to design key pre-distribution schemes that provide better security and connectivity while requiring fewer resources. Our methods are based on casting the prior information as a graph. Motivated by this idea, we also propose a class of quasi-symmetric designs, referred to here as g-designs. Our proposed key pre-distribution schemes significantly improve upon the existing constructions based on unital designs. We give some examples, and point out open problems for future research.
Keywords: Knowledge engineering; Military computing; Probabilistic logic; Scalability; Security; Sensors; Wireless sensor networks; Balanced incomplete block design; graph; key pre-distribution; quasi-symmetric design; sensor networks (ID#: 15-8634)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7331238&isnumber=4427201
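For readers unfamiliar with key pre-distribution, the baseline idea such papers build on can be sketched with the classic random scheme, in which each node is pre-loaded with a random subset of a global key pool (all parameters below are illustrative; the paper's g-design constructions replace the random subset choice with structured block designs informed by the communication graph):

```python
import random

random.seed(7)

def predistribute(n_nodes=50, pool_size=100, ring_size=10):
    """Random key pre-distribution: each sensor node is loaded with a
    random 'key ring' drawn from a shared pool before deployment."""
    pool = list(range(pool_size))
    return [set(random.sample(pool, ring_size)) for _ in range(n_nodes)]

def can_communicate(rings, a, b):
    """Two nodes share a direct secure link iff their key rings intersect;
    otherwise a key-path through intermediaries is needed."""
    return bool(rings[a] & rings[b])

rings = predistribute()
linked = sum(can_communicate(rings, 0, j) for j in range(1, 50))
print(f"node 0 shares a key with {linked}/49 other nodes")
```

Tuning pool and ring sizes trades memory per node against connectivity and resilience to node capture, which is exactly the tradeoff structured designs aim to improve.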
Guan, S.; De Grande, R.; Boukerche, A., "A Multi-layered Scheme for Distributed Simulations on the Cloud Environment," in Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, July 2015. doi: 10.1109/TCC.2015.2453945
Abstract: In order to improve simulation performance and to integrate simulation resources among geographically distributed locations, the concept of distributed simulation is proposed. Several types of distributed simulation standards, such as DIS and HLA, are established to formalize simulations and achieve reusability and interoperability of simulation components. To implement these distributed simulation standards and to manage the underlying system of distributed simulation applications, we employ Grid Computing and Cloud Computing technologies. These tackle the details of operation, configuration, and maintenance of simulation platforms in which simulation applications are deployed. However, for modelers who may not be familiar with the management of distributed systems, it is challenging to make a simulation-run-ready environment among different types of computing resources and network environments. In this article, a new multi-layered cloud-based scheme is proposed for enabling modeling and simulation based on different distributed simulation standards. This scheme is designed to ease the management of underlying resources and to achieve rapid elasticity that can provide unlimited computing capability to end users; it considers energy consumption, security, multi-user availability, scalability, and deployment issues. A mechanism for handling diverse network environments is described; by adopting it, idle public resources can be easily configured as additional computing capabilities for the local resource pool. A fast deployment model is built to relieve the migration and installation process of this platform. An energy-saving strategy is utilized to reduce the consumption of computing resources. Security components are implemented to protect sensitive information and block malicious attacks in the cloud. In the experiments, the proposed scheme is compared with its corresponding grid computing platform; the cloud computing platform achieves similar performance, but incorporates many advantages that the Cloud can provide.
Keywords: Analytical models; Cloud computing; Computational modeling; Energy consumption; Load modeling; Security; Standards; Availability; Cloud Computing; DIS; Distributed Simulations; Elasticity; Energy Consumption; HLA; Usability (ID#: 15-8635)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152867&isnumber=6562694
Su, S.; Teng, Y.; Cheng, X.; Xiao, K.; Li, G.; Chen, J., "Privacy-Preserving Top-k Spatial Keyword Queries in Untrusted Cloud Environments," in Services Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, September 2015. doi: 10.1109/TSC.2015.2481900
Abstract: With the rapid development of location-based services in the mobile Internet, spatial keyword queries have been widely employed in various real-life applications in recent years. To realize the great flexibility and cost savings, more and more data owners are motivated to outsource their spatio-textual data services to the cloud. However, directly outsourcing such services to the untrusted cloud may raise serious privacy concerns. In this paper, we study the privacy-preserving top-k spatial keyword query problem in untrusted cloud environments. Existing studies primarily focus on the design of privacy-preserving schemes for either spatial or keyword queries, and they cannot be applied to solve the privacy-preserving spatial keyword query problem. To address this problem, we present a novel privacy-preserving top-k spatial keyword query scheme. In particular, we build an encrypted tree index to facilitate privacy-preserving top-k spatial keyword queries, where spatial and textual data are encrypted in a unified way. To search with the encrypted tree index, we propose two effective techniques for the similarity computations between queries and tree nodes under encryption. To improve query performance on large-scale spatio-textual data, we further propose a keyword-based secure pruning method. Thorough analysis shows the validity and security of our scheme. Extensive experimental results on real datasets demonstrate that our scheme achieves high efficiency and good scalability.
Keywords: Encryption; Indexes; Noise; Privacy; Servers; Spatial databases; Cloud computing; Data outsourcing; Location-based Services; Privacy; Spatial keyword query (ID#: 15-8636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275181&isnumber=4629387
Chandrasekhar, S.; Singhal, M., "Efficient and Scalable Query Authentication for Cloud-based Storage Systems with Multiple Data Sources," in Services Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, November 2015. doi: 10.1109/TSC.2015.2500568
Abstract: Storage services are among the primary cloud computing offerings, providing advantages of scale, cost and availability to its customers. However, studies and past experiences show that large-scale storage service can be unreliable, and vulnerable to various internal and external threats that cause loss and/or corruption of customer data. In this work, we present a query authentication scheme for cloud-based storage system where the data is populated by multiple sources and retrieved by the clients. The system allows clients to verify the authenticity and integrity of the retrieved data in a scalable and efficient way, without requiring implicit trust on the storage service provider. The proposed mechanism is based on our recently proposed multi-trapdoor hash functions, using its properties to achieve near constant communication and computation overhead for authenticating query responses, regardless of the data size, or the number of sources. We develop a discrete log-based instantiation of the scheme and evaluate its security and performance. Our security analysis shows that forging the individual or aggregate authentication tags is infeasible under the discrete log assumption. Our performance evaluation demonstrates that the proposed scheme achieves superior efficiency and scalability compared to existing query authentication schemes offering support for multiple sources.
Keywords: Aggregates; Authentication; Cloud computing; Databases; Organizations; Scalability; Cloud-based storage systems; aggregate authentication tags; discrete log; multi-trapdoor hash functions; query authentication (ID#: 15-8637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328758&isnumber=4629387
Yan, Q.; Yu, R.; Gong, Q.; Li, J., "Software-Defined Networking (SDN) and Distributed Denial of Service (DDoS) Attacks in Cloud Computing Environments: A Survey, Some Research Issues, and Challenges," in Communications Surveys & Tutorials, IEEE, vol. PP, no. 99, pp. 1-1, November 2015. doi: 10.1109/COMST.2015.2487361
Abstract: Distributed Denial of Service (DDoS) attacks in cloud computing environments are growing due to the essential characteristics of cloud computing. With recent advances in software-defined networking (SDN), SDN-based cloud brings us new chances to defeat DDoS attacks in cloud computing environments. Nevertheless, there is a contradictory relationship between SDN and DDoS attacks. On one hand, the capabilities of SDN, including software-based traffic analysis, centralized control, global view of the network, dynamic updating of forwarding rules, make it easier to detect and react to DDoS attacks. On the other hand, the security of SDN itself remains to be addressed, and potential DDoS vulnerabilities exist across SDN platforms. In this paper, we discuss the new trends and characteristics of DDoS attacks in cloud computing, and provide a comprehensive survey of defense mechanisms against DDoS attacks using SDN. In addition, we review the studies about launching DDoS attacks on SDN, as well as the methods against DDoS attacks in SDN. To the best of our knowledge, the contradictory relationship between SDN and DDoS attacks has not been well addressed in previous works. This work can help to understand how to make full use of SDN’s advantages to defeat DDoS attacks in cloud computing environments and how to prevent SDN itself from becoming a victim of DDoS attacks, which are important for the smooth evolution of SDN-based cloud without the distraction of DDoS attacks.
Keywords: Cloud computing; Computer architecture; Computer crime; Control systems; Scalability; Virtualization; Cloud Computing; Distributed Denial of Service Attacks (DDoS); Software Defined Networking (SDN) (ID#: 15-8638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289347&isnumber=5451756
Nitti, M.; Pilloni, V.; Colistra, G.; Atzori, L., "The Virtual Object as a Major Element of the Internet of Things: a Survey," in Communications Surveys & Tutorials, IEEE, vol. PP, no. 99, pp. 1-1, November 2015. doi: 10.1109/COMST.2015.2498304
Abstract: The Internet of Things (IoT) paradigm has been evolving towards the creation of a cyber-physical world where everything can be found, activated, probed, interconnected, and updated, so that any possible interaction, both virtual and/or physical, can take place. A crucial concept of this paradigm is that of the virtual object, which is the digital counterpart of any real (human or lifeless, static or mobile, solid or intangible) entity in the IoT. It has now become a major component of current IoT platforms, supporting the discovery and mash-up of services, fostering the creation of complex applications, improving the objects' energy-management efficiency, as well as addressing heterogeneity and scalability issues. This paper aims at providing the reader with a survey of the virtual object in the IoT world. Virtualness is addressed from several perspectives: the historical evolution of its definitions; the current functionalities assigned to the virtual object and how they tackle the main IoT challenges; and the major IoT platforms which implement these functionalities. Finally, we illustrate the lessons learned after having acquired a comprehensive view of the topic.
Keywords: Context; Internet of things; Security; Semantics; Sensors; Tutorials; Virtualization; Internet of Things; IoT architectural solutions; virtual objects (ID#: 15-8639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7320954&isnumber=5451756
Zeng, W.; Zhang, Y.; Chow, Mo-Yuen, "Resilient Distributed Energy Management Subject to Unexpected Misbehaving Generation Units," in Industrial Informatics, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, October 2015. doi: 10.1109/TII.2015.2496228
Abstract: Distributed energy management algorithms are being developed for the smart grid to efficiently and economically allocate electric power among connected distributed generation units and loads. The use of such algorithms provides flexibility, robustness, and scalability, while it also increases the vulnerability of smart grid to unexpected faults and adversaries. The potential consequences of compromising the power system can be devastating to public safety and economy. Thus, it is important to maintain the acceptable performance of distributed energy management algorithms in a smart grid environment under malicious cyberattacks. In this paper, a neighborhood-watch based distributed energy management algorithm is proposed to guarantee the accurate control computation in solving the economic dispatch problem in the presence of compromised generation units. The proposed method achieves the system resilience by performing a reliable distributed control without a central coordinator and allowing all the well-behaving generation units to reach the optimal operating point asymptotically. The effectiveness of the proposed method is demonstrated through case studies under several different adversary scenarios.
Keywords: Algorithm design and analysis; Energy management; Integrated circuits; Resilience; Security; Smart grids; Economic dispatch; neighborhood-watch; resilient distributed energy management (ID#: 15-8640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312956&isnumber=4389054
Goryczka, S.; Xiong, L., "A Comprehensive Comparison of Multiparty Secure Additions with Differential Privacy," in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, October 2015. doi: 10.1109/TDSC.2015.2484326
Abstract: This paper considers the problem of secure data aggregation (mainly summation) in a distributed setting, while ensuring differential privacy of the result. We study secure multiparty addition protocols using well known security schemes: Shamir’s secret sharing, perturbation-based, and various encryptions. We supplement our study with our new enhanced encryption scheme EFT, which is efficient and fault tolerant. Differential privacy of the final result is achieved by either distributed Laplace or Geometric mechanism (respectively DLPA or DGPA), while approximated differential privacy is achieved by diluted mechanisms. Distributed random noise is generated collectively by all participants, which draw random variables from one of several distributions: Gamma, Gauss, Geometric, or their diluted versions. We introduce a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise with efficiency. We compare complexity and security characteristics of the protocols with different differential privacy mechanisms and security schemes. More importantly, we implemented all protocols and present an experimental comparison on their performance and scalability in a real distributed environment. Based on the evaluations, we identify our security scheme and Laplace DLPA as the most efficient for secure distributed data aggregation with differential privacy.
Keywords: Cryptography; Data privacy; Distributed databases; Noise; Privacy; Protocols; Distributed differential privacy; decentralized noise generation; redundant noise; secure multiparty computations (ID#: 15-8641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286780&isnumber=4358699
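The distributed Laplace mechanism (DLPA) discussed in the abstract relies on the infinite divisibility of the Laplace distribution: each of the n parties adds the difference of two Gamma(1/n, b) draws to its value, and the noise shares sum to a single Laplace(b) sample, so no participant ever sees a noise-free partial result. A minimal sketch of that idea (parameter names and values are illustrative, not the paper's protocol, and the secure-addition layer is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_distributed_sum(values, epsilon=1.0, sensitivity=1.0):
    """Each party perturbs its own value with a noise share; the shares
    sum to one Laplace(sensitivity/epsilon) draw, by the Gamma
    divisibility of the Laplace distribution."""
    n = len(values)
    b = sensitivity / epsilon
    shares = [v + rng.gamma(1.0 / n, b) - rng.gamma(1.0 / n, b)
              for v in values]
    return sum(shares)

values = [3.0, 1.0, 4.0, 1.0, 5.0]
print(noisy_distributed_sum(values))  # true sum 14.0 plus Laplace noise
```

In the actual protocols the perturbed shares would be combined via a secure multiparty addition (secret sharing or encryption) rather than summed in the clear as above.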
Cao, X.; Zhang, C.; Fu, H.; Guo, X.; Tian, Q., "Saliency-Aware Nonparametric Foreground Annotation Based on Weakly Labeled Data," in Neural Networks and Learning Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1-13, October 2015. doi: 10.1109/TNNLS.2015.2488637
Abstract: In this paper, we focus on annotating the foreground of an image. More precisely, we predict both image-level labels (category labels) and object-level labels (locations) for objects within a target image in a unified framework. Traditional learning-based image annotation approaches are cumbersome, because they need to establish complex mathematical models and be frequently updated as the scale of training data varies considerably. Thus, we advocate the nonparametric method, which has shown potential in numerous applications and turned out to be attractive thanks to its advantages, i.e., lightweight training load and scalability. In particular, we exploit the salient object windows to describe images, which is beneficial to image retrieval and, thus, the subsequent image-level annotation and localization tasks. Our method, namely, saliency-aware nonparametric foreground annotation, is practical to alleviate the full label requirement of training data, and effectively addresses the problem of foreground annotation. The proposed method only relies on retrieval results from the image database, while pretrained object detectors are no longer necessary. Experimental results on the challenging PASCAL VOC 2007 and PASCAL VOC 2008 demonstrate the advance of our method.
Keywords: Computational efficiency; Data models; Detectors; Image retrieval; Training; Training data; Foreground annotation; nonparametric; saliency aware; weakly labeled (ID#: 15-8642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7307208&isnumber=6104215
Todoran Koitz, I.; Glinz, M., "A Fuzzy Galois Lattices Approach to Requirements Elicitation for Cloud Services," in Services Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, August 2015. doi: 10.1109/TSC.2015.2466538
Abstract: The cloud paradigm has become increasingly attractive throughout the recent years due to its both technical and economic foreseen impact. Therefore, researchers’ and practitioners’ attention has been drawn to enhancing the technological characteristics of cloud services, such as performance, scalability or security. However, the topic of identifying and understanding cloud consumers’ real needs has largely been ignored. Existing requirements elicitation methods are not appropriate for the cloud computing domain, where consumers are highly heterogeneous and geographically distributed, have frequent change requests and expect services to be delivered at a fast pace. In this paper, we introduce a new approach to requirements elicitation for cloud services, which utilizes consumers’ advanced search queries for services to infer requirements that can lead to new cloud solutions. For this, starting from the queries, we build fuzzy Galois lattices that can be used by public cloud providers to analyze market needs and trends, as well as optimum solutions for satisfying the largest populations possible with a minimum set of features implemented. This new approach complements the existing requirements elicitation techniques in that it is a dedicated cloud method which operates with data that already exists, without entailing the active participation of consumers and requirements specialists.
Keywords: Algorithm design and analysis; Analytical models; Cloud computing; Computational modeling; Data models; Lattices; Unified modeling language; Galois lattice; cloud services; data analysis; requirements elicitation (ID#: 15-8643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7185443&isnumber=4629387
Premarathne, U.; Khalil, I.; Tari, Z.; Zomaya, A., "Cloud-based Utility Service Framework for Trust Negotiations using Federated Identity Management," in Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, February 2015. doi: 10.1109/TCC.2015.2404816
Abstract: Utility-based cloud services can efficiently provide various supportive services to different service providers. Trust negotiations with federated identity management are vital for preserving privacy in open systems such as distributed collaborative systems. However, because of the large amount of server-based communication involved in trust negotiations, scalability issues prove to be less cumbersome when offloaded onto the cloud as a utility service. In this view, we propose trust-based federated identity management as a cloud-based utility service. The main component of this model is the trust establishment between the cloud service provider and the identity providers. We propose novel trust metrics based on the potential vulnerability to be attacked and the available security enforcements, and a novel cost metric based on policy dependencies to rank the cooperativeness of identity providers. Practical use of these trust metrics is demonstrated by analyses using simulated data sets, attack history data published by the MIT Lincoln Laboratory, real-life attacks and vulnerabilities extracted from the Common Vulnerabilities and Exposures (CVE) repository, and fuzzy rule based evaluations. The results of the evaluations imply the significance of the proposed trust model to support cloud-based utility services and to ensure reliable trust negotiations using federated identity management.
Keywords: Authorization; Cloud computing; Collaboration; Computational modeling; Interoperability; Measurement; Reliability; Cloud; Distributed Collaborative Services; Federated Identity Management; Trust; Utility Computing (ID#: 15-8644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045552&isnumber=6562694
Sood, K.; Yu, S.; Xiang, Y., "Software Defined Wireless Networking Opportunities and Challenges for Internet of Things: A Review," in Internet of Things Journal, IEEE, vol. PP, no. 99, pp. 1-1, September 2015. doi: 10.1109/JIOT.2015.2480421
Abstract: With the emergence of the Internet of Things (IoT), there is now growing interest in simplifying wireless network controls. This is a very challenging task, comprising information acquisition, information analysis, decision making, and action implementation on large-scale IoT networks, and it has prompted research exploring the integration of Software Defined Networking (SDN) and IoT for simpler, easier, and less strained network control. SDN is a promising novel paradigm shift which has the capability to enable a simplified and robust programmable wireless network serving an array of physical objects and applications. This review article starts with the emergence of SDN and then highlights recent significant developments in the wireless and optical domains with the aim of integrating SDN and IoT. Challenges in SDN and IoT integration are also discussed from both security and scalability perspectives.
Keywords: Control systems; Handover; Internet of things; Protocols; Software; Wireless communication; Internet of Things; SDN; SDN Use Case; Software Defined Wireless Networks (SDWN) (ID#: 15-8645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279061&isnumber=6702522
Byun, H.; So, J., "Node Scheduling Control Inspired by Epidemic Theory for Data Dissemination in Wireless Sensor-Actuator Networks with Delay Constraints," in Wireless Communications, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, November 2015. doi: 10.1109/TWC.2015.2496596
Abstract: Wireless sensor-actuator networks (WSANs) enhance the existing wireless sensor networks (WSNs) by equipping sensor nodes with actuators. The actuators work with the sensor nodes to perform application-specific operations. The WSAN systems have several applications such as disaster relief, intelligent building management, military surveillance, health monitoring, and infrastructure security. These applications require the capability of fast data dissemination in order to act responsively to events. However, due to strict resource constraints of the nodes, WSANs pose significant challenges in network protocol design to support applications with delay requirements. Biologically inspired modeling techniques have received considerable attention for achieving robustness, scalability, and adaptability, while retaining individual simplicity. Specifically, data dissemination, packet routing, and broadcasting protocols for wireless networks have been modeled by epidemic theory. However, existing bio-inspired algorithms are mostly based on predefined heuristics and fixed parameters, and thus it is difficult for them to achieve the desired level of performance under dynamic environments. In order to solve this problem, we propose an epidemic-inspired algorithm for data dissemination in WSANs which automatically controls node states to meet the delay requirements while minimizing energy consumption. Through mathematical analysis, the behavior of the algorithm in terms of convergence time and steady state can be predicted. Also, the analysis shows that the system achieves stability, and derives parameter conditions for achieving the stability. Finally, extensive simulation results indicate that the proposed scheme outperforms existing protocols in achieving delay requirements and conserving energy.
Keywords: Actuators; Adaptation models; Delays; Protocols; Wireless networks; Wireless sensor networks; Wireless sensor-actuator networks; delay constraints; epidemics; node scheduling (ID#: 15-8646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314970&isnumber=4656680
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
![]() |
Hash Algorithms 2015 |
Hashing algorithms are used extensively in information security and forensics. Research focuses on new methods and techniques to optimize security. The articles cited here cover topics such as Secure Hash Algorithm (SHA)-1 and SHA-3, one-time password generation, the Keccak and McEliece algorithms, and Bloom filters. All were presented in 2015.
Aggarwal, K.; Verma, H.K., "Hash_RC6 — Variable Length Hash Algorithm using RC6," in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, pp. 450-456, 19-20 March 2015.
doi: 10.1109/ICACEA.2015.7164747
Abstract: In this paper, we present a hash algorithm using RC6 that can generate a hash value of variable length. Hash algorithms play a major part in cryptographic security, as these algorithms are used to check the integrity of a received message. It is possible to build a hash algorithm using a symmetric block cipher, the main idea being that if the symmetric block algorithm is secure then the generated hash function will also be secure [1]. As RC6 is secure against various linear and differential attacks, the algorithm presented here will also be secure against these attacks. The algorithm presented here can use a variable number of rounds to generate the hash value, and can also have a variable block size.
Keywords: cryptography; Hash_RC6 - variable length hash algorithm; cryptographic security; differential attacks algorithm; generated hash function; linear attack algorithm; received message; symmetric block algorithm; symmetric block cipher; Ciphers; Computers; Encryption; Receivers; Registers; Throughput; Access Control; Asymmetric Encryption; Authentication; Confidentiality; Cryptography; Data Integrity; Hash; Non-Repudiation; RC6; Symmetric Encryption (ID#: 15-8647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164747&isnumber=7164643
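The construction the abstract refers to, building a hash from a symmetric block cipher, can be sketched with a Davies-Meyer compression function. RC6 itself is not in the Python standard library, so the sketch below substitutes a small XTEA routine as the cipher; the padding rule and initial value are illustrative assumptions, not the paper's scheme.

```python
import struct

# Toy XTEA block cipher (64-bit block, 128-bit key), standing in for RC6,
# which is not available in the standard library.
def xtea_encrypt(block, key, rounds=32):
    v0, v1 = block
    mask, delta, s = 0xFFFFFFFF, 0x9E3779B9, 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & mask
    return v0, v1

def block_cipher_hash(message: bytes) -> bytes:
    # Davies-Meyer: each 16-byte message block keys the cipher, the chaining
    # value is the plaintext, and the feed-forward XOR makes each step one-way.
    padded = message + b"\x80"
    padded += b"\x00" * (-len(padded) % 16)   # toy padding, not MD-strengthened
    h0, h1 = 0x01234567, 0x89ABCDEF           # arbitrary initial value
    for i in range(0, len(padded), 16):
        k = struct.unpack(">4I", padded[i:i + 16])
        c0, c1 = xtea_encrypt((h0, h1), k)
        h0, h1 = c0 ^ h0, c1 ^ h1
    return struct.pack(">2I", h0, h1)
```

The feed-forward XOR is what prevents simply decrypting the output back to the previous chaining value; the cipher's security is inherited by the hash, which is the paper's central claim for RC6.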
Pandiaraja, P.; Parasuraman, S., "Applying Secure Authentication Scheme to Protect DNS from Rebinding Attack Using Proxy," in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, pp. 1-6, 19-20 March 2015. doi: 10.1109/ICCPCT.2015.7159255
Abstract: Internet is critical to both the economy and society in today's world. The Domain Name System (DNS) is a key building block of the internet; it hides all the technical infrastructure, software and hardware required for the domain name system to function correctly. It allows users to access websites and exchange emails, and it runs a strong mechanism to provide the IP address for an internet host name. An attacker can launch a rebinding attack when the DNS server sends a query to any particular server on the network. Different types of techniques have been proposed to prevent this attack, all of which have some pros and also cons. A new technique is proposed in this paper using a security proxy with a hash function, by which the rebinding attack can be avoided. It provides a secured environment for one DNS to communicate with another. When the source DNS receives a response from any DNS, it authenticates all of the received packets and then sends the data to the client, giving a secure environment for DNS communication. For this purpose two different algorithms are used, namely the SHA-2 and AES algorithms. First a random ID is assigned to the query, and then the query is sent to the DNS server.
Keywords: Internet; authorisation; computer network security; cryptography; AES algorithms; DNS communication; DNS server; IP address; Internet host name; SHA-2; Websites; domain name system; emails; hash function; random ID; rebinding attack; secure authentication scheme; security proxy; Computer crime; Cryptography; Electronic mail; IP networks; Receivers; Servers; Advanced Encryption Standard; Proxy; Rebinding attack; Secure Hash algorithm (ID#: 15-8648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159255&isnumber=7159156
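The abstract's combination of a random query ID with a SHA-2 check can be sketched with standard-library routines. The shared proxy secret and the use of HMAC-SHA-256 (rather than a bare hash, which would be open to length-extension tricks) are assumptions for illustration, not the paper's exact construction.

```python
import hashlib
import hmac
import os

def tag_query(query: bytes, secret: bytes):
    # Attach a random ID so a response can be matched to exactly one query,
    # and a keyed tag so a forged response cannot be substituted.
    query_id = os.urandom(16)
    tag = hmac.new(secret, query_id + query, hashlib.sha256).digest()
    return query_id, tag

def verify_response(query: bytes, secret: bytes, query_id: bytes, tag: bytes) -> bool:
    expected = hmac.new(secret, query_id + query, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A rebinding response that alters the queried name fails verification at the proxy, since the tag was bound to the original query bytes.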
Dakhore, S.; Lohiya, P., "Location Aware Selective Unlocking & Secure Verification Safer Card for Enhancing RFID Security by Using SHA-3," in Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, pp. 477-482, 1-2 May 2015. doi: 10.1109/ICACCE.2015.65
Abstract: In this paper, we report a new approach for providing security as well as privacy to the corporate user. With the help of a location-sensing mechanism using GPS, we can avoid unauthorized reading and relay attacks on an RFID system. For example, the location-sensing mechanism with an RFID card is used for location-specific applications such as an ATM cash-transfer van, to open the door of the van. After reaching the pre-specified location (the ATM), the RFID card becomes active and then accepts the fingerprint of the registered person only. In this way we get stronger cross-layer security. The SHA-3 algorithm is used to avoid the collision (due to fraudulent fingerprints) effect on the server side.
Keywords: Global Positioning System; banking; cryptography; fingerprint identification; mobility management (mobile radio); radiofrequency identification; relay networks (telecommunication); smart cards; telecommunication security; ATM cash transfer van; GPS; Global Positioning System; RFID card; RFID security; RFID system; SHA-3 algorithm; Secure Hash Algorithm 3; cross layer security; fingerprint; location aware selective unlocking; location sensing mechanism; location specific application; relay attacks; secure verification; Fingerprint recognition; Global Positioning System; Privacy; Radiofrequency identification; Relays; Security; Servers; Java Development Kit (JDK); Location Aware Selective Unlocking; RFID; Secure Hash Algorithm (ID#: 15-8649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306732&isnumber=7306547
Ragini, K.; Sivasankar, S., "Security and Performance Analysis of Identity Based Schemes in Sensor Networks," in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, pp. 1-5, 19-20 March 2015. doi: 10.1109/ICIIECS.2015.7192881
Abstract: Secure and efficient data transmission without any hurdles caused by external attackers is an issue in sensor networks. This paper deals with the provision of assured, efficient data transmission in sensor networks. To ensure this requirement, a Hash-based Message Authentication Code (HMAC) and Message Digest (MD) are envisaged by employing an identity-based digital signature scheme (IBS). An identity-based scheme is an encryption scheme that generates a secret code with a secret key, protecting the data during transmission without exposure to cryptanalysis. To achieve the above requisite, the modalities used in HMAC and MD5 are simulated to assess the functional efficiency and security of data transmission in sensor networks.
Keywords: data communication; data protection; digital signatures; private key cryptography; telecommunication security; wireless sensor networks; HMAC; IBS; MD; data protection; data transmission security; hash based message authentication code; identity based digital signature scheme; message digest; secret key encryption scheme; wireless sensor network security; Authentication; Cryptography; Data communication; Message authentication; Wireless sensor networks; HMAC; Hash algorithm; IBS; MD5; Security (ID#: 15-8650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7192881&isnumber=7192777
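The HMAC construction the abstract pairs with MD5 is available directly in Python's standard library. A minimal sketch of tagging and verifying a sensor reading follows; the key and payload are invented for illustration, and in new designs a SHA-2 family digest would normally replace MD5.

```python
import hashlib
import hmac

key = b"shared-sensor-key"        # assumed pre-shared key between nodes
reading = b"node=07;temp=21.5"    # illustrative sensor payload

# Sender computes the HMAC tag over the payload.
tag = hmac.new(key, reading, hashlib.md5).hexdigest()

def verify(key: bytes, payload: bytes, tag: str) -> bool:
    # Receiver recomputes the tag and compares in constant time.
    expected = hmac.new(key, payload, hashlib.md5).hexdigest()
    return hmac.compare_digest(expected, tag)
```

An attacker without the key cannot produce a valid tag for a modified reading, which is the integrity property the paper evaluates.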
Fuss, J.; Gradinger, S.; Greslehner-Nimmervoll, B.; Kolmhofer, R., "Complexity Estimates of a SHA-1 Near-Collision Attack for GPU and FPGA," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 274-280, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.34
Abstract: The complexity estimate of a hash collision algorithm is given by the unit hash compressions. This paper shows that this figure can lead to false runtime estimates when accelerating the algorithm by the use of graphics processing units (GPU) and field-programmable gate arrays (FPGA). For demonstration, parts of the CPU reference implementation of Marc Stevens' SHA-1 Near-Collision Attack are implemented on these two accelerators by taking advantage of their specific architectures. The implementation, runtime behavior and performance of these ported algorithms are discussed, and in conclusion, it is shown that the acceleration results in different complexity estimates for each type of coprocessor.
Keywords: coprocessors; field programmable gate arrays; graphics processing units; FPGA; GPU; complexity estimation; coprocessor; field programmable gate arrays; graphics processing units; hash collision algorithm; unit hash compressions; Complexity theory; Field programmable gate arrays; Graphics processing units; Instruction sets; Kernel; Message systems; Throughput; FPGA; GPU; SHA-1; hash collisions; hash function; near-collision (ID#: 15-8651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299926&isnumber=7299862
Kumar, A.; Arora, V., "Analyzing the Performance and Security by using SHA3 in WEP," in Engineering and Technology (ICETECH), 2015 IEEE International Conference on, pp. 1-4, 20-20 March 2015. doi: 10.1109/ICETECH.2015.7275026
Abstract: This paper deals with the problems arising in WEP and how it can be improved by using SHA3 in WEP. The first part of the paper focuses on WLAN, WEP, encryption and decryption in WEP, and weaknesses of WEP. The second part explains SHA3 (Secure Hash Algorithm-3) and its comparison with earlier versions. The practical work of the paper focuses on performance improvement in WEP by replacing CRC-32 with SHA3, using parameters like Packet Delivery Fraction and End-to-End Delay.
Keywords: cryptography; telecommunication security; SHA3; WEP; end to end delay; packet delivery fraction; secure hash algorithm-3; Communication system security; Conferences; Delays; Encryption; Protocols; Wireless LAN; CRC-32; SHA1; SHA3; WEP; WLAN (ID#: 15-8652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275026&isnumber=7274993
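The motivation for replacing CRC-32 with a cryptographic hash can be demonstrated with standard-library routines: CRC-32 is affine over XOR, so the checksum of an XOR-combination of equal-length messages is predictable, which is what lets an attacker flip WEP payload bits and patch the integrity value. SHA3 digests of related messages share no such structure. The message bytes below are invented for illustration.

```python
import hashlib
import zlib

a = b"attack at dawn!!"
b = b"attack at dusk!!"
zero = bytes(len(a))  # all-zero message of equal length

# CRC-32's affine property: the checksum of the XOR-combined message is
# exactly the XOR of the individual checksums (plus the all-zero term).
combined = bytes(x ^ y ^ z for x, y, z in zip(a, b, zero))
predicted = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(zero)
actual = zlib.crc32(combined)

# SHA3-256 offers no comparable way to predict one digest from others.
da = hashlib.sha3_256(a).digest()
db = hashlib.sha3_256(b).digest()
```

The `predicted == actual` identity is what makes WEP's CRC-based ICV forgeable without the key; no analogous relation lets an attacker patch a SHA3 digest.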
Yi Wang; Youhua Shi; Chao Wang; Yajun Ha, "FPGA-based SHA-3 Acceleration on a 32-bit Processor via Instruction Set Extension," in Electron Devices and Solid-State Circuits (EDSSC), 2015 IEEE International Conference on, pp. 305-308, 1-4 June 2015. doi: 10.1109/EDSSC.2015.7285111
Abstract: As embedded systems play more and more important roles in the Internet of Things (IoT), the integration of cryptographic functionalities is an urgent demand to ensure data and information security. Recently, Keccak was declared the winner of the third generation of the Secure Hashing Algorithm (SHA-3) competition. However, implementing SHA-3 on a specific 32-bit processor fails to meet the performance requirement, while implementing it as a cryptographic coprocessor consumes a lot of extra area and requires a customized driver program. Although implementing Keccak on a 64-bit platform is more efficient, that platform is not suitable for embedded implementation. In this paper, we propose a novel SHA-3 implementation using instruction set extension based on a 32-bit LEON3 processor (an open source processor), with the goals of reducing execution cycles and code size. Experimental results show that the proposed design reduces execution cycles by around 87% and code size by 10.5% as compared to reference designs. Our design takes up only 9.44% extra area with negligible speed overhead compared to the standard LEON3 processor. Compared to existing hardware accelerators, our proposed design occupies only half the area resources and does not require extra driver programs to be developed when integrated into the overall system.
Keywords: coprocessors; cryptography; embedded systems; field programmable gate arrays; instruction sets; 32-bit LEON3 processor; 64-bit platform; FPGA-based SHA-3 acceleration; Internet of things; IoT; Keccak; code size; cryptographic coprocessor; cryptographic functionalities; data security; embedded implementation; embedded systems; execution cycles; information security; instruction set extension; open source processor; secure hashing algorithm; speed overhead; Acceleration; Algorithm design and analysis; Cryptography; Field programmable gate arrays; Hardware; Registers; Throughput (ID#: 15-8653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285111&isnumber=7285012
Vinayaga Sundaram, B.; Ramnath, M.; Prasanth, M.; Varsha Sundaram, J., "Encryption and Hash Based Security in Internet of Things," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-6, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219926
Abstract: The Internet of Things (IoT) promises to be the next big revolution of the World Wide Web. It has a very wide range of applications, ranging from smart cities, smart homes, monitoring radiation levels in nuclear plants, animal tracking, and health surveillance to a lot more. When nodes in wireless sensor networks are monitored through the internet, they become a part of the Internet of Things. This brings in a lot of concerns related to security, privacy, standardization, and power management. This paper aims at enhancing security in smart home systems. Devices like thermostats, air conditioners, doors and lighting systems are connected with each other and the internet through Internet of Things technologies. Encryption and hash algorithms are proposed in this paper through which devices in the IoT can securely send messages between them. The encryption algorithm is used to ensure confidentiality, as attackers cannot interpret the cipher text that is sent. To ensure integrity (that the cipher text is not changed), a hash algorithm is used.
Keywords: Internet; Internet of Things; Web sites; computer network security; cryptography; data integrity; home automation; telecommunication power management; wireless sensor networks; Internet; Internet of Things; World Wide Web; animal tracking; encryption; hash based security; health surveillance; IoT; nuclear plant radiation level monitoring; power management; smart city; smart home system security enhancement; wireless sensor network; Cryptography; Monitoring; Prediction algorithms; Internet of Things; Security; Smart Homes; Wireless Sensor Networks (ID#: 15-8654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219926&isnumber=7219823
Mozaffari-Kermani, M.; Azarderakhsh, R., "Reliable Hash Trees for Post-Quantum Stateless Cryptographic Hash-Based Signatures," in Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS), 2015 IEEE International Symposium on, pp. 103-108, 12-14 Oct. 2015. doi: 10.1109/DFT.2015.7315144
Abstract: The potential advent of quantum computers in coming years has motivated security researchers to start developing resistant systems capable of thwarting future attacks, i.e., developing post-quantum cryptographic approaches. Hash-based, code-based, lattice-based, multivariate-quadratic-equations, and secret-key cryptography are all potential candidates, the merit of which is that they are believed to resist both classical and quantum computers, and applying "Shor's algorithm" - the quantum-computer discrete-logarithm algorithm that breaks classical schemes - to them is infeasible. In this paper, we propose reliable and error detection hash trees for stateless hash-based signatures, which are believed to be one of the prominent post-quantum schemes, offering security proofs relative to plausible properties of the hash function. We note that this work on the emerging area of reliable, error detection post-quantum cryptography can be extended and scaled to other approaches as well. We also note that the proposed approaches make such schemes more reliable against natural faults and help protect them against malicious faults. We propose, benchmark, and discuss fault diagnosis methods for this post-quantum cryptography variant, choosing hash functions as a case study, and present simulation and implementation results to show the applicability of the presented schemes. The presented architectures can be tailored for different reliability objectives based on the resources available, and would initiate the new research area of reliable, error detection post-quantum cryptographic architectures.
Keywords: error detection; fault diagnosis; private key cryptography; Shor algorithm; code-based cryptography; error detection post-quantum cryptographic architecture; fault diagnosis methods; hash function; lattice-based cryptography; multivariate-quadratic-equation; post-quantum stateless cryptographic hash-based signature; quantum-computer discrete-logarithm algorithm; reliable hash tree; secret-key cryptography; Computer architecture; Cryptography; Hardware; Reliability; Transient analysis; Vegetation; Error detection; hash-based signatures; postquantum cryptography; reliability (ID#: 15-8655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7315144&isnumber=7315124
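The hash trees underlying such signature schemes can be sketched as a plain Merkle tree: leaves are hashed, then pairs of nodes are hashed together until a single root remains. SHA-256 and the duplicate-last-node rule for odd levels below are illustrative choices, not the paper's parameters.

```python
import hashlib

def merkle_root(leaves):
    # Hash every leaf, then repeatedly hash adjacent pairs up to the root.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Any corruption of a leaf, whether a natural fault or a malicious one, propagates to a different root, which is the property the paper's error-detection architectures build on.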
Aldwairi, M.; Al-Khamaiseh, K., "Exhaust: Optimizing Wu-Manber Pattern Matching For Intrusion Detection using Bloom filters," in Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, pp. 1-6, 21-23 March 2015. doi: 10.1109/WSWAN.2015.7209081
Abstract: Intrusion detection systems are widely accepted as one of the main tools for monitoring and analyzing host and network traffic to protect data from illegal access or modification. Almost all types of signature-based intrusion detection systems must employ a pattern matching algorithm to inspect packets for malicious signatures. Unfortunately, pattern matching algorithms dominate the execution time and have become the bottleneck. To remedy that, we introduce a new software-based pattern matching algorithm that modifies Wu-Manber pattern matching algorithm using Bloom filters. The Bloom filter acts as an exclusion filter to reduce the number of searches to the large HASH table. The HASH table is accessed if there is a probable match represented by a shift value of zero. On average the HASH table search is skipped 10.6% of the time with a worst case average running time speedup over Wu-Manber of 33%. The maximum overhead incurred on preprocessing time is 1.1% and the worst case increase in memory usage was limited to 0.33%.
Keywords: data structures; digital signatures; search problems; security of data; Bloom filters; HASH table search; Wu-Manber pattern matching; data protection; exclusion filter; execution time; host traffic; network traffic; signature-based intrusion detection systems; Classification algorithms; Filtering algorithms; Filtering theory; Intrusion detection; Matched filters; Pattern matching; Payloads; Bloom Filters; Intrusion Detection Systems; Network Security; Pattern Matching; Wu-Manber (ID#: 15-8656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7209081&isnumber=7209078
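The exclusion-filter idea in the abstract, consult a Bloom filter first and touch the large hash table only on a probable match, can be sketched generically. The bit-array size, hash count, and SHA-256-based index derivation below are illustrative assumptions, not the paper's Wu-Manber integration.

```python
import hashlib

class BloomFilter:
    """Probabilistic membership test: no false negatives, rare false positives."""

    def __init__(self, size_bits: int = 4096, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one Python int

    def _positions(self, item: bytes):
        # Derive several bit positions from seeded SHA-256 digests.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(bytes([seed]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: bytes) -> bool:
        # False here means the expensive hash-table lookup can be skipped.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

A scanner would populate the filter with signature prefixes and call `might_contain` per window, falling through to the big table only on a positive answer.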
Harikrishnan, T.; Babu, C., "Cryptanalysis of Hummingbird Algorithm with Improved Security and Throughput," in VLSI Systems, Architecture, Technology and Applications (VLSI-SATA), 2015 International Conference on, pp. 1-6, 8-10 Jan. 2015. doi: 10.1109/VLSI-SATA.2015.7050460
Abstract: Hummingbird is a lightweight authenticated cryptographic encryption algorithm suitable for resource-constrained devices like RFID tags, smart cards and wireless sensors. The key issue in designing such a cryptographic algorithm is to deal with the trade-off among security, cost and performance and to find an optimal cost-performance ratio. This paper is an attempt to find an efficient hardware implementation of the Hummingbird cryptographic algorithm that achieves improved security and throughput by adding hash functions. In this paper, we have implemented an encryption and decryption core on a Spartan 3E and have compared the results with existing lightweight cryptographic algorithms. The experimental results show that this algorithm has higher security and throughput with improved area compared to the existing algorithms.
Keywords: cryptography; telecommunication security; Hash functions; RFID tags; Spartan 3E; decryption core; hummingbird algorithm cryptanalysis; hummingbird cryptographic algorithm; lightweight authenticated cryptographic encryption algorithm; optimal cost-performance ratio; resource constrained devices; security; smart cards; wireless sensors; Authentication; Ciphers; Logic gates; Protocols; Radiofrequency identification; FPGA Implementation; Lightweight Cryptography; Mutual authentication protocol; Security analysis (ID#: 15-8657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050460&isnumber=7050449
Rao, Muzaffar; Newe, Thomas; Grout, Ian; Lewis, Elfed; Mathur, Avijit, "FPGA Based Reconfigurable IPSec AH Core Suitable for IoT Applications," in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, pp. 2212-2216, 26-28 Oct. 2015. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.327
Abstract: Real-world deployments of Internet of Things (IoT) applications require secure communication. IPSec (Internet Protocol Security) is an important and widely used security protocol (in the IP layer) that provides end-to-end secure communication. Implementation of IPSec is computationally intensive work, which significantly limits the performance of high-speed networks; hardware implementation of IPSec is the best solution to overcome this issue. IPSec includes two main protocols, namely the Authentication Header (AH) and Encapsulating Security Payload (ESP), with two modes of operation, transport mode and tunnel mode. In this work we present an FPGA implementation of the IPSec AH protocol that supports both tunnel and transport modes of operation. The cryptographic hash function Secure Hash Algorithm-3 (SHA-3) is used to calculate the hash value for the AH protocol. The proposed IPSec AH core can be used to provide a data authentication security service to IoT applications.
Keywords: Authentication; Cryptography; Field programmable gate arrays; IP networks; Internet; Protocols; AH; FPGA; IPSec; SHA-3 (ID#: 15-8658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363373&isnumber=7362962
Patil, M.A.; Karule, P.T., "Design and Implementation of Keccak Hash Function for Cryptography," in Communications and Signal Processing (ICCSP), 2015 International Conference on, pp. 0875-0878, 2-4 April 2015. doi: 10.1109/ICCSP.2015.7322620
Abstract: Security has become a very demanding parameter in today's world of high-speed communication. It plays an important role in the network and communication fields where cryptographic processes are involved. These processes involve hash function generation, a one-way encryption code used for the security of data. The main examples include digital signatures, MACs (message authentication codes) and smart cards. Keccak, the SHA-3 (Secure Hash Algorithm), is discussed in this paper; it consists of a padding module and a permutation module and is a one-way encryption process. A high level of parallelism is exhibited by this algorithm, which has been implemented on FPGA. The implementation process is very fast and effective. The algorithm aims at increasing throughput and reducing area.
Keywords: cryptography; digital signatures; field programmable gate arrays; smart cards; telecommunication security; FPGA; Keccak hash function implementation; MAC; SHA-3; cryptographic process; cryptography; data security; digital signature; message authentication code; one-way encryption code; smart card; Algorithm design and analysis; Cryptography; Hardware; Registers; Software; Cryptography; FPGA; encryption; hash function; permutation; security (ID#: 15-8659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322620&isnumber=7322423
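The padding-and-permutation pipeline the abstract describes is exposed in Python's standard library as the `sha3_*` family. The payload below is invented for illustration; the point is that a one-character change is diffused through all permutation rounds, leaving no visible relation between digests.

```python
import hashlib

msg = b"smart-card transaction #4711"      # illustrative payload
tweaked = b"smart-card transaction #4712"  # differs in the final character

d1 = hashlib.sha3_256(msg).hexdigest()
d2 = hashlib.sha3_256(tweaked).hexdigest()
# Keccak pads the input into the 1600-bit state and applies 24 permutation
# rounds per block, so any change avalanches through the whole digest.
```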
Guifen Zhao; Ying Li; Liping Du; Xin Zhao, "Asynchronous Challenge-Response Authentication Solution Based on Smart Card in Cloud Environment," in Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 156-159, 24-26 April 2015. doi: 10.1109/ICISCE.2015.42
Abstract: In order to achieve secure authentication, an asynchronous challenge-response authentication solution is proposed. An SD key, encryption cards or an encryption machine provide the encryption service. A hash function, a symmetric algorithm and a combined secret key method are adopted for authentication. Authentication security is guaranteed by the properties of the hash function, the combined secret key method and the one-time authentication token generation method. Random numbers, a one-time combined secret key and a one-time token are generated on the basis of the smart card, encryption cards and cryptographic techniques, which avoids guessing attacks. Moreover, replay attacks are avoided because of the time factor. The authentication solution is applicable for cloud application systems to realize multi-factor authentication and enhance the security of authentication.
Keywords: cloud computing; message authentication; private key cryptography; smart cards; SD key; asynchronous challenge-response authentication solution; authentication security; cloud application systems; combined secret key method; cryptographic technique; encryption cards; encryption machine; encryption service; hash function; multifactor authentication; one-time authentication token generation method; one-time combined secret key; random number generation; replay attack; smart card; symmetric algorithm; time factor; Authentication; Encryption; Servers; Smart cards; Time factors; One-time password; asynchronous challenge-response authentication; multi-factor authentication; smart card (ID#: 15-8660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120582&isnumber=7120439
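The combination the abstract names, a hash-derived one-time token bound to a server challenge and a time factor, can be sketched with HMAC-SHA-256 standing in for the paper's combined-secret-key construction; the minute-granularity time factor and the field layout are assumptions for illustration.

```python
import hashlib
import hmac
import os
import time

def one_time_token(card_secret: bytes, challenge: bytes, minute: int) -> bytes:
    # The token binds the stored secret, the server's fresh challenge and a
    # time factor, so a captured token is useless for replay later.
    message = challenge + minute.to_bytes(8, "big")
    return hmac.new(card_secret, message, hashlib.sha256).digest()

# Server issues a random challenge; the card answers with the token.
secret = os.urandom(32)
challenge = os.urandom(16)
now = int(time.time() // 60)
token = one_time_token(secret, challenge, now)
```

Because each challenge is random and the minute counter advances, replaying an old token against a new challenge or in a later time window fails verification.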
Amin, R.; Biswas, G.P., "Anonymity Preserving Secure Hash Function Based Authentication Scheme for Consumer USB Mass Storage Device," in Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015
doi: 10.1109/C3IT.2015.7060190
Abstract: A USB (Universal Serial Bus) mass storage device makes a storage device accessible to a host computing device and enables file transfers after mutual authentication between the authentication server and the user is completed. It is also a very popular device because of its portability, large storage capacity and high transmission speed. To protect the privacy of a file transferred to a storage device, several security protocols have been proposed, but none of them is completely free from security weaknesses. Recently He et al. proposed a multi-factor-based security protocol which is efficient, but the protocol is not applicable for practical implementation, as it does not provide a password change procedure, which is an essential phase in any password-based user authentication and key agreement protocol. As the computation and implementation of a cryptographic one-way hash function is more trouble-free than other existing cryptographic algorithms, we propose a lightweight, anonymity-preserving three-factor user authentication and key agreement protocol for consumer mass storage devices and analyze our proposed protocol using BAN logic. Furthermore, we present an informal security analysis of the proposed protocol and confirm that the protocol is completely free from security weaknesses and applicable for practical implementation.
Keywords: cryptographic protocols; file organisation; BAN logic; USB device; anonymity preserving secure hash function based authentication scheme; anonymity preserving three factor user authentication; authentication server; consumer USB mass storage device; consumer mass storage devices; cryptographic algorithms; cryptographic one-way hash function; file transfers; host computing device; informal security analysis; key agreement protocol; multifactor based security protocols; password based user authentication; password change procedure; storage capacity; universal serial bus mass storage device; Authentication; Cryptography; Protocols; Servers; Smart cards; Universal Serial Bus; Anonymity; Attack; File Secrecy; USB MSD; authentication (ID#: 15-8661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060190&isnumber=7060104
Dubey, Gaurav; Khurana, Vikram; Sachdeva, Shelly, "Implementing Security Technique on Generic Database," in Contemporary Computing (IC3), 2015 Eighth International Conference on, pp.370-376, 20-22 Aug. 2015. doi: 10.1109/IC3.2015.7346709
Abstract: Database maintenance has become an important issue in today's world. Addition or alteration of any field in an existing database schema is costly to a corporation. Whenever new data types are introduced or existing types are modified in a conventional relational database system, the physical design of the database must be changed accordingly. For this reason, it is desirable that a database be flexible and allow for the modification and addition of new types of data without having to change the physical database schema. The generic model is designed to allow a wide variety of data to be accommodated in a general-purpose set of data structures. This generic mechanism for data storage has been used in various information systems such as banking, defense, e-commerce and especially the healthcare domain. However, addressing security on generic databases is a challenging task; to the best of our knowledge, applying security to a generic database has not been addressed yet. Various cryptographic security techniques, such as hashing algorithms and public and private key algorithms, have already been applied to databases. In this paper, we propose an extra layer of security for existing databases through the Negative Database technique. The advantages of the negative database approach on a generic database are demonstrated and emphasized. Correspondingly, the complexity of the proposed algorithm has been computed.
Keywords: Data models; Databases; Diseases; Encryption; Database security; Generic Database; Information Security; Negative Database; Privacy; Security (ID#: 15-8662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346709&isnumber=7346637
Xiaojing An; Haipeng Jia; Yunquan Zhang, "Optimized Password Recovery for Encrypted RAR on GPUs," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 591-598, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.270
Abstract: Files are often compressed for efficiency. RAR is a common archive file format that supports data compression, error recovery and file spanning. RAR uses the classic algorithms SHA-1 hashing and AES symmetric encryption; these two algorithms cannot be cracked directly, so the only method of password recovery is brute force, which is very time-consuming. In this paper, we present an approach using GPUs to speed up the password recovery process. Because the major, time-consuming part of the calculation, SHA-1 hashing, is hard to parallelize, this paper adopts coarse-grained parallelism: one GPU thread is responsible for the validation of one password. We mainly use four optimization methods to optimize this parallel version: asynchronous parallelism between CPU and GPU; reduction of redundant calculations and conditional statements; data locality by using LDS; and register usage optimization. Experimental results show that the final version reaches a 43~57 times speedup on an AMD FirePro W8000 GPU compared to a well-optimized serial version on an Intel Core i5 CPU. Meanwhile, linear performance acceleration is achieved when using multiple GPUs.
Keywords: cryptography; data compression; data reduction; file organisation; graphics processing units; multi-threading; AES algorithm; AMD FirePro W8000 GPU;CPU;GPU thread; Intel Core; LDS; SHA-1 hashing; asynchronous parallel; classic symmetric encryption algorithm; coarse granularity parallel; conditional statement reduction; data compression; data locality; encrypted RAR; error recovery; file compression; file spanning; linear performance acceleration; optimized password recovery; password recovery method; password recovery process; password validation; register optimization; Algorithm design and analysis; Encryption; Force; Graphics processing units; Optimization; Registers; GPGPU; OpenCL; RAR password recovery; performance optimization (ID#: 15-8663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336222&isnumber=7336120
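The candidate-validation loop that the paper distributes across GPU threads can be sketched serially in Python; the SHA-1 check below is a stand-in (real RAR3 key derivation iterates SHA-1 many thousands of times, and the salt, alphabet, and target password here are invented):

```python
import hashlib
from itertools import product

def check(candidate, target_digest, salt=b"rarsalt"):
    # Stand-in for RAR's SHA-1-based key derivation; the real scheme is
    # far more expensive, which is what makes brute force so slow.
    return hashlib.sha1(salt + candidate.encode()).hexdigest() == target_digest

# Pretend we only know the digest of the correct password "ab1".
target = hashlib.sha1(b"rarsalt" + b"ab1").hexdigest()

# Serial brute force over a tiny keyspace; in the paper each GPU thread
# validates one candidate like this, in parallel.
alphabet = "ab01"
found = next(
    ("".join(c) for c in product(alphabet, repeat=3) if check("".join(c), target)),
    None,
)
print(found)  # ab1
```

The loop body is embarrassingly parallel across candidates, which is exactly the coarse granularity the paper exploits.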
Thomas, M.; Panchami, V., "An Encryption Protocol for End-To-End Secure Transmission of SMS," in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, pp. 1-6, 19-20 March 2015. doi: 10.1109/ICCPCT.2015.7159471
Abstract: Short Message Service (SMS) is a process of transmitting short messages over the network. SMS is used in daily-life applications including mobile commerce, mobile banking, and so on. It is a robust communication channel for transmitting information. SMS follows a store-and-forward way of transmitting messages. Private information like passwords, account numbers, passport numbers, and license numbers is also sent through messages. The traditional messaging service does not provide security for the message, since the information contained in the SMS travels as plain text from one mobile phone to another. This paper explains an efficient encryption protocol for securely transmitting a confidential SMS from one mobile user to another, which serves the cryptographic goals of confidentiality, authentication and integrity of the messages. The Blowfish encryption algorithm gives confidentiality to the message, the EasySMS protocol is used to gain authentication, and the MD5 hashing algorithm helps to achieve integrity of the messages. The Blowfish algorithm uses less battery power than other encryption algorithms. The protocol prevents various attacks, including SMS disclosure, replay attacks, man-in-the-middle attacks and over-the-air modification.
Keywords: cryptographic protocols; data integrity; data privacy; electronic messaging; message authentication; mobile radio; Blowfish encryption algorithm; SMS disclosure; encryption protocol; end-to-end secure transmission; man-in-the middle attack; message authentication; message confidentiality; message integrity; mobile phone; over the air modification; replay attack; short message service; Authentication; Encryption; Mobile communication; Protocols; Throughput; Asymmetric Encryption; Cryptography; Encryption; Secure Transmission; Symmetric Encryption (ID#: 15-8664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159471&isnumber=7159156
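The integrity layer of the protocol (an MD5 digest carried with each message) can be sketched with the standard library; the Blowfish confidentiality layer and EasySMS authentication are omitted here (Blowfish needs a third-party library), and the sample message is invented:

```python
import hashlib

def tag(message: bytes) -> bytes:
    # MD5 digest sent alongside the (separately encrypted) SMS payload.
    return hashlib.md5(message).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # Receiver recomputes the digest and compares.
    return hashlib.md5(message).digest() == received_tag

sms = b"transfer 100 to account 42"
t = tag(sms)
print(verify(sms, t))                              # True: message intact
print(verify(b"transfer 900 to account 42", t))    # False: tampering detected
```

Note that an unkeyed MD5 digest alone gives no protection against an active attacker who can recompute it, which is why the paper pairs it with authentication via EasySMS.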
Wahab, Hala B.Abdul; Mohammed, Mohanad A., "Improvement A5/1 Encryption Algorithm Based on Sponge Techniques," in Information Technology and Computer Applications Congress (WCITCA), 2015 World Congress on, pp. 1-5, 11-13 June 2015. doi: 10.1109/WCITCA.2015.7367031
Abstract: The A5/1 stream cipher is used in the Global System for Mobile Communications (GSM) to provide privacy for over-the-air communication. This paper introduces improvements to the A5/1 stream cipher based on a technology called a sponge function. The sponge functions presented in this paper are constructed by combining the advantages of stream ciphers and hash concepts. A new S-box generation is proposed to give dynamic features to the sponge construction, addressing the weakness of the majority function used in A5/1 by providing dynamic behavior in the number of registers and transformations. The experimental results and the comparison between A5/1 and the proposed improvement show that the proposed algorithm increases the randomness of the A5/1 algorithm. The output bit-stream generated by the proposed stream cipher has improved randomness and provides more security for the GSM security algorithm.
Keywords: Ciphers; Encryption; GSM; Heuristic algorithms; Mobile communication; Registers; A5/1; GSM; randomness; s-box; sponge; stream cipher (ID#: 15-8665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7367031&isnumber=7367013
Mathe, S.E.; Boppana, L.; Kodali, R.K., "Implementation of Elliptic Curve Digital Signature Algorithm on an IRIS mote using SHA-512," in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, pp. 445-449, 28-30 May 2015. doi: 10.1109/IIC.2015.7150783
Abstract: Wireless Sensor Networks (WSN) are spatially distributed nodes monitoring physical or environmental conditions such as temperature, pressure, sound and light using sensors. The sensed data is cooperatively passed through a series of nodes in a network to a main base-station (BS) where it is analysed by the user. The data is communicated over a wireless channel between the nodes, and since the wireless channel has minimal security, the data has to be communicated in a secure manner. Different encryption techniques can be applied to transmit the data securely. This work provides an efficient implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) using the SHA-512 algorithm on an IRIS mote. The ECDSA does not actually encrypt the data but provides a means to check the integrity of the received data. If the received data has been modified by an attacker, the ECDSA detects it and signals to the transmitter for retransmission. The SHA-512 algorithm is the hash algorithm used in the ECDSA and is implemented for an 8-bit architecture. The SHA-512 algorithm is chosen as it provides better security than its predecessors.
Keywords: digital signatures; public key cryptography; radio transmitters; telecommunication security; wireless channels; wireless sensor networks; IRIS mote;SHA-512 algorithm; WSN; elliptic curve digital signature algorithm; encryption techniques; main base station; minimum security; received data; retransmission transmitter; wireless channel; wireless sensor networks; word length 8 bit; Algorithm design and analysis; Elliptic curve cryptography; Elliptic curves; Wireless sensor networks; ECDSA; IRIS mote; SHA-512; WSN (ID#: 15-8666)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150783&isnumber=7150576
Zhenjiu Xiao; Yongbin Wang; Zhengtao Jiang, "Research and Implementation of Four-Prime RSA Digital Signature Algorithm," in Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, pp. 545-549, June 28 2015-July 1 2015. doi: 10.1109/ICIS.2015.7166652
Abstract: Large-modulus RSA signature algorithms have been very popular in recent years. We try to improve their operational efficiency. We propose a four-prime Chinese Remainder Theorem (CRT)-RSA digital signature algorithm in this paper. We use the hash function SHA-512 to produce the message digest. We optimize large-number modular exponentiation by combining CRT with the Montgomery algorithm. Our experiments show that our method achieves good performance. The security analysis shows higher signature efficiency with resistance to common attacks.
Keywords: digital signatures; public key cryptography; CRT; Chinese remainder theorem; Montgomery algorithm; big module RSA signature algorithm; four-prime RSA digital signature algorithm; modular exponentiation; security analysis; Algorithm design and analysis; Digital signatures; Encryption; Public key cryptography; Chinese remainder theorem; Digital signature; Four prime; Hash function; Montgomery algorithm; RSA encryption algorithm (ID#: 15-8667)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166652&isnumber=7166553
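The CRT speed-up described above can be illustrated with toy four-prime parameters (the primes and message are invented and far too small for real use; the Montgomery optimization is not shown):

```python
from math import prod

# Toy four-prime CRT-RSA signing: exponentiate modulo each prime
# separately, then recombine with the Chinese Remainder Theorem.
primes = [10007, 10009, 10037, 10039]     # hypothetical small primes
n = prod(primes)
phi = prod(p - 1 for p in primes)
e = 65537
d = pow(e, -1, phi)                       # private exponent

def crt_sign(m: int) -> int:
    # One small exponentiation per prime (the source of the speed-up).
    residues = [pow(m, d % (p - 1), p) for p in primes]
    sig = 0
    for p, r in zip(primes, residues):
        q = n // p
        sig += r * q * pow(q, -1, p)      # CRT recombination
    return sig % n

m = 123456789
assert crt_sign(m) == pow(m, d, n)        # matches plain RSA signing
print(pow(crt_sign(m), e, n) == m)        # True: signature verifies
```

With k primes of bit-length b each, CRT replaces one kb-bit exponentiation with k b-bit ones, which is where the efficiency gain comes from.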
Rahman, L., "Detecting MITM Based on Challenge Request Protocol," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, pp. 625-626, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.135
Abstract: There are various issues with current wireless network technology. A MITM (Man-In-The-Middle) attack is generally carried out by spoofing between a network access point and its clients, and is hard for the client to notice. In this paper, we propose an algorithm, SALT-HASH, to detect MITM attacks without the need for certificates.
Keywords: computer network security; protocols; radio networks; MITM attack; MITM detection; SALT-HASH; challenge request protocol; man-in-the-middle; network access point; spoofing; wireless network technology; Authentication; Certification; Computers; Protocols; Usability; Wireless networks; Challenge request; MITM; SALT (ID#: 15-8668)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273438&isnumber=7273299
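The abstract does not spell out the SALT-HASH construction, but a generic salted challenge-response sketch in the same spirit (the shared secret and challenge size below are invented) looks like this:

```python
import hashlib, hmac, os

# Generic challenge-response (not the authors' exact SALT-HASH protocol):
# the access point proves knowledge of a pre-shared secret by answering a
# fresh random challenge; a MITM without the secret cannot forge it.
SECRET = b"pre-shared-secret"             # hypothetical shared key

def respond(challenge: bytes, secret: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)                # client's fresh random challenge
good = respond(challenge, SECRET)         # legitimate access point
forged = respond(challenge, b"wrong-key") # attacker's attempt

print(hmac.compare_digest(good, respond(challenge, SECRET)))  # True
print(hmac.compare_digest(good, forged))                      # False
```

The fresh random challenge is what defeats replay: an attacker who recorded an earlier response cannot reuse it.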
Ahmad, M.; Pervez, Z.; Byeong Ho Kang; Sungyoung Lee, "O-Bin: Oblivious Binning for Encrypted Data over Cloud," in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 352-357, 24-27 March 2015. doi: 10.1109/AINA.2015.206
Abstract: In recent years, data has been observed growing at a staggering rate. Considering data search as a primitive operation, various solutions have evolved over time to optimize this process on large volumes of data. Other than finding precise similarity, these algorithms aim to find approximate similarities and arrange them into bins. Locality sensitive hashing (LSH) is one such algorithm: it discovers probable similarities prior to calculating the exact similarity, thus enhancing the overall search process in high-dimensional search spaces. Realizing the same strategy for encrypted data, and in a public cloud at that, introduces a few challenges to be resolved before probable similarity discovery. To address these issues and to formalize a strategy similar to LSH, in this paper we formalize a technique, O-Bin, that is designed to work over encrypted data in the cloud. By exploiting existing cryptographic primitives, O-Bin preserves data privacy during the similarity discovery for the binning process. Our experimental evaluation of O-Bin produces results similar to LSH for encrypted data.
Keywords: cloud computing; cryptography; data privacy; information retrieval; LSH; O-Bin; approximate similarities; cryptographic primitives; data growth rate; data privacy; data search; encrypted data; high dimensional search space; locality sensitive hashing; oblivious binning process; probable similarity discovery; public cloud; search process; Cloud computing; Data privacy; Encryption; Outsourcing; Servers; Binning; Cloud; Security and Privacy; Similarity discovery (ID#: 15-8669)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097991&isnumber=7097928
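The plaintext binning idea behind LSH (O-Bin performs the analogous step over encrypted data, which is not shown here) can be sketched with random hyperplanes; the dimension, bit count, and vectors below are invented:

```python
import random

# Random-hyperplane LSH: each hyperplane contributes one bit (which side
# of the plane the vector lies on), and the bit tuple names the bin.
# Similar vectors tend to share a bin, so exact comparison is only needed
# within a bin.
random.seed(7)
DIM, BITS = 8, 4
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh_bin(vec):
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)

a = [1.0, 2.0, 0.5, -1.0, 0.3, 0.0, 1.2, -0.7]
b = [1.1, 1.9, 0.6, -1.1, 0.2, 0.1, 1.3, -0.6]   # near-duplicate of a
print(lsh_bin(a), lsh_bin(b))   # near-duplicates usually share a bin
print(lsh_bin(a) == lsh_bin([2 * x for x in a]))  # True: scaling never flips a bit
```

More hyperplanes give finer bins (fewer false positives) at the cost of more false negatives; multiple hash tables are the usual remedy.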
Grawinkel, M.; Mardaus, M.; Suess, T.; Brinkmann, A., "Evaluation of a Hash-Compress-Encrypt Pipeline for Storage System Applications," in Networking, Architecture and Storage (NAS), 2015 IEEE International Conference on, pp. 355-356, 6-7 Aug. 2015. doi: 10.1109/NAS.2015.7255216
Abstract: Great efforts are made to store data in a secure, reliable, and authentic way in large storage systems. Specialized, system-specific clients help to achieve these goals. Nevertheless, standard tools for hashing, compressing, and encrypting data are often arranged in transparent pipelines. We analyze the potential of Unix shell pipelines with several high-speed and high-compression algorithms that can be used to achieve data security, reduction, and authenticity. Furthermore, we compare the pipelines of standard tools against a home-made pipeline implemented in C++ and show that there is great potential for performance improvement.
Keywords: C++ language; cryptography; data reduction; file organisation; pipeline processing; C++;Unix shell pipelines; data authenticity; data reduction; data security; hash-compress-encrypt pipeline evaluation; high-compression algorithms; high-speed algorithms; performance improvement; standard tools; storage system applications; transparent pipelines; Cryptography; Data processing; Hardware; Pipelines; Reliability; Standards; Throughput (ID#: 15-8670)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7255216&isnumber=7255186
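A chunked hash-plus-compress pipeline of the kind the paper benchmarks can be sketched with the standard library (the encryption stage is omitted because the Python standard library ships no block cipher; the chunk contents are invented):

```python
import hashlib, zlib

def pipeline(chunks):
    # Stream each chunk through a hashing stage (authenticity) and a
    # compression stage (reduction), analogous to `... | sha256sum` and
    # `... | gzip` in a shell pipeline.
    hasher = hashlib.sha256()
    compressor = zlib.compressobj()
    out = []
    for chunk in chunks:
        hasher.update(chunk)
        out.append(compressor.compress(chunk))
    out.append(compressor.flush())
    return b"".join(out), hasher.hexdigest()

data = [b"hello " * 1000, b"storage " * 1000]
compressed, digest = pipeline(data)
print(len(compressed) < sum(len(c) for c in data))    # True: data was reduced
print(zlib.decompress(compressed) == b"".join(data))  # True: lossless
```

Streaming both stages over the same chunks, as here, is what lets a fused pipeline avoid the extra copies that separate shell processes incur.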
Kharod, Seema; Sharma, Nidhi; Sharma, Alok, "An Improved Hashing Based Password Security Scheme Using Salting and Differential Masking," in Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, pp. 1-5, 2-4 Sept. 2015. doi: 10.1109/ICRITO.2015.7359225
Abstract: In this era of digitization, the foremost requirement is information security, which is normally ensured by some authentication process. Password security is a major issue for any authentication process, and past research has proposed techniques like hashing, salting, and honeywords to make the process more secure. Here, we propose a new technique which involves hashing, then salting, and then generation of a crash list formed by a differential masking process. The crash list for any user is stored in the password file. Any security breach of the password file can lead to login attempts by a hacker using one of the passwords from the list. An attempt to log in using any of these crash-list words raises an alarm for the application, and the application can block that user or address. This process performs a unique hashing algorithm with very low time complexity, as most of the steps involve simple binary operations.
Keywords: Authentication; Computer crashes; Computer crime; Databases; Force; Time complexity; Login; authentication; crash list; differential masking; hashing; password security; salting (ID#: 15-8671)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359225&isnumber=7359191
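Standard salting-plus-hashing, the baseline the paper builds on (its differential-masking crash list is the authors' own contribution and is not reproduced here), can be sketched as follows; the iteration count and passwords are invented:

```python
import hashlib, hmac, os

def store(password: str):
    # A random per-user salt defeats precomputed rainbow tables; a slow
    # iterated hash (PBKDF2 here) slows down offline brute force.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest          # what the password file keeps

def check(password: str, salt: bytes, digest: bytes) -> bool:
    probe = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(probe, digest)   # constant-time comparison

salt, digest = store("correct horse")
print(check("correct horse", salt, digest))  # True
print(check("wrong guess", salt, digest))    # False
```

The paper's crash list adds an intrusion-detection layer on top of this: decoy entries in the password file that trip an alarm when a breached copy is replayed.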
Chankasame, W.; Wimol San-Um, "A Chaos-Based Keyed Hash Function for Secure Protocol and Messege Authentication in Mobile Ad Hoc Wireless Networks," in Science and Information Conference (SAI), 2015, pp. 1357-1364, 28-30 July 2015. doi: 10.1109/SAI.2015.7237319
Abstract: The design of communication protocols in Mobile Ad hoc Networks (MANETs) is challenging due to limited wireless transmission range, node mobility, limited power resources, and limited physical security. The advantages of MANETs include simple and fast deployment, robustness, and adaptive, self-organizing networks. Nonetheless, routing protocols are important operations for communication among wireless devices. Assuring secure routing protocols is challenging since MANET wireless networks are highly vulnerable to security attacks. Most traditional routing protocols and message authentication designs do not address security, and are mainly based on a mutual trust relationship among nodes. This paper therefore proposes a new chaos-based keyed hash function that can be used for communication protocols in MANETs. The proposed chaotic map realizes an absolute-value nonlinearity, which offers robust chaos over wide parameter spaces, i.e., a high degree of randomness through chaoticity measurements using the Lyapunov exponent. The proposed keyed hash function structure is compact through the use of a single-stage chaos-based topology. Hash function operations involve an initial stage, where the chaotic map accepts the input message and initial conditions, and a hashing stage, where alterable-length hash values are generated iteratively. Hashing performance is evaluated in terms of original message condition changes, statistical analyses, and collision analyses. The results show that the mean changed probabilities are very close to 50%, and the mean changed bit number is also close to half of the hash value length. The proposed keyed hash function enhances collision resistance compared to typical MD5 and SHA-1, and is faster than other, more complicated chaos-based approaches.
Keywords: cryptographic protocols; mobile ad hoc networks; routing protocols; statistical analysis; telecommunication security; Lyapunov exponent; MANET; MANET wireless networks; chaos based keyed hash function; chaoticity measurements; collision analyses; communication protocols; keyed hash function structure; limited physical security; limited power resources; message authentication; mobile ad hoc networks; mobile ad hoc wireless networks; node mobility; routing protocols; secure protocol; secure routing protocols; security attacks; statistical analyses; wireless devices; wireless transmission; Algorithm design and analysis; Chaotic communication; Mobile ad hoc networks; Protocols; Security; Clustering; Coverage and connectivity; Mobility Management; Social networks; Topology control; synchronization (ID#: 15-8672)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237319&isnumber=7237120
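A toy keyed hash driven by the logistic map illustrates the general chaos-based approach (this is not the paper's absolute-value map or its sponge-like single-stage structure; the map parameter, round count, and keys are invented, and it is not cryptographically secure):

```python
def chaos_hash(message: bytes, key: float, rounds: int = 16) -> int:
    # The key sets the chaotic initial condition; each message byte is
    # absorbed into the state, which is then iterated under the logistic
    # map before one output byte is squeezed out.
    x = key
    out = 0
    for byte in message:
        x = (x + byte / 255.0) % 1.0 or 0.3   # absorb one byte (avoid x=0)
        for _ in range(rounds):
            x = 3.99 * x * (1.0 - x)          # chaotic logistic-map iteration
        out = (out << 8 | int(x * 255)) & (2**64 - 1)  # squeeze 8 bits
    return out

h1 = chaos_hash(b"hello world", key=0.123)
h2 = chaos_hash(b"hello world", key=0.124)    # tiny key change
print(hex(h1), hex(h2))
print(chaos_hash(b"hello world", 0.123) == h1)  # True: deterministic
```

Sensitivity to initial conditions (a positive Lyapunov exponent) is what makes a tiny key or message change scramble the whole digest, which is the property the paper quantifies.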
Shimbre, N.; Deshpande, P., "Enhancing Distributed Data Storage Security for Cloud Computing Using TPA and AES Algorithm," in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, pp. 35-39, 26-27 Feb. 2015. doi: 10.1109/ICCUBEA.2015.16
Abstract: The cloud computing model is very attractive, especially for business people, because of features such as ease of management, device independence, and location independence. But the cloud model comes with many security issues: business users keep crucial information in the cloud, so data security is a crucial issue given the probability of hacking and unauthorised access, and availability is also a major concern. This paper discusses file distribution and the SHA-1 technique. When a file is distributed, its data is segregated across many servers, so the need for data security arises. Every block of the file contains its own hash code; using the hash code enhances the user authentication process, so that only an authorized person can access the data. The data is encrypted using the Advanced Encryption Standard, so it is successfully and securely stored in the cloud. A third-party auditor is used for public auditing. This paper discusses the handling of security issues such as fast error localization, data integrity, and data security. The proposed design allows users to audit the data with lightweight communication and computation cost. Analysis shows that the proposed system is highly efficient against malicious data modification attacks and server colluding attacks. Performance and extensive security analysis show that the proposed systems are provably secure and highly efficient.
Keywords: business data processing; cloud computing; cryptography; data integrity; storage management; AES algorithm;SHA-1 technique; TPA algorithm; advanced encryption standard; business peoples; cloud computing model; data integrity; data security issues; distributed data storage security; fast error localization; file distribution technique; hacking; hash code; malicious data modification attack; public auditing; server colluding attack; third party auditor; unauthorised access; user authentication process; Cloud computing; Computational modeling; Encryption; Memory; Servers; CSP and TPA; Cloud security; Encryption; Hash code (ID#: 15-8673)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155804&isnumber=7155781
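The per-block hash codes used for fast error localization can be sketched as follows (the block size and file contents are invented; the AES encryption of each block and the third-party-auditor protocol are omitted):

```python
import hashlib

BLOCK = 1024  # hypothetical block size

def block_hashes(data: bytes):
    # Split the file into blocks and attach a SHA-1 hash code to each,
    # as in the paper's distributed storage scheme.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return blocks, [hashlib.sha1(b).hexdigest() for b in blocks]

def audit(blocks, hashes):
    """Auditor-style check: return the index of the first corrupted block."""
    for i, (b, h) in enumerate(zip(blocks, hashes)):
        if hashlib.sha1(b).hexdigest() != h:
            return i                      # fast error localization
    return None                           # all blocks intact

blocks, hashes = block_hashes(b"x" * 3000)
print(audit(blocks, hashes))              # None: all blocks intact
blocks[1] = b"tampered"                   # simulate malicious modification
print(audit(blocks, hashes))              # 1: corrupted block localized
```

Because each block is checked independently, a modification is localized to a single block instead of merely invalidating a whole-file checksum.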
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
![]() |
Human Trust 2015 |
Human behavior is complex and that complexity creates a tremendous problem for cybersecurity. The works cited here address a range of human trust issues related to behaviors, deception, enticement, sentiment and other factors difficult to isolate and quantify. For the Science of Security community, human behavior is a Hard Problem. The work cited here was presented in 2015.
Adeka, M.; Shepherd, S.; Abd-Alhameed, R.; Ahmed, N.A.S., "A Versatile and Ubiquitous Secret Sharing," in Internet Technologies and Applications (ITA), 2015, pp. 466-471, 8-11 Sept. 2015. doi: 10.1109/ITechA.2015.7317449
Abstract: The Versatile and Ubiquitous Secret Sharing System is a cloud data repository secure-access and web-based authentication scheme. It is designed to implement the sharing, distribution and reconstruction of sensitive secret data that could compromise the functioning of an organisation if leaked to unauthorised persons. This is carried out in a secure web environment, globally. It is a threshold secret sharing scheme, designed to extend the human trust security perimeter. The system could be adapted to serve as a cloud data repository and secure data communication scheme. A secret sharing scheme is a method by which a dealer distributes shares of secret data to trustees, such that only authorised subsets of the trustees can reconstruct the secret. This paper gives a brief summary of the layout and functions of a 15-page secure server-based website prototype, the main focus of a PhD research effort titled 'Cryptography and Computer Communications Security: Extending the Human Security Perimeter through a Web of Trust'. The prototype, which has been successfully tested, has globalised the distribution and reconstruction processes.
Keywords: Internet; cloud computing; message authentication; trusted computing; ubiquitous computing; AdeVersUS3; Adekas Versatile and Ubiquitous Secret Sharing System; Web based authentication scheme; cloud data repository secure access; human trust security perimeter; secure data communication; secure server-based Website prototype; threshold secret sharing scheme; Computer science; Cryptography; Electrical engineering; IP networks; Prototypes; Radiation detectors; Servers; (k, n)-threshold; authentication; authorised user; cloud data repository; combiner; cryptography; dealer or distributor; human security perimeter; interpolation; key management; participants (trustees); secret sharing (ID#: 15-8540)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7317449&isnumber=7317353
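The classic realization of the (k, n)-threshold scheme the abstract describes is Shamir's secret sharing; a toy sketch follows (the field modulus, secret, and parameters are invented, and the paper's web-based distribution and authentication layers are not shown):

```python
import random

# Minimal (k, n)-threshold Shamir sharing over a prime field: the secret is
# the constant term of a random degree-(k-1) polynomial; any k shares
# reconstruct it by Lagrange interpolation at x = 0.
P = 2**61 - 1          # a Mersenne prime used as the field modulus
random.seed(1)

def split(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]   # one share per trustee

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456, k=3, n=5)
print(reconstruct(shares[:3]) == 123456)   # True: any 3 of 5 shares suffice
print(reconstruct(shares[2:5]) == 123456)  # True
```

Fewer than k shares reveal nothing about the secret, which is what lets the dealer extend trust across unauthorised-subset boundaries.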
Dawson, S.; Crawford, C.; Dillon, E.; Anderson, M., "Affecting Operator Trust in Intelligent Multirobot Surveillance Systems," in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 3298-3304, 26-30 May 2015. doi: 10.1109/ICRA.2015.7139654
Abstract: Homeland safety and security will increasingly depend upon autonomous unmanned vehicles as a method of assessing and maintaining situational awareness. As autonomous team algorithms evolve toward requiring less human intervention, it may be that having an “operator-in-the-loop” becomes the ultimate goal in utilizing autonomous teams for surveillance. However studies have shown that trust plays a factor in how effectively an operator can work with autonomous teammates. In this work, we study mechanisms that look at autonomy as a system and not as the sum of individual actions. First, we conjecture that if the operator understands how the team autonomy is designed that the user would better trust that the system will contribute to the overall goal. Second, we focus on algorithm input criteria as being linked to operator perception and trust. We focus on adding a time-varying spatial projection of areas in the ROI that have been unseen for more than a set duration (STEC). Studies utilize a custom test bed that allows users to interact with a surveillance team to find a target in the region of interest. Results show that while algorithm training had an adverse effect, projecting salient team/surveillance state had a statistically significant impact on trust and did not negatively affect workload or performance. This result may point at a mechanism for improving trust through visualizing states as used in the autonomous algorithm.
Keywords: autonomous aerial vehicles; mobile robots; multi-robot systems; national security; surveillance; ROI; adverse effect; autonomous team algorithms; autonomous teammates; autonomous unmanned vehicles; homeland safety; homeland security; intelligent multirobot surveillance system; operator in the loop; operator perception; operator trust; region of interest; salient team projection; situational awareness; state visualization; surveillance state projection; team autonomy; time-varying spatial projection; Automation; Robots; Standards; Streaming media; Surveillance; Training; User interfaces (ID#: 15-8541)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7139654&isnumber=7138973
Kuntze, N.; Rudolph, C.; Brisbois, G.B.; Boggess, M.; Endicott-Popovsky, B.; Leivesley, S., "Security vs. Safety: Why Do People Die Despite Good Safety?," in Integrated Communication, Navigation, and Surveillance Conference (ICNS), 2015, pp. A4-1-A4-10, 21-23 April 2015. doi: 10.1109/ICNSURV.2015.7121213
Abstract: This paper will show in detail the differences between safety and security. An argument is made for new system design requirements based on a threat sustainable system (TSS), drawing on threat scanning, flexibility, command and control, systems of systems, human factors and population dependencies. Principles of sustainability used in historical design processes are considered alongside the complex changes of technology and emerging threat actors. The paper recognises that technologies and development methods for safety do not work for security: safety has the notion of one- or two-event protection, but cyber-attacks are multi-event situations. The paper considers the behaviour of interconnected systems and modern systems' requirements for national sustainability. System security principles for the sustainability of critical systems are considered in relation to failure, security architecture, quality of service, authentication and trust, and communication of failure to operators. Design principles for operators are discussed, along with recognition of human-factors failures. These principles are then applied as the basis for recommended changes in systems design, with system control dominating the hierarchy of design decisions but with harmonization of safety requirements up to the level of sustaining security. These new approaches are discussed as the basis for future research on adaptive flexible systems that can sustain attacks and the uncertainty of fast-changing technology.
Keywords: national security; protection; safety systems; security of data; sustainable development; authentication; cyberattacks; failure; national sustainability; protection; safety; system security principles; threat scanning; threat sustainable system; trust; Buildings; Control systems; Safety; Software; Terrorism; Transportation (ID#: 15-8542)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7121213&isnumber=7121207
Bianchi, A.; Corbetta, J.; Invernizzi, L.; Fratantonio, Y.; Kruegel, C.; Vigna, G., "What the App is That? Deception and Countermeasures in the Android User Interface," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 931-948, 17-21 May 2015. doi: 10.1109/SP.2015.62
Abstract: Mobile applications are part of the everyday lives of billions of people, who often trust them with sensitive information. These users identify the currently focused app solely by its visual appearance, since the GUIs of the most popular mobile OSes do not show any trusted indication of the app origin. In this paper, we analyze in detail the many ways in which Android users can be confused into misidentifying an app, thus, for instance, being deceived into giving sensitive information to a malicious app. Our analysis of the Android platform APIs, assisted by an automated state-exploration tool, led us to identify and categorize a variety of attack vectors (some previously known, others novel, such as a non-escapable full-screen overlay) that allow a malicious app to surreptitiously replace or mimic the GUI of other apps and mount phishing and click-jacking attacks. Limitations in the system GUI make these attacks significantly harder to notice than on a desktop machine, leaving users completely defenseless against them. To mitigate GUI attacks, we have developed a two-layer defense. To detect malicious apps at the market level, we developed a tool that uses static analysis to identify code that could launch GUI confusion attacks. We show how this tool detects apps that might launch GUI attacks, such as ransomware programs. Since these attacks are meant to confuse humans, we have also designed and implemented an on-device defense that addresses the underlying issue of the lack of a security indicator in the Android GUI. We add such an indicator to the system navigation bar; this indicator securely informs users about the origin of the app with which they are interacting (e.g., the PayPal app is backed by "PayPal, Inc."). We demonstrate the effectiveness of our attacks and the proposed on-device defense with a user study involving 308 human subjects, whose ability to detect the attacks increased significantly when using a system equipped with our defense.
Keywords: Android (operating system);graphical user interfaces; invasive software; program diagnostics; smart phones; Android platform API; Android user interface; GUI confusion attacks; app origin; attack vectors; automated state-exploration tool; click-jacking attacks; desktop machine; malicious app; mobile OS; mobile applications; on-device defense; phishing attacks; ransomware programs; security indicator; sensitive information; static analysis; system navigation bar; trusted indication; two-layer defense; visual appearance; Androids; Graphical user interfaces; Humanoid robots; Navigation; Security; Smart phones; mobile-security; static-analysis; usable-security (ID#: 15-8543)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163069&isnumber=7163005
Anneken, Mathias; Fischer, Yvonne; Beyerer, Jurgen, "Evaluation and Comparison of Anomaly Detection Algorithms in Annotated Datasets from the Maritime Domain," in SAI Intelligent Systems Conference (IntelliSys), 2015, pp. 169-178, 10-11 Nov. 2015. doi: 10.1109/IntelliSys.2015.7361141
Abstract: Anomaly detection supports human decision makers in their surveillance tasks to ensure security. To gain the trust of the operator, it is important to develop a robust system which gives the operator enough insight to make a rational choice about future steps. In this work, the maritime domain is investigated. Here, anomalies occur in trajectory data, so a normal model for the trajectories has to be estimated. Despite the goal of anomaly detection in real-life operations, until today mostly simulated anomalies have been evaluated to measure the performance of different algorithms. Therefore, an annotation tool is developed to provide a ground truth on a non-simulated dataset. The annotated data is used to compare different algorithms with each other. For the given dataset, first experiments are conducted with the Gaussian Mixture Model (GMM) and the Kernel Density Estimator (KDE). For the evaluation of the algorithms, precision, recall, and F1-score are compared.
Keywords: Clustering algorithms; Gaussian mixture model; Hidden Markov models; Intelligent systems; Sea measurements; Surveillance; Trajectory; Anomaly detection; Gaussian Mixture Model; Kernel Density Estimation; Maritime domain (ID#: 15-8544)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7361141&isnumber=7361074
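A one-dimensional stand-in for the density-based scoring the paper evaluates may help (the paper fits GMM/KDE models to vessel trajectories; the single Gaussian and the speed readings below are invented for illustration):

```python
from statistics import NormalDist, mean, stdev

# Fit a single Gaussian to a toy "normal" feature (vessel speed in knots)
# and score new observations by density: low density under the normal
# model means anomalous.
speeds = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 9.7, 10.4]  # hypothetical data
model = NormalDist(mean(speeds), stdev(speeds))

def anomaly_score(x):
    return -model.pdf(x)        # lower density = higher anomaly score

print(anomaly_score(25.0) > anomaly_score(10.0))  # True: 25 kn is anomalous
```

A GMM generalizes this to a weighted sum of several Gaussians, and a KDE replaces the parametric fit with one kernel per training point; both score observations by the same low-density criterion.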
Renaud, K.; Hoskins, A.; von Solms, R., "Biometric Identification: Are We Ethically Ready?," in Information Security for South Africa (ISSA), 2015, pp. 1-8, 12-13 Aug. 2015. doi: 10.1109/ISSA.2015.7335051
Abstract: “Give us your fingerprint, your Iris print, your photograph. Trust us; we want to make your life easier!” This is the implicit message behind many corporations' move towards avid collection and use of biometrics, and they expect us to accept their assurances at face value. Despite their attempts to sell this as a wholly philanthropic move, the reality is that it is often done primarily to ease their own processes or to increase profit. They offer no guarantees, allow no examination of their processes, and treat detractors with derision or sanction. The current biometric drive runs counter to emergent wisdom about the futility of a reductionist approach to humanity. Ameisen et al. (2007) point out that the field of integrative biology is moving towards a more holistic approach, while biometrics appear to be moving in the opposite direction, reducing humans to sets of data with cartographic locators: a naïve over-simplification of the uniqueness that characterizes humanity. They argue that biometrics treat the body as an object to be measured, but in fact the body is a subject, the instantiation of the individual's self, subject to vulnerability and mortality. Treating it merely as a measured and recorded object denies the body's essential right to dignity. Here we explore various concerning aspects of the global move towards widespread biometric use.
Keywords: biometrics (access control); biometric identification; cartographic locators; holistic approach; integrative biology; reductionist approach; Databases; Fingerprint recognition; Iris recognition; Physiology; biometrics; ethics; security (ID#: 15-8545)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335051&isnumber=7335039
Khatoun, R.; Gut, P.; Doulami, R.; Khoukhi, L.; Serhrouchni, A., "A Reputation System for Detection of Black Hole Attack in Vehicular Networking," in Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, pp. 1-5, 5-7 Aug. 2015. doi: 10.1109/SSIC.2015.7245328
Abstract: In recent years, vehicular networks have drawn special attention for their significant potential to play an important role in the future smart city by improving traffic efficiency and guaranteeing road safety. Safety in vehicular networks is crucial because it affects human lives. It is essential that vital information cannot be modified or deleted by an attacker, and that the responsibility of drivers can be determined while maintaining their privacy. The black hole attack is a well-known and critical threat to network availability in vehicular environments. In this paper we present a new reputation system for vehicular networks, in which each vehicle reports its packet transmissions with its neighbours and the Trust Authority (TA) classifies the reliability of players based on these reports. This reputation system can quickly detect malicious players in the network, prevent the damage caused by the black hole attack, and improve the effectiveness of the routing process.
Keywords: mobile radio; road safety; smart cities; telecommunication network routing; black hole attack detection; malicious player detection; packet transmission; reputation system; road safety; routing process; smart city; trust authority; vehicular networking; Ad hoc networks; Packet loss; Protocols; Routing; Vehicles; Black hole attack; Intrusion detection; Smart City; Vehicular Networking (ID#: 15-8546)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245328&isnumber=7245317
Gazis, V.; Goertz, M.; Huber, M.; Leonardi, A.; Mathioudakis, K.; Wiesmaier, A.; Zeiger, F., "Short Paper: IoT: Challenges, Projects, Architectures," in Intelligence in Next Generation Networks (ICIN), 2015 18th International Conference on, pp. 145-147, 17-19 Feb. 2015. doi: 10.1109/ICIN.2015.7073822
Abstract: The Internet of Things (IoT) is a socio-technical phenomenon with the power to disrupt our society, as the Internet did before it. IoT promises the interconnection of a myriad of things providing services to humans and machines. It is expected that by 2020 tens of billions of things will be deployed worldwide. It has become evident that the traditional centralized computing and analytics approach does not provide a sustainable model for this new type of data. A new kind of architecture is needed as a scalable and trusted platform underpinning the expansion of IoT. The data gathered by things will often be noisy, unstructured and real-time, requiring a decentralized structure for storing and analysing the vast amount of data. In this paper, we provide an overview of current IoT challenges and give a summary of funded IoT projects in Europe, the USA, and China. Additionally, we provide detailed insights into three IoT architectures stemming from such projects.
Keywords: Internet of Things; internetworking; China; Europe; Internet of Things; IoT project; USA; centralized computing; data gathering; decentralized structure; sociotechnical phenomena; Computer architecture; Europe; Interoperability; Reliability; Security; Semantics; Technological innovation (ID#: 15-8547)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073822&isnumber=7073795
Bullee, Jan-Willem H.; Montoya, Lorena; Pieters, Wolter; Junger, Marianne; Hartel, Pieter H., "Regression Nodes: Extending Attack Trees with Data from Social Sciences," in Socio-Technical Aspects in Security and Trust (STAST), 2015 Workshop on, pp. 17-23, 13-13 July 2015. doi: 10.1109/STAST.2015.11
Abstract: In the field of security, attack trees are often used to assess security vulnerabilities probabilistically in relation to multi-step attacks. The nodes are usually connected via AND-gates, where all children must be executed, or via OR-gates, where only one action is necessary for the attack step to succeed. This logic, however, is not suitable for including human interaction such as that of social engineering, because the attacker may combine different persuasion principles to different degrees, with different associated success probabilities. Experimental results in this domain are typically represented by regression equations rather than logical gates. This paper therefore proposes an extension to attack trees involving a regression-node, illustrated by data obtained from a social engineering experiment. By allowing the annotation of leaf nodes with experimental data from social science, the regression-node enables the development of integrated socio-technical security models.
Keywords: Context; Fault trees; Logic gates; Open wireless architecture; Regression tree analysis; Safety; Security (ID#: 15-8548)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351972&isnumber=7351960
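The contrast the abstract draws, Boolean gates versus a regression equation over persuasion principles, can be illustrated with a toy calculation. The logistic form and all coefficients below are invented for illustration; they are not taken from the authors' experiment:

```python
import math

def and_gate(ps):
    """AND-gate: all child steps must succeed."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(ps):
    """OR-gate: at least one child step must succeed."""
    fail = 1.0
    for p in ps:
        fail *= 1.0 - p
    return 1.0 - fail

def regression_node(weights, intercept, levels):
    """Success probability from a logistic regression over the degrees
    to which persuasion principles (e.g. authority, scarcity) are applied."""
    z = intercept + sum(w * x for w, x in zip(weights, levels))
    return 1.0 / (1.0 + math.exp(-z))

# A social-engineering step modelled by regression rather than a gate
# (hypothetical coefficients for two persuasion principles):
p_phish = regression_node([1.2, 0.8], -2.0, [1.0, 0.5])
p_attack = and_gate([p_phish, 0.9])   # phishing AND a technical exploit
print(p_attack)
```

The point of the regression node is that varying the persuasion levels moves the leaf probability continuously, which a fixed AND/OR annotation cannot express.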
Parkin, Simon; Epili, Sanket, "A Technique for Using Employee Perception of Security to Support Usability Diagnostics," in Socio-Technical Aspects in Security and Trust (STAST), 2015 Workshop on, pp. 1-8, 13-13 July 2015. doi: 10.1109/STAST.2015.9
Abstract: Problems of unusable security in organisations are widespread, yet security managers tend not to listen to employees' views on how usable or beneficial security controls are for them in their roles. Here we provide a technique to drive management of security controls using end-user perceptions of security as supporting data. Perception is structured at the point of collection using Analytic Hierarchy Process techniques, where diagnostic rules filter user responses to direct remediation activities, based on recent research in the human factors of information security. The rules can guide user engagement, and support identification of candidate controls to maintain, remove, or learn from. The methodology was incorporated into a prototype dashboard tool, and a preliminary validation conducted through a walk-through consultation with a security manager in a large organisation. It was found that user feedback and suggestions would be useful if they can be structured for review, and that categorising responses would help when revisiting security policies and identifying problem controls.
Keywords: Analytic hierarchy process; Human factors; Information security; Interviews; Measurement; Usability; analytic hierarchy process; human factors of security; information security; security policies (ID#: 15-8549)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351970&isnumber=7351960
Pirinen, R.; Rajamaki, J., "Mechanism of Critical and Resilient Digital Services for Design Theory," in Computer Science, Computer Engineering, and Social Media (CSCESM), 2015 Second International Conference on, pp. 90-95, 21-23 Sept. 2015. doi: 10.1109/CSCESM.2015.7331874
Abstract: This study discusses design theory with a focus on critical digital information services that support collective service-design targets in international cross-border environments. Designing security for such digital systems has been challenging because of the technologies that make up systems for digital information sharing. More specifically, new advances in hardware, networking, information, and human interfaces demand new ways of thinking about how to design, build, evaluate, and conceptualize a resilient digital service or system. This study focuses on the methodological contribution of design theory to resilient digital systems and serves as a continuation of existing studies.
Keywords: critical infrastructures; information services; security of data; border environments; collective service design; critical digital information services; critical infrastructure protection; cyber security; design theory; digital information sharing; digital systems security; hardware; human interface; international environments; networking; resilient digital services; resilient digital systems; Security; Servers; critical infrastructure protection; cyber security; design science research; design theory; digital systems; resilient digital services; trust building (ID#: 15-8550)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7331874&isnumber=7331817
Papadakis, M.; Yue Jia; Harman, M.; Le Traon, Y., "Trivial Compiler Equivalence: A Large Scale Empirical Study of a Simple, Fast and Effective Equivalent Mutant Detection Technique," in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol.1, pp. 936-946, 16-24 May 2015. doi: 10.1109/ICSE.2015.103
Abstract: Identifying equivalent mutants remains the largest impediment to the widespread uptake of mutation testing. Despite being researched for more than three decades, the problem remains. We propose Trivial Compiler Equivalence (TCE), a technique that exploits readily available compiler technology to address this long-standing challenge. TCE is directly applicable to real-world programs and can imbue existing tools with the ability to detect equivalent mutants and a special form of useless mutants called duplicated mutants. We present a thorough empirical study using 6 large open source programs, several orders of magnitude larger than those used in previous work, and 18 benchmark programs with hand-analysed equivalent mutants. Our results reveal that, on large real-world programs, TCE can discard more than 7% of all mutants as equivalent and more than 21% as duplicated. A human-based equivalence verification reveals that TCE can detect approximately 30% of all existing equivalent mutants.
Keywords: formal verification; program compilers ;program testing; TCE technique; duplicated mutants; human-based equivalence verification; mutant detection technique; mutation testing; trivial compiler equivalence technology; Benchmark testing; Java; Optimization; Scalability; Syntactics (ID#: 15-8551)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194639&isnumber=7194545
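TCE itself compares the machine code a real compiler (e.g. gcc) emits for the original program and for each mutant. As a rough, hedged analogue, the sketch below applies the same idea to CPython bytecode, whose compiler folds constant expressions, so an equivalent mutant of a constant expression compiles to identical code:

```python
def tce_equivalent(src_a, src_b):
    """Declare two snippets equivalent iff the compiler emits identical
    code (and constant pool) for both -- the core TCE idea."""
    ca = compile(src_a, "<mutant>", "eval")
    cb = compile(src_b, "<mutant>", "eval")
    return ca.co_code == cb.co_code and ca.co_consts == cb.co_consts

# "1 + 2" mutated to "3": equivalent, and constant folding exposes it.
print(tce_equivalent("1 + 2", "3"))      # -> True
# "1 + 2" mutated to "1 - 2": genuinely different behaviour.
print(tce_equivalent("1 + 2", "1 - 2"))  # -> False
```

As in the paper, the test is sound but incomplete: identical compiled code proves equivalence, while differing code proves nothing either way.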
Mitchell, M.; Patidar, R.; Saini, M.; Singh, P.; An-I Wang; Reiher, P., "Mobile Usage Patterns and Privacy Implications," in Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, pp. 457-462, 23-27 March 2015. doi: 10.1109/PERCOMW.2015.7134081
Abstract: Privacy is an important concern for mobile computing. Users might not understand the privacy implications of their actions and therefore not alter their behavior depending on where they move, when they do so, and who is in their surroundings. Since empirical data about the privacy behavior of users in mobile environments is limited, we conducted a survey study of ~600 users recruited from Florida State University and Craigslist. Major findings include: (1) People often exercise little caution preserving privacy in mobile computing environments; they perform similar computing tasks in public and private. (2) Privacy is orthogonal to trust; people tend to change their computing behavior more around people they know than strangers. (3) People underestimate the privacy threats of mobile apps, and comply with permission requests from apps more often than operating systems. (4) Users' understanding of privacy is different from that of the security community, suggesting opportunities for additional privacy studies.
Keywords: data privacy; human factors; mobile computing; operating systems (computers);Craigslist; Florida State University; empirical data; mobile applications; mobile computing environments; mobile usage patterns; operating systems; permission requests; privacy threats; security community; user computing behavior; users privacy behavior; Encryption; IEEE 802.11 Standards; Mobile communication; Mobile computing; Mobile handsets; Portable computers; Privacy; human factors;mobile computing; privacy; security (ID#: 15-8552)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134081&isnumber=7133953
Boyes, H., "Best Practices in an ICS Environment," in Cyber Security for Industrial Control Systems, pp. 1-36, 2-3 Feb. 2015. doi: 10.1049/ic.2015.0006
Abstract: Presents a collection of slides covering the following topics: software trustworthiness; insecure building control system; prison system glitch; cyber security; ICS; vulnerability assessment; dynamic risks handling; situational awareness; human factor; industrial control systems and system connectivity.
Keywords: control engineering computing; human factors; industrial control; security of data; trusted computing; ICS; cybersecurity; dynamic risks handling; human factor; industrial control systems; insecure building control system; prison system glitch; situational awareness; software trustworthiness; system connectivity; vulnerability assessment (ID#: 15-8553)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332808&isnumber=7137498
Aggarwal, A.; Kumaraguru, P., "What They Do in Shadows: Twitter Underground Follower Market," in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, pp. 93-100, 21-23 July 2015. doi: 10.1109/PST.2015.7232959
Abstract: Internet users and businesses increasingly use online social networks (OSN) to drive audience traffic and increase their popularity. In order to boost social presence, OSN users need to increase the visibility and reach of their online profile through metrics such as Facebook likes, Twitter followers, Instagram comments and Yelp reviews. For example, an increase in Twitter followers not only improves the audience reach of the user but also boosts perceived social reputation and popularity. This has created scope for an underground market that provides followers, likes, comments, etc. via a network of fraudulent and compromised accounts and various collusion techniques. In this paper, we landscape the underground markets that provide Twitter followers by studying their basic building blocks: merchants, customers and phony followers. We characterize the services provided by merchants to understand their operational structure and market hierarchy. Twitter underground markets can operate using a premium monetary scheme or other incentivized freemium schemes. We find that the freemium market has an oligopoly structure, with a few merchants being the market leaders. We also show that merchant popularity has no correlation with the quality of service the merchant provides to its customers. Our findings also shed light on the characteristics and quality of market customers and of the phony followers provided by the underground market. We draw a comparison between legitimate users and phony followers and identify key features that separate the two. With the help of these differentiating features, we build a supervised learning model that predicts suspicious following behaviour with an accuracy of 89.2%.
Keywords: human factors; learning (artificial intelligence); oligopoly; social networking (online);Facebook likes; Instagram comments; OSN users; Twitter followers; Yelp reviews; customers; fraudulent network; incentivized freemium schemes; market hierarchy; market leaders; merchant popularity; oligopoly structure; online profile; online social networks; operational structure; perceived social popularity; perceived social reputation; phony followers; premium monetary scheme; quality of service; social presence; supervised learning model; suspicious following behaviour prediction; underground follower market; Business; Data collection; Facebook; Measurement; Media; Quality of service; Twitter (ID#: 15-8554)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232959&isnumber=7232940
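The paper's classification step can be sketched with a minimal supervised model. The features, account data, and the choice of logistic regression below are all hypothetical stand-ins for the authors' unnamed model and real Twitter data:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Minimal batch-gradient logistic regression (illustrative only)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30.0, 30.0)   # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    """True = flagged as a phony follower."""
    return (X @ w + b) > 0.0

# Hypothetical account features: [followers/following ratio, tweets per day].
# Phony followers follow many, are followed by few, and tweet rarely.
rng = np.random.default_rng(1)
legit = np.column_stack([rng.uniform(0.5, 3.0, 100), rng.uniform(1.0, 10.0, 100)])
phony = np.column_stack([rng.uniform(0.0, 0.2, 100), rng.uniform(0.0, 0.5, 100)])
X = np.vstack([legit, phony])
y = np.array([0] * 100 + [1] * 100)
w, b = train_logistic(X, y)
acc = float((predict(w, b, X) == y).mean())
print(f"training accuracy: {acc:.3f}")
```

On real data the accuracy would be estimated on a held-out set, as the paper's 89.2% figure implies.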
Nishioka, D.; Murayama, Y., "The Influence of User Attribute onto the Factors of Anshin for Online Shopping Users," in System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 382-391, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.53
Abstract: In this research, we investigate users' subjective sense of security, which we call Anshin in Japanese. The research goal is to create a guideline of Anshin in information security for users. Traditional studies on security are based on the assumption that users feel a sense of security when given objectively secure systems. We conducted a Web survey using a questionnaire with 920 subjects. In this paper, we report how user attributes (age, sex, knowledge, experience) influence the factors of Anshin for online shopping users. As a result, we show that women and the low-experience group feel Anshin when provided with secure systems and services, whereas the other groups do not.
Keywords: Internet; human factors; retail data processing; security of data; Anshin; Japanese language; Web survey; information security; low-experience level group; online shopping users; secure services; secure systems; user age; user attributes; user experience; user knowledge; user security knowledge level; user sex; user subjective sense-of-security; woman group; Companies; Information security; Information technology; Psychology; Usability; Anshin; Factor analysis; Questionnaire; Trust (ID#: 15-8555)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069702&isnumber=7069647
Bhargava, M.; Sheikh, K.; Mai, K., "Robust True Random Number Generator Using Hot-Carrier Injection Balanced Metastable Sense Amplifiers," in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, pp. 7-13, 5-7 May 2015. doi: 10.1109/HST.2015.7140228
Abstract: Hardware true random number generators are an essential functional block in many secure systems. Current designs that use bi-stable elements balanced in the metastable region are capable of both high randomness and high bitrate. However, these designs require extensive support circuits to maintain balance in the metastable region, complex built-in self test loops to configure the support circuits, and suffer from sensitivity to environmental conditions. We propose a true random number generator design based around sense amplifier circuits that are balanced in the metastable region using hot carrier injection, rather than complex support circuits. Further, we show an architecture that maintains high entropy output across a range of ± 20% voltage variation. Experimental results from a prototype design in a 65nm bulk CMOS process demonstrate the efficacy of the proposed TRNG architecture, which passes all NIST tests.
Keywords: CMOS integrated circuits; amplifiers; built-in self test; entropy; hot carriers; integrated circuit design; integrated circuit testing; random number generation; NIST tests; TRNG architecture; bi-stable elements; built-in self test loops; bulk CMOS process; entropy output; environmental conditions; functional block; hardware true random number generators; hot carrier injection; metastable region; metastable sense amplifiers; robust true random number generator design; secure systems; sense amplifier circuits; size 65 nm; support circuits; voltage variation; Entropy; Generators; Hardware; Hot carrier injection; Human computer interaction; MOSFET; Stress (ID#: 15-8556)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140228&isnumber=7140225
Ferreira, Ana; Lenzini, Gabriele, "An Analysis of Social Engineering Principles in Effective Phishing," in Socio-Technical Aspects in Security and Trust (STAST), 2015 Workshop on, pp. 9-16, 13-13 July 2015. doi: 10.1109/STAST.2015.10
Abstract: Phishing is a widespread practice and a lucrative business. It is invasive and hard to stop: a company needs to worry about all emails that all employees receive, while an attacker only needs a response from one key person, e.g., someone responsible for finance or human resources, to cause a lot of damage. Some research has looked into what elements make phishing so successful. Many of these elements recall strategies that have been studied as principles of persuasion, scams and social engineering. This paper identifies, from the literature, the elements which reflect the effectiveness of phishing, and manually quantifies them within a phishing email sample. Most elements recognised as more effective in phishing commonly use persuasion principles such as authority and distraction. This insight could help better automate the identification of phishing emails and the design of more appropriate countermeasures against them.
Keywords: Decision making; Electronic mail; Internet; Psychology; Security; Social network services; classification; phishing emails; principles of persuasion; social engineering (ID#: 15-8557)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351971&isnumber=7351960
Martinelli, F.; Santini, F.; Yautsiukhin, A., "Network Security Supported by Arguments," in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, pp. 165-172, 21-23 July 2015. doi: 10.1109/PST.2015.7232969
Abstract: Argumentation has proven to be a simple yet powerful approach to managing conflicts in reasoning, with the purpose of finding subsets of “surviving” arguments. Our intent is to exploit this form of resolution to support the administration of security in complex systems, e.g., when threat countermeasures conflict with non-functional requirements. The proposed formalisation is able to find the required security controls and explicitly provide arguments supporting this selection. An explanation therefore automatically comes as part of the suggested solution, facilitating human comprehension.
Keywords: graph theory; inference mechanisms; network theory (graphs); security of data; AAF; abstract argumentation framework; network security; reasoning conflict management; Internet; Network topology; Quality of service; Security; Semantics; Topology; Virtual private networks (ID#: 15-8558)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232969&isnumber=7232940
Yanli Pei; Shan Wang; Jing Fan; Min Zhang, "An Empirical Study on the Impact of Perceived Benefit, Risk and Trust on E-Payment Adoption: Comparing Quick Pay and Union Pay in China," in Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2015 7th International Conference on, vol. 2, pp. 198-202, 26-27 Aug. 2015. doi: 10.1109/IHMSC.2015.148
Abstract: To explain the adoption of two online payment tools (Quick Pay and Union Pay Online), we use TAM and trust theories to extend the Valence Framework. We then designed a questionnaire in accordance with the proposed model. From the data collected, we discovered that perceived benefit and trust are the key factors determining users' adoption of e-payment tools, while users pay much less attention to perceived risk. Using the validated and modified model, we explain the adoption of the two e-payment tools. Quick Pay is more popular than Union Pay because Quick Pay performs better in ease of access, usability, reputation and secure protection.
Keywords: electronic commerce; human factors; risk management; trusted computing; China; Quick Pay; TAM; UnionPay Online; perceived benefit; perceived risk; trust theories; user e-payment tool adoption; valence framework; Context; Instruments; Online banking; Privacy; Security; Uncertainty; Usability; Adoption; E-payment; Perceived Benefit; Perceived Risk; Trust (ID#: 15-8559)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334950&isnumber=7334774
![]() |
Insider Threats 2015 |
Insider threats are a difficult problem. The research cited here looks at both intentional and accidental threats, including the effects of social engineering, and methods of identifying potential threats. For the Science of Security, insider threat relates to human behavior, as well as metrics, policy-based governance and resilience. These works were presented in 2015.
Maasberg, M.; Warren, J.; Beebe, N.L., "The Dark Side of the Insider: Detecting the Insider Threat through Examination of Dark Triad Personality Traits," in System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 3518-3526, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.423
Abstract: Efforts to understand what goes on in the mind of an insider have taken a back seat to developing technical controls, yet insider threat incidents persist. We examine insider threat incidents with malicious intent and propose an explanation through a relationship between Dark Triad personality traits and the insider threat. Although Dark Triad personality traits have emerged in insider threat cases and deviant workplace behavior studies, they have not been labeled as such and little empirical research has examined this phenomenon. This paper builds on previous research on insider threat and introduces ten propositions concerning the relationship between Dark Triad personality traits and insider threat behavior. We include behavioral antecedents based on the Theory of Planned Behavior and Capability Means Opportunity (CMO) model and the factors affecting those antecedents. This research addresses the behavioral aspect of the insider threat and provides new information in support of academics and practitioners.
Keywords: behavioural sciences; security of data; CMO model; behavioral antecedents; capability means opportunity; dark triad personality traits; insider threat behavior; insider threat detection; theory of planned behavior; Correlation; Employment; Information systems; Law; Organizations; Security (ID#: 15-8324) (ID#: 15-8377)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070238&isnumber=7069647
Legg, P.A., "Visualizing the Insider Threat: Challenges and Tools for Identifying Malicious User Activity," in Visualization for Cyber Security (VizSec), 2015 IEEE Symposium on, pp. 1-7, 25-25 Oct. 2015. doi: 10.1109/VIZSEC.2015.7312772
Abstract: One of the greatest challenges for managing organisational cyber security is the threat that comes from those who operate within the organisation. With entitled access and knowledge of organisational processes, insiders who choose to attack have the potential to cause serious impact, such as financial loss, reputational damage, and in severe cases, could even threaten the existence of the organisation. Security analysts therefore require sophisticated tools that allow them to explore and identify user activity that could be indicative of an imminent threat to the organisation. In this work, we discuss the challenges associated with identifying insider threat activity, along with the tools that can help to combat this problem. We present a visual analytics approach that incorporates multiple views, including a user selection tool that indicates anomalous behaviour, an interactive Principal Component Analysis (iPCA) tool that aids the analyst to assess the reasoning behind the anomaly detection results, and an activity plot that visualizes user and role activity over time. We demonstrate our approach using the Carnegie Mellon University CERT Insider Threat Dataset to show how the visual analytics workflow supports the Information-Seeking mantra.
Keywords: principal component analysis; security of data; CERT insider threat dataset; Carnegie Mellon University; anomaly detection; iPCA tool; information-seeking mantra; malicious user activity; organisational cyber security; principal component analysis; security analyst; visual analytics workflow; Data visualization; Electronic mail; Feature extraction; Principal component analysis; Security; Visual analytics; Insider threat; behavioural analysis; model visualization (ID#: 15-8325) (ID#: 15-8378)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312772&isnumber=7312757
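The anomaly reasoning behind the tool's iPCA view can be approximated by the standard PCA reconstruction-error recipe: fit principal components to normal activity, then score each user by how poorly their behaviour is reconstructed. The sketch below (plain NumPy on invented per-user activity features) is illustrative only, not the tool's actual code:

```python
import numpy as np

def pca_scores(history, X, k=1):
    """Anomaly score: reconstruction error of each row of X after
    projecting onto the top-k principal components of normal history."""
    mu = history.mean(axis=0)
    _, _, Vt = np.linalg.svd(history - mu, full_matrices=False)
    Xc = X - mu
    recon = Xc @ Vt[:k].T @ Vt[:k]      # rank-k reconstruction
    return np.linalg.norm(Xc - recon, axis=1)

# Hypothetical per-user activity features: [logons, emails, file copies].
rng = np.random.default_rng(2)
base = rng.normal(0.0, 1.0, size=50)
history = np.column_stack([5 + base, 20 + 2 * base, 3 + 0.5 * base])
today = history.copy()
today[7] = [5.0, 20.0, 40.0]            # one user copies far more files
scores = pca_scores(history, today, k=1)
print("most anomalous user:", int(scores.argmax()))   # -> most anomalous user: 7
```

An analyst-facing tool like the one described would then let the user inspect which components and features drive each high score, rather than just ranking users.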
Padayachee, K., "A Framework of Opportunity-Reducing Techniques to Mitigate the Insider Threat," in Information Security for South Africa (ISSA), 2015, pp. 1-8, 12-13 Aug. 2015. doi: 10.1109/ISSA.2015.7335064
Abstract: This paper presents a unified framework derived from extant opportunity-reducing techniques employed to mitigate the insider threat leveraging best practices. Although both motive and opportunity are required to commit maleficence, this paper focuses on the concept of opportunity. Opportunity is more tangible than motive; hence, it is more pragmatic to reflect on opportunity-reducing measures. Situational Crime Prevention theory is the most evolved criminology theory with respect to opportunity-reducing techniques. Hence, this theory will be the basis of the theoretical framework. The derived framework highlights several areas of research and may assist organizations in implementing controls that are situationally appropriate to mitigate insider threat.
Keywords: computer crime; criminology theory; extant opportunity-reducing techniques; insider threat mitigation; situational crime prevention theory; unified framework; Computer crime; Computers; Mobile communication; Monitoring; Abuse; Insider Threat; crime involving computers (ID#: 15-8326) (ID#: 15-8379)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335064&isnumber=7335039
Pengfei Hu; Hongxing Li; Hao Fu; Cansever, D.; Mohapatra, P., "Dynamic Defense Strategy Against Advanced Persistent Threat with Insiders," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 747-755, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218444
Abstract: The landscape of cyber security has been reformed dramatically by the recently emerging Advanced Persistent Threat (APT). It is uniquely characterized by a stealthy, continuous, sophisticated and well-funded attack process for long-term malicious gain, which renders current defense mechanisms inapplicable. A novel defense strategy that continuously combats APT over a long time-span, with imperfect/incomplete information on the attacker's actions, is urgently needed. The challenge escalates further when APT is coupled with the insider threat (a major threat in cyber-security), where insiders could trade valuable information to the APT attacker for monetary gain. The interplay among the defender, the APT attacker and insiders should be judiciously studied to shed insight on a more secure defense system. In this paper, we consider the joint threats from the APT attacker and insiders, and characterize the aforementioned interplay as a two-layer game model, i.e., a defense/attack game between defender and APT attacker and an information-trading game among insiders. Through rigorous analysis, we identify the best response strategies for each player and prove the existence of a Nash Equilibrium for both games. An extensive numerical study further verifies our analytic results and examines the impact of different system configurations on the achievable security level.
Keywords: game theory; security of data; APT; Nash equilibrium; advanced persistent threat; attack process; cyber security; defense/attack game; dynamic defense strategy; information-trading game; malicious gain; two-layer game model; Computer security; Computers; Cost function; Games; Joints; Nash equilibrium (ID#: 15-8327) (ID#: 15-8380)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218444&isnumber=7218353
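As a toy illustration of the defense/attack layer in such a model, a pure-strategy Nash equilibrium can be found by exhaustive best-response checking; the 2x2 payoff matrix below is invented for demonstration and is not the paper's model:

```python
# Exhaustive pure-strategy Nash equilibrium search for a 2x2
# defender/attacker game. Payoff numbers are hypothetical.
import itertools

# payoffs[(d, a)] = (defender_utility, attacker_utility)
# d: 0 = light monitoring, 1 = heavy monitoring
# a: 0 = stay quiet,       1 = launch APT campaign
payoffs = {
    (0, 0): (4, 0), (0, 1): (-2, 3),
    (1, 0): (2, 0), (1, 1): (1, 1),
}

def pure_nash(payoffs):
    """Return all strategy profiles where neither player gains by deviating."""
    equilibria = []
    for d, a in itertools.product((0, 1), repeat=2):
        u_d, u_a = payoffs[(d, a)]
        # best response checks: fix the opponent's move, try own alternatives
        best_d = all(u_d >= payoffs[(d2, a)][0] for d2 in (0, 1))
        best_a = all(u_a >= payoffs[(d, a2)][1] for a2 in (0, 1))
        if best_d and best_a:
            equilibria.append((d, a))
    return equilibria

print(pure_nash(payoffs))  # -> [(1, 1)]: heavy monitoring vs. persistent attack
```

The paper's actual analysis also covers the insider information-trading layer and mixed strategies, which this toy search does not attempt.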
Mayhew, Michael; Atighetchi, Michael; Adler, Aaron; Greenstadt, Rachel, "Use of Machine Learning in Big Data Analytics for Insider Threat Detection," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp.915-922, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357562
Abstract: In current enterprise environments, information is becoming more readily accessible across a wide range of interconnected systems. However, trustworthiness of documents and actors is not explicitly measured, leaving actors unaware of how the latest security events may have impacted the trustworthiness of the information being used and the actors involved. This leads to situations where information producers give documents to consumers they should not trust, and consumers use information from non-reputable documents or producers. The concepts and technologies developed as part of the Behavior-Based Access Control (BBAC) effort strive to overcome these limitations by performing accurate calculations of the trustworthiness of actors, e.g., behavior and usage patterns, as well as documents, e.g., provenance and workflow data dependencies. BBAC analyses a wide range of observables for mal-behavior, including network connections, HTTP requests, English text exchanges through emails or chat messages, and edit sequences to documents. The current prototype service strategically combines big data batch processing to train classifiers with real-time stream processing to classify observed behaviors at multiple layers. To scale up to enterprise regimes, BBAC combines clustering analysis with statistical classification in a way that maintains an adjustable number of classifiers.
Keywords: Access control; Big data; Computer security; Electronic mail; Feature extraction; Monitoring; HTTP; TCP; big data; chat; documents; email; insider threat; machine learning; support vector machine; trust; usage patterns (ID#: 15-8328) (ID#: 15-8381)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357562&isnumber=7357245
Feng, Xiaotao; Zheng, Zizhan; Hu, Pengfei; Cansever, Derya; Mohapatra, Prasant, "Stealthy Attacks Meets Insider Threats: A Three-Player Game Model," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 25-30, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357413
Abstract: Advanced persistent threat (APT) is becoming a major threat to cyber security. As APT attacks are often launched by well-funded entities that are persistent and stealthy in achieving their goals, they are highly challenging to combat in a cost-effective way. The situation becomes even worse when a sophisticated attacker is further assisted by an insider with privileged access to the inside information. Although stealthy attacks and insider threats have been considered separately in previous works, the coupling of the two is not well understood. As both types of threats are incentive driven, game theory provides a proper tool to understand the fundamental tradeoffs involved. In this paper, we propose the first three-player attacker-defender-insider game to model the strategic interactions among the three parties. Our game extends the two-player FlipIt game model for stealthy takeover by introducing an insider that can trade information to the attacker for a profit. We characterize the subgame perfect equilibria of the game with the defender as the leader and the attacker and the insider as the followers, under two different information trading processes. We make various observations and discuss approaches for achieving more efficient defense in the face of both APT and insider threats.
Keywords: Computational modeling; Computer security; Face; Games; Numerical models; Real-time systems (ID#: 15-8329) (ID#: 15-8382)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357413&isnumber=7357245
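The FlipIt base game that this three-player model extends can be sketched with a small benefit computation for periodic strategies; the periods, phase, and move cost below are illustrative assumptions, not values from the paper:

```python
# Toy benefit computation for a two-player FlipIt-style stealthy-takeover
# game with periodic strategies: whoever moved last owns the resource, and
# each player's benefit is its fraction of control time minus its move cost.
# All parameter values here are hypothetical.

def flipit_benefit(defender_period, attacker_period, attacker_phase,
                   move_cost, horizon):
    d_moves = list(range(0, horizon, defender_period))
    a_moves = [t + attacker_phase
               for t in range(0, horizon, attacker_period)
               if t + attacker_phase < horizon]
    events = sorted([(t, 'D') for t in d_moves] +
                    [(t, 'A') for t in a_moves])
    owner, last_t = 'D', 0                 # defender owns the resource at t = 0
    control = {'D': 0.0, 'A': 0.0}
    for t, player in events:
        control[owner] += t - last_t       # credit the elapsed interval to the owner
        owner, last_t = player, t          # the mover takes over
    control[owner] += horizon - last_t
    return {p: control[p] / horizon - move_cost * n / horizon
            for p, n in (('D', len(d_moves)), ('A', len(a_moves)))}

print(flipit_benefit(defender_period=2, attacker_period=4,
                     attacker_phase=1, move_cost=0.05, horizon=8))
```

The cited game adds a third (insider) player and Stackelberg leadership on top of this base; the sketch only shows why a stealthy attacker's timing matters.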
Elmrabit, N.; Shuang-Hua Yang; Lili Yang, "Insider Threats in Information Security: Categories and Approaches," in Automation and Computing (ICAC), 2015 21st International Conference on, pp. 1-6, 11-12 Sept. 2015. doi: 10.1109/IConAC.2015.7313979
Abstract: The main concern of most security experts in recent years has been the need to mitigate insider threats. However, leaking and selling data is now easier than ever before; with the use of the invisible web, insiders can leak confidential data while remaining anonymous. In this paper, we give an overview of the various basic characteristics of insider threats. We also consider current approaches and controls for mitigating the level of such threats by broadly classifying them into two categories.
Keywords: Internet; data privacy; security of data; confidential data; information security; insider threats; invisible Web; security experts; Authorization; Cloud computing; Companies; Databases; Information security; Intellectual property; Insider threats; data leaking; insider attacks; insider predictions; privileged user abuse (ID#: 15-8330) (ID#: 15-8383)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313979&isnumber=7313638
Zhuo Lu; Sagduyu, Y.E.; Li, J.H., "Queuing the Trust: Secure Backpressure Algorithm Against Insider Threats in Wireless Networks," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 253-261, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218389
Abstract: The backpressure algorithm is known to provide throughput optimality in routing and scheduling decisions for multi-hop networks with dynamic traffic. The essential assumption in the backpressure algorithm is that all nodes are benign and obey the algorithm rules governing the information exchange and underlying optimization needs. Nonetheless, such an assumption does not always hold in realistic scenarios, especially in the presence of security attacks with intent to disrupt network operations. In this paper, we propose a novel mechanism, called virtual trust queuing, to protect backpressure-algorithm-based routing and scheduling protocols from various insider threats. Our objective is not to design yet another trust-based routing protocol that heuristically bargains between security and performance, but to develop a generic solution with strong guarantees of attack resilience and throughput performance in the backpressure algorithm. To this end, we quantify a node's algorithm-compliance behavior over time and construct a virtual trust queue that maintains deviations from expected algorithm outcomes. We show that by jointly stabilizing the virtual trust queue and the real packet queue, the backpressure algorithm not only achieves resilience, but also sustains throughput performance under an extensive set of security attacks.
Keywords: queueing theory; radio networks; routing protocols; telecommunication scheduling; telecommunication security; telecommunication traffic; dynamic traffic; heuristic bargain security; information exchange; multihop wireless network threat; routing protocol; scheduling protocol; secure backpressure algorithm; virtual trust queuing; Algorithm design and analysis; Heuristic algorithms; Optimization; Queueing analysis; Routing; Scheduling; Throughput (ID#: 15-8331) (ID#: 15-8384)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218389&isnumber=7218353
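To illustrate the flavor of the approach, here is a minimal sketch of backpressure next-hop selection with a virtual trust backlog folded into the link weight; the combination rule (subtracting a neighbour's trust-queue backlog from the differential packet backlog) is a simplification assumed for demonstration, not the paper's exact joint-stabilization policy:

```python
# Minimal backpressure next-hop choice with a virtual trust queue.
# queue[n] is node n's packet backlog; trust[n] accumulates node n's
# measured deviations from expected algorithm behavior (larger = less trusted).

def backpressure_next_hop(queue, trust, node, neighbours):
    """Pick the neighbour maximizing (own backlog - neighbour backlog
    - neighbour's trust backlog); return None if no weight is positive."""
    best, best_w = None, 0
    for nb in neighbours:
        w = queue[node] - queue[nb] - trust[nb]
        if w > best_w:
            best, best_w = nb, w
    return best

queues = {'a': 10, 'b': 2, 'c': 1}   # packet backlogs
trust = {'b': 0.0, 'c': 8.0}         # c has a history of non-compliance
print(backpressure_next_hop(queues, trust, 'a', ['b', 'c']))  # -> 'b'
```

Plain backpressure would prefer `c` (larger differential backlog, 9 vs. 8); the trust backlog steers traffic away from the misbehaving node while preserving the queue-differential structure.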
Clark, J.W.; Collins, M.; Strozer, J., "Malicious Insiders with Ties to the Internet Underground Community," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 374-381, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.63
Abstract: In this paper, we investigate insider threat cases in which the insider had relationships with the Internet underground community. To this end, we begin by explaining our insider threat corpus and the current state of Internet underground forums. Next, we provide a discussion of each of the 17 cases that blend insider threat with the use of malicious Internet underground forums. Based on those cases, we provide an in-depth analysis covering: 1) who the insiders are, 2) why they strike, 3) how they strike, 4) what sectors are most at risk, and 5) how the insiders were identified. Lastly, we describe our aggregated results and provide best practices to help mitigate the type of insider threat we describe.
Keywords: Internet; security of data; Internet underground community insider threat corpus; malicious Internet underground forum; malicious insider; Computers; Credit cards; Electronic mail; Internet; Organizations; Security; Servers; IRC; Internet Underground; best practices; case studies; cybercrime; forums; insider threat (ID#: 15-8332) (ID#: 15-8385)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299939&isnumber=7299862
Bertino, E.; Hartman, N.W., "Cybersecurity for Product Lifecycle Management: A Research Roadmap," in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, pp. 114-119, 27-29 May 2015. doi: 10.1109/ISI.2015.7165949
Abstract: This paper introduces a research agenda focusing on cybersecurity in the context of product lifecycle management. The paper discusses research directions on critical protection techniques, including protection techniques from insider threat, access control systems, secure supply chains and remote 3D printing, compliance techniques, and secure collaboration techniques. The paper then presents an overview of DBSAFE, a system for protecting data from insider threat.
Keywords: authorisation; groupware; product life cycle management; supply chain management; three-dimensional printing; DBSAFE; access control systems; compliance techniques; critical protection techniques; cybersecurity; insider threat; product lifecycle management; remote 3D printing; research roadmap; secure collaboration techniques; secure supply chains; Access control; Collaboration; Companies; Computer security; Encryption; PLM; access control systems; data security; embedded systems; insider threat (ID#: 15-8333) (ID#: 15-8386)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165949&isnumber=7165923
Mohan, R.; Vaidehi, V.; Krishna A, A.; Mahalakshmi, M.; Chakkaravarthy, S.S., "Complex Event Processing based Hybrid Intrusion Detection System," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-6, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219827
Abstract: Insider threats are evolving constantly and misuse granted resource access for various malicious activities. These insider threats exploit internal network flaws as loopholes and are the root cause of data exfiltration and infiltration (data leakage). Organizations are devising and deploying new solutions for analyzing, monitoring and predicting these insider threats. However, data leakage and network breach problems still exist and are increasing day by day. This is due to multiple root accounts, top-priority privileges, shared root access, shared file system privileges, etc. In this paper a new Hybrid Intrusion Detection System (IDS) is developed to overcome the above stated problems. The objective of this research is to develop a Complex Event Processing (CEP) based Hybrid IDS that integrates the output of the Host IDS and Network IDS into the CEP Module and produces a consolidated output with higher accuracy. The overall deployment protects the internal information system against data leakage by Stateful Packet Inspection. Multivariate Correlation Analysis (MCA) is used to estimate and characterize the normal behavior of the network and sends the values to the CEP Engine, which alerts in case of any deviation from the normal pattern. The performance of the proposed Hybrid IDS is examined using a test bed with normal and various attack scenarios.
Keywords: computer network security; peer-to-peer computing; CEP engine; CEP module; complex event processing; data exfiltration; data infiltration; data leakage problem; file system privilege sharing; file system sharing; host IDS; hybrid IDS; hybrid intrusion detection system; internal information system; internal network flaw; loop hole; multivariate correlation analysis; network IDS; network breach problem; root access sharing; stateful packet inspection; threat analysis; threat monitoring; threat prediction; Covariance matrices; Feature extraction; Linux; Random access memory; Servers; Standards; Testing; CEP; Hybrid IDS; IDS; Insider Threat; MCA; Multivariate Correlation Analysis (ID#: 15-8334) (ID#: 15-8387)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219827&isnumber=7219823
Rizvi, S.; Razaque, A.; Cover, K., "Cloud Data Integrity Using a Designated Public Verifier," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1361-1366, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.277
Abstract: Cloud computing presents many advantages over previous computing paradigms, such as rapid data processing and increased storage capacity. In addition, there are many cloud service providers (CSPs) that ensure easy and efficient migration and provide varying levels of security with respect to the information or assets contained in their storage. However, the average cloud service user (CSU) may not have the auditing expertise and sufficient computing power to perform the necessary auditing of cloud data storage and an accurate security evaluation, a gap that sustains the trust deficit between CSUs and CSPs. Therefore, the use of a trusted third party (TTP) to perform the required auditing tasks is inevitable, since it provides several advantages to both CSUs and CSPs in terms of efficiency, fairness, trust, etc. -- which is essential to achieve the economies of scale for cloud computing. Motivated by this, we present a new data security scheme which allows a CSU to enable a public verifier (e.g., a third-party auditor) to perform the necessary auditing tasks on the cloud data. Our proposed scheme is an extension of the TTP-based encryption scheme proposed in [7]. Specifically, the auditing tasks include checking cloud data integrity on the cloud user's request employing a public verifier. The simulation results demonstrate the effectiveness and the efficiency of our proposed scheme when auditing cloud data integrity, in terms of the reliability of CSPs and the trust level between the CSUs and a public verifier.
Keywords: cloud computing; data integrity; trusted computing; CSP; CSU; TTP based encryption scheme; cloud computing; cloud data integrity; cloud data storage; cloud service providers; cloud service user; public verifier; trusted third party; Cloud computing; Data privacy; Encryption; Memory; Protocols; Reliability; Cloud computing; authentication; cloud auditing; data privacy; insider threats; integrity; public verifier (ID#: 15-8335) (ID#: 15-8388)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336357&isnumber=7336120
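A hypothetical sketch of the kind of challenge-response audit a designated public verifier might run is shown below, with plain HMAC tags standing in for the paper's TTP-based verification metadata; the function names and tagging scheme are assumptions for illustration, not the cited protocol:

```python
# Hypothetical challenge-response integrity audit of cloud-held data blocks.
# At upload time the verifier computes one HMAC tag per block; later it
# challenges a block index and checks the returned block against its tag.

import hashlib
import hmac
import secrets

def make_tags(key, blocks):
    """Verifier-side setup: one keyed tag per stored data block."""
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def audit(key, tags, cloud_blocks, index):
    """Challenge block `index`; accept only if the block the cloud returns
    still matches the tag computed at upload time."""
    actual = hmac.new(key, cloud_blocks[index], hashlib.sha256).digest()
    return hmac.compare_digest(tags[index], actual)

key = secrets.token_bytes(32)
blocks = [b'block-0', b'block-1', b'block-2']
tags = make_tags(key, blocks)

print(audit(key, tags, blocks, 1))                 # intact data passes
tampered = [b'block-0', b'evil', b'block-2']
print(audit(key, tags, tampered, 1))               # tampering is caught
```

Real public-verifiability schemes avoid giving the verifier the raw data or a shared secret (e.g., via homomorphic authenticators); this sketch only conveys the challenge-verify loop.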
Gunasekhar, T.; Rao, K.T.; Basu, M.T., "Understanding Insider Attack Problem and Scope in Cloud," in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, pp. 1-6, 19-20 March 2015. doi: 10.1109/ICCPCT.2015.7159380
Abstract: A malicious insider can be an employee, user and/or third-party business partner. Insiders can have legitimate access to their organization's data centers. In organizations, security-related aspects depend on insiders' behavior: malicious insiders may steal sensitive data, and no protection mechanism proposed to date completely defends against such attacks. Organizational data therefore remains vulnerable to insider threat attacks. The malicious insiders of an organization can steal sensitive data in cloud storage as well as at the organizational level. Insiders can misuse their credentials to perform malicious tasks on sensitive information, as agreed with the competitors of that organization; by doing this, insiders may obtain financial benefits from the competitors. The damages of insider threat are: IT sabotage, theft of confidential information, trade secrets and Intellectual Property (IP). It is very important for the nation to start upgrading its IT infrastructure and keep up with the latest security guidelines and practices.
Keywords: cloud computing; industrial property; organisational aspects; security of data; storage management; IT infrastructure; IT sabotages; cloud storage; confidential information theft; insider behaviors; insider threat attack problem; intellectual properties; malicious insider; organization data centers; organizational level; security related aspects; trade secrets; Cloud computing; Companies; Computers; Firewalls (computing); Confidential; Insider; Intellectual property; attacks; sabotage (ID#: 15-8336) (ID#: 15-8389)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159380&isnumber=7159156
El Masri, A.; Wechsler, H.; Likarish, P.; Grayson, C.; Pu, C.; Al-Arayed, D.; Kang, B.B., "Active Authentication Using Scrolling Behaviors," in Information and Communication Systems (ICICS), 2015 6th International Conference on, pp. 257-262, 7-9 April 2015. doi: 10.1109/IACS.2015.7103185
Abstract: This paper addresses active authentication using scrolling behaviors for biometrics and assesses different classification and clustering methods that leverage those traits. The dataset used contained event-driven temporal data captured through monitoring users' reading habits. The derived feature set is mainly composed of users' scrolling events and their derivatives (changes) and 5-gram sequencing of scrolling events to increase the number of features extracted and their context. Classification performance in terms of both accuracy and Area Under the Curve (AUC) for the Receiver Operating Characteristic (ROC) curve is first reported using several classification methods, including Random Forests (RF), RF with SMOTE (for the unbalanced dataset) and AdaBoost with Decision Stump and ADTree. The best performance was obtained, however, using k-means clustering with two methods used to authenticate users: simple ranking and profile standard error filtering, with the latter achieving a success rate of 83.5%. Our use of k-means represents a novel non-intrusive approach to active and continuous re-authentication to counter the insider threat. Our main contribution comes from the features considered and their coupling to k-means to create a novel state-of-the-art active user re-authentication method.
Keywords: biometrics (access control); feature extraction; learning (artificial intelligence); pattern classification; pattern clustering; ADTree; AUC; AdaBoost; RF; ROC curve; SMOTE; active authentication; area under the curve; biometrics; classification methods; continuous re-authentication; decision stump; event-driven temporal data; feature extraction; insider-threat; k-means clustering; profile standard error filtering; random forests; ranking; receiver operating characteristic; scrolling behaviors; scrolling events; 5-gram sequencing; user reading habits monitoring; Authentication; Biometrics (access control); Feature extraction; Radio frequency; Standards; Support vector machines; Active authentication; AdaBoost; Behavioral Biometrics; Random Forests; SMOTE; k-means clustering (ID#: 15-8337) (ID#: 15-8390)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7103185&isnumber=7103173
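The k-means-based re-authentication idea can be sketched as clustering a user's enrollment features and accepting a new sample only if it falls near a learned centroid; the feature values and distance threshold below are invented, and this is not the paper's exact pipeline (which also uses 5-gram sequences, ranking, and profile standard error filtering):

```python
# Toy continuous re-authentication with k-means: learn centroids from a
# user's enrollment scrolling features, then accept a new sample only if
# it lies within a distance threshold of some centroid.
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on tuples of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def authenticate(sample, centroids, threshold):
    """Accept if the sample is close enough to any learned behavior mode."""
    return min(math.dist(sample, c) for c in centroids) <= threshold

# Enrollment: (scroll speed, dwell time) pairs for one user -- illustrative.
enroll = [(1.0, 0.2), (1.1, 0.25), (0.9, 0.22), (3.0, 0.8), (3.2, 0.75)]
centroids = kmeans(enroll, k=2)
print(authenticate((1.05, 0.21), centroids, threshold=0.5))  # genuine-looking
print(authenticate((9.0, 5.0), centroids, threshold=0.5))    # anomalous
```

The per-centroid threshold plays the role of the paper's profile filtering: a sample far from every mode of the user's own behavior triggers re-authentication.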
Walton, S.; Maguire, E.; Min Chen, "A Visual Analytics Loop for Supporting Model Development," in Visualization for Cyber Security (VizSec), 2015 IEEE Symposium on, pp. 1-8, 25-25 Oct. 2015. doi: 10.1109/VIZSEC.2015.7312767
Abstract: Threats in cybersecurity come in a variety of forms, and combating such threats involves handling a huge amount of data from different sources. It is absolutely necessary to use algorithmic models to defend against these threats. However, all models are sensitive to deviation from the original contexts in which the models were developed. Hence, it is not really an overstatement to say that `all models are wrong'. In this paper, we propose a visual analytics loop for supporting the continuous development of models during their deployment. We describe the roles of three types of operators (monitors, analysts and modelers), present the visualization techniques used at different stages of model development, and demonstrate the utility of this approach in conjunction with a prototype software system for corporate insider threat detection. In many ways, our environment facilitates an agile approach to the development and deployment of models in cybersecurity.
Keywords: business data processing; data analysis; data visualisation; security of data; agile approach; corporate insider threat detection; cybersecurity threats; model development; prototype software system; visual analytics loop; visualization techniques; Analytical models; Data models; Mathematical model; Monitoring; Reliability; Visual analytics (ID#: 15-8338) (ID#: 15-8391)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312767&isnumber=7312757
Chang, Sang-Yoon; Hu, Yih-Chun; Liu, Zhuotao, "Securing Wireless Medium Access Control Against Insider Denial-of-Service Attackers," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 370-378, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346848
Abstract: In a wireless network, users share a limited resource in bandwidth. To improve spectral efficiency, the network dynamically allocates channel resources and, to avoid collisions, has its users cooperate with each other using a medium access control (MAC) protocol. In a MAC protocol, the users exchange control messages to establish more efficient data communication, but such MAC assumes user compliance and can be detrimental when a user misbehaves. An attacker who compromised the network can launch a two-pronged denial-of-service (DoS) attack that is more devastating than an outsider attack: first, it can send excessive reservation requests to waste bandwidth, and second, it can focus its power on jamming those channels that it has not reserved. Furthermore, the attacker can falsify information to skew the network control decisions in its favor. To defend against such insider threats, we propose a resource-based channel access scheme that holds the attacker accountable for its channel reservation. Building on the randomization technology of spread spectrum to thwart outsider jamming, our solution comprises a bandwidth allocation component to nullify excessive reservations, bandwidth coordination to resolve over-reserved and under-reserved spectrum, and power attribution to determine each node's contribution to the received power. We analyze our scheme theoretically and validate it with a WARP-based testbed implementation and MATLAB simulations. Our results demonstrate superior performance over typical solutions that bypass MAC control when facing an insider adversary, and our scheme effectively nullifies the insider attacker threats while retaining the MAC benefits between the collaborative users.
Keywords: Bandwidth; Communication system security; Data communication; Jamming; Media Access Protocol; Wireless communication (ID#: 15-8339) (ID#: 15-8392)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346848&isnumber=7346791
Daubert, J.; Grube, T.; Muhlhauser, M.; Fischer, M., "Internal Attacks in Anonymous Publish-Subscribe P2P Overlays," in Networked Systems (NetSys), 2015 International Conference and Workshops on, pp. 1-8, 9-12 March 2015. doi: 10.1109/NetSys.2015.7089074
Abstract: Privacy, in particular anonymity, is desirable in Online Social Networks (OSNs) like Twitter, especially when considering the threat of political repression and censorship. P2P-based publish-subscribe is a well-suited paradigm for OSN scenarios, as users can publish and follow topics of interest. However, anonymity in P2P-based publish-subscribe (pub-sub) has hardly been analyzed so far. Research on add-on anonymization systems such as Tor mostly focuses on large-scale traffic analysis rather than malicious insiders. Therefore, we analyze in more detail colluding insider attackers that operate on the basis of timing information. For that, we model a generic anonymous pub-sub system, present an attacker model, and discuss timing attacks. We analyze these attacks with a realistic simulation model and discuss potential countermeasures. Our findings indicate that even a few malicious insiders are capable of disclosing a large number of participants, while an attacker using large numbers of colluding nodes achieves only minor additional improvements.
Keywords: data privacy; overlay networks; peer-to-peer computing; social networking (online); OSN; P2P-based publish-subscribe; Twitter; add-on anonymization system; anonymous publish-subscribe P2P overlays; colluding insider attackers; generic anonymous pub-sub system; internal attacks; online social networks; peer-to-peer overlay; timing information; Delays; Mathematical model; Protocols; Publish-subscribe; Subscriptions; Topology (ID#: 15-8340) (ID#: 15-8393)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7089074&isnumber=7089054
Filipek, J.; Hudec, L., "Distributed firewall in Mobile Ad Hoc Networks," in Applied Machine Intelligence and Informatics (SAMI), 2015 IEEE 13th International Symposium on, pp. 233-238, 22-24 Jan. 2015. doi: 10.1109/SAMI.2015.7061882
Abstract: Mobile Ad-hoc Networks (MANETs) are increasingly employed in tactical military and civil rapid-deployment networks, including emergency rescue operations and ad hoc disaster-relief networks. Compared to wired and base-station-based wireless networks, MANETs are susceptible to both insider and outsider attacks, mainly because of the lack of a well-defined defense perimeter. In this paper, we define a distributed firewall architecture that is designed specifically for MANET networks. Our design uses the concept of network capabilities and is especially suited for environments which lack a centralized structure and are composed of different devices. Our model denies all communication by default, and nodes can access only services and other nodes that they are authorized to. Every node contains a firewall mechanism which includes an intrusion prevention system, and a compromised node will not necessarily compromise the whole secured network. Our approach adds security features to MANETs and helps them withstand security threats which would otherwise damage, if not shut down, an unsecured MANET network. Our simulation shows that our solution has minimal overhead in terms of bandwidth and latency, works well even in the presence of routing changes due to mobile nodes, and is effective in containing misbehaving nodes.
Keywords: computer network reliability; firewalls; military communication; mobile ad hoc networks; telecommunication network routing; base station-based wireless network routing; civil rapid-deployment network security; distributed firewall architecture; emergency rescue operations; intrusion prevention system security threats; mobile ad hoc disaster-relief network; mobile node fault; tactical military MANET; well-defined defense perimeter; Databases; Firewalls (computing); Mobile ad hoc networks; Peer-to-peer computing; Public key; Ad hoc; firewall; mobile network; network capability (ID#: 15-8341) (ID#: 15-8394)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7061882&isnumber=7061844
Varghese, S.; Vigila, S.M.C., "A Comparative Analysis on Cloud Data Security," in Communication Technologies (GCCT), 2015 Global Conference on, pp. 507-510, 23-24 April 2015. doi: 10.1109/GCCT.2015.7342713
Abstract: Cloud computing, a distributed network for sharing data over the Internet, serves as an online data backup with scalability. This paper describes various categories of clouds depending on their usage and on the services they provide. Data security is one of the major challenges faced by cloud providers and cloud users. Cryptography is suggested as the appropriate solution for securing cloud data. A review of some of the existing cryptographic methods for securing data stored in the cloud is also included in this paper. Data owners can upload data to the cloud and can also set permissions on the uploaded data to control its access by various types of users. Cryptographic techniques incorporated along with traditional access control policies greatly enhance the security of data. The analysis in the review points to insider threats in cloud security as one of the greatest issues in cloud computing.
Keywords: authorisation; cloud computing; cryptography; Internet; access control policies; cloud computing; cloud providers; cloud users; cryptographic methods; data sharing; distributed network; online data backup; uploaded data; Access control; Cloud computing; Computational modeling; Encryption; Servers; access control; cloud computing; cryptography; security (ID#: 15-8342) (ID#: 15-8395)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342713&isnumber=7342608
Jana, D.; Bandyopadhyay, D., "Controlled Privacy in Mobile Cloud," in Recent Trends in Information Systems (ReTIS), 2015 IEEE 2nd International Conference on, pp. 98-103, 9-11 July 2015. doi: 10.1109/ReTIS.2015.7232860
Abstract: Mobile devices face restrictions due to limited resources such as battery life, memory capacity, processor power and communication bandwidth, especially during mobility and handover. Mobile-based cloud computing is gaining greater appeal among mobile users as a way to lessen the resource limitations of mobile devices. The extensive adoption of programmable smart mobile handsets, and the exchange of data over the public Internet, lead to new privacy and security challenges across enterprises. Smartphones and tablets store not only users' private data but also the private data of others involved: friends, family members, customers, vendors or any other individual. Denial of service, data leakage, account confiscation, exposure to insecure application program interfaces, virtual machine isolation failures, malicious attacks from insiders, and loss of encryption keys give rise to several added threats related to privacy and security. We enumerate a number of threats pertaining to privacy and security and recommend best practices to counter and prevent their occurrence.
Keywords: cloud computing; data privacy; mobile computing; cloud computing; controlled privacy; mobile cloud; security; Cloud computing; Data privacy; Mobile communication; Mobile handsets; Privacy; Security; AAA Vulnerabilities; Cloud Computing; Mobile Cloud Computing; STRIDE (ID#: 15-8343) (ID#: 15-8396)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232860&isnumber=7232836
Geiger, Christopher; Hale, Robert; VanDerPol, Mathew; Borowski, Kyle, "Hardware-Based Whitelisting for Automated Test System Cybersecurity and Configuration Management," in IEEE AUTOTESTCON, 2015, pp. 33-37, 2-5 Nov. 2015. doi: 10.1109/AUTEST.2015.7356462
Abstract: To reap the benefits of prognostic health management, intelligent Test Program Set (TPS) diagnostic reasoning, and remote TPS configuration management, Automated Test Systems (ATSs) must be networked in spite of increasing cybersecurity concerns. Traditional cybersecurity tools such as Intrusion Prevention Systems (IPS), firewalls and antivirus software are continuously proven vulnerable to the increasing sophistication of bad actors and insider threats. In addition, these software security appliances and their recurring updates can be burdensome to TPS development and interfere with TPS performance.
Keywords: Computer security; Cryptography; Hardware; Information systems; Operating systems (ID#: 15-8344) (ID#: 15-8397)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7356462&isnumber=7356451
Chaisiri, S.; Ko, R.K.L.; Niyato, D., "A Joint Optimization Approach to Security-as-a-Service Allocation and Cyber Insurance Management," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 426-433, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.403
Abstract: Security-as-a-Service (SECaaS), pay-per-use cloud-based services that provide information security measures via the cloud, are increasingly used by corporations to maintain their systems' security posture. Customers often have to provision these SECaaS services based on the potential subscription costs incurred. However, these security services are unable to deal with all possible types of threats. A single threat (e.g. malicious insiders) can result in the loss of valuable data and revenue. Hence, it is also common to see corporations (i.e. cloud customers) manage their risks by purchasing cyber insurance to cover costs and liabilities due to unforeseen losses. A balance between service allocation cost and insurance is often required but not well studied. In this paper, we propose an optimized SECaaS provisioning framework that enables customers to optimally allocate security services from SECaaS providers to their applications, while managing risks from information security breaches via purchasing cyber insurance policies. Finding the right balance is a great challenge, and the solutions of the security service allocation and insurance management are obtained through solving an optimization model derived from stochastic programming with a three-stage recourse. Simulations were conducted to evaluate this optimization model. We exposed our model to several uncertain information parameters and the results are promising -- demonstrating an effective approach to balance customers' security requirements while keeping service subscription and insurance policy costs low.
Keywords: cloud computing; costing; insurance; resource allocation; risk management; security of data; stochastic programming; SECaaS; cyber insurance management; cyber insurance policy purchasing; information security breach; information security measures; insurance management; joint optimization approach; optimized SECaaS provisioning framework; pay-per-use cloud-based services; risk management; security-as-a-service allocation; service allocation cost; stochastic programming; three-stage recourse; uncertain information parameters; Cloud computing; Electronic mail; Insurance; Optimization; Resource management; Security; Uncertainty; cloud security; cloud security economics; cyber insurance; optimization; resource allocation; stochastic programming (ID#: 15-8345) (ID#: 15-8398)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345311&isnumber=7345233
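The trade-off this abstract describes (upfront SECaaS subscription versus cyber-insurance coverage for residual breach losses) can be illustrated with a deliberately tiny brute-force sketch. All figures, the linear mitigation model, and the function names below are hypothetical illustrations, not the paper's stochastic program with three-stage recourse.

```python
import itertools

# Hypothetical toy numbers: balance SECaaS subscription cost against a
# cyber-insurance premium under an uncertain breach scenario.
SCENARIOS = [(0.9, 0.0), (0.1, 100_000.0)]  # (probability, breach loss)

def expected_cost(subscription_units: int, insured: bool) -> float:
    sub_cost = subscription_units * 2_000.0          # per-unit SECaaS fee
    premium = 1_000.0 if insured else 0.0
    mitigated = min(0.9, 0.3 * subscription_units)   # fraction of loss blocked
    cost = sub_cost + premium
    for prob, loss in SCENARIOS:
        residual = loss * (1.0 - mitigated)
        if insured:                                  # policy covers up to a cap
            residual = max(0.0, residual - 50_000.0)
        cost += prob * residual
    return cost

# Brute-force the joint decision: subscription level and insurance purchase.
best = min(itertools.product(range(4), [False, True]),
           key=lambda choice: expected_cost(*choice))
assert best == (1, True)  # mixing services and insurance minimizes expected cost
```

With these toy parameters neither maximal subscription nor insurance alone is optimal, which is the balance the paper's optimization model searches for at scale.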
Fernandez-Aleman, J.L.; Belen Sanchez Garcia, A.; Garcia-Mateos, G.; Toval, A., "Technical Solutions for Mitigating Security Threats Caused by Health Professionals in Clinical Settings," in Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, pp. 1389-1392, 25-29 Aug. 2015. doi: 10.1109/EMBC.2015.7318628
Abstract: The objective of this paper is to present a brief description of technical solutions for health information system security threats caused by inadequate security and privacy practices among healthcare professionals. A literature search was carried out in ScienceDirect, the ACM Digital Library and the IEEE Digital Library to find papers reporting technical solutions for certain security problems in information systems used in clinical settings. A total of 17 technical solutions were identified: measures for password security and for the secure use of e-mail, the Internet, portable storage devices, printers and screens. Although technical safeguards are essential to the security of a healthcare organization's information systems, good training, awareness programs and the adoption of a proper information security policy are particularly important to prevent insiders from causing security incidents.
Keywords: authorisation; digital libraries; health care; medical computing; medical information systems; professional aspects; security of data; ACM Digital Library; IEEE Digital Library; Internet; ScienceDirect; e-mail; health information system security threats; health professionals; healthcare organization information systems; healthcare professionals; mitigating security threats; password security; portable storage devices; technical safeguards; Authentication; Cryptography; Information systems; Medical services; Printers; Privacy (ID#: 15-8346) (ID#: 15-8399)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7318628&isnumber=7318236
Figueroa, M.; Uttecht, K.; Rosenberg, J., "A SOUND Approach to Security in Mobile and Cloud-Oriented Environments," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-7, 14-16 April 2015. doi: 10.1109/THS.2015.7225266
Abstract: Ineffective legacy practices have failed to counter contemporary information security and privacy threats. Modern IT operates on large, heterogeneous, distributed sets of computing resources, from small mobile devices to large cloud environments that manage millions of connections and petabytes of data. Protection must often span organizations with varying reliability, trust, policies, and legal restrictions. Centrally managed, host-oriented trust systems are not flexible enough to meet the challenge. New research in distributed and adaptive trust frameworks shows promise to better meet modern needs, but lab constraints make realistic implementations impractical. This paper describes our experience transitioning technology from the research lab to an operational environment. As our case study, we introduce Safety on Untrusted Network Devices (SOUND), a new platform built from the ground up to protect mobile and cloud network communications against persistent adversaries. Initially based on three founding technologies- Accountable Virtual Machines (AVM), Quantitative Trust Management (QTM), and Introduction-Based Routing (IBR)- our research efforts extended those technologies to develop a more powerful and practical SOUND implementation.
Keywords: cloud computing; data privacy; law; mobile computing; trusted computing; virtual machines; AVM; IBR; QTM; SOUND approach accountable virtual machines; adaptive trust framework; cloud-oriented environment; distributed trust framework; host-oriented trust systems; information security; introduction-based routing; legacy practices; legal restriction; mobile environment; policy restriction; privacy threats; quantitative trust management; reliability restriction; safety on untrusted network devices; trust restriction; Context; Measurement; Ports (Computers); Resilience; Security; Servers; Virtual private networks; cyber security; digital immune system; incident response; insider attack; multistage attack; reputation; trust (ID#: 15-8347) (ID#: 15-8400)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225266&isnumber=7190491
![]() |
Internet of Things Security 2015 |
The term Internet of Things (IoT) refers to advanced connectivity of the Internet with devices, systems and services that includes both machine-to-machine communications (M2M) and a variety of protocols, domains and applications. Since the concept incorporates literally billions of devices, the security implications are huge. The articles presented here identify and discuss broad security problems that the IoT engenders. The bibliography was compiled on December 23, 2015.
Mbarek, B.; Meddeb, A.; Ben Jaballah, W.; Mosbah, M., "Enhanced LEAP Authentication Delay for Higher Immunity Against Dos Attack," in Protocol Engineering (ICPE) and International Conference on New Technologies of Distributed Systems (NTDS), 2015 International Conference on, pp. 1-6, 22-24 July 2015. doi: 10.1109/NOTERE.2015.7293497
Abstract: Broadcast authentication is crucial for civil and military applications related to the Internet of Things, such as wide-area protection and target tracking. Authentication in these scenarios is time sensitive and needs to take into account the characteristics of the deployed devices. Although various smart communication protocols in the literature address the problem of broadcast authentication, there is still a lack of practical and secure authentication solutions. In this paper, we point out the security concerns of the current state-of-the-art protocol LEAP. In particular, we address the vulnerability of LEAP to a severe denial of service attack. We propose a new authentication process for the μTESLA mechanism that overcomes the drawbacks of the LEAP protocol in terms of authentication delay and resilience to DoS attacks, by providing a simple and effective way to reduce the delay of forged packets in the receiver's buffer. Furthermore, we assess the feasibility of our solution with a thorough simulation study, taking into account the authentication delay, the energy consumption and the delay of forged packets.
Keywords: Internet of Things; computer network security; cryptographic protocols; delays; telecommunication power management; μTESLA mechanism; DoS attack immunity; Internet of things; LEAP protocol; LEAP vulnerability; authentication delay; broadcast authentication; denial of service attack; energy consumption; enhanced LEAP authentication delay; receiver buffer; secure authentication solutions; service attack; smart communication protocols; target tracking; wide-area protection; Authentication; Computer crime; Delays; Protocols; Receivers; Sensors; Wireless sensor networks; Authentication Protocols; Broadcast authentication; Key Disclosure Mechanism (ID#: 15-8348)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293497&isnumber=7293442
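The μTESLA scheme referenced in this abstract authenticates broadcasts with a one-way hash chain of keys that are disclosed with a delay; receivers buffer packets until the key for an interval is revealed and can be verified against an earlier commitment. A minimal sketch of the chain generation and disclosed-key check follows (function names are ours, and this omits μTESLA's time-interval scheduling and per-packet MACs):

```python
import hashlib
import hmac

def make_key_chain(seed: bytes, length: int) -> list[bytes]:
    """Generate a one-way key chain: each earlier key is the hash of the next."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] is the public commitment, disclosed first
    return chain

def verify_disclosed_key(commitment: bytes, key: bytes, interval: int) -> bool:
    """Check that hashing `key` `interval` times reaches the commitment."""
    for _ in range(interval):
        key = hashlib.sha256(key).digest()
    return hmac.compare_digest(key, commitment)

chain = make_key_chain(b"secret-seed", 5)
# Receiver holds chain[0]; the sender later discloses chain[3] for interval 3.
assert verify_disclosed_key(chain[0], chain[3], 3)
```

The one-way property is what makes forged packets detectable once the genuine key is disclosed, which is the buffer-delay issue the paper targets.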
Januario, F.; Santos, A.; Palma, L.; Cardoso, A.; Gil, P., "A Distributed Multi-Agent Approach for Resilient Supervision over a IPv6 WSAN Infrastructure," in Industrial Technology (ICIT), 2015 IEEE International Conference on, pp. 1802-1807, 17-19 March 2015. doi: 10.1109/ICIT.2015.7125358
Abstract: Wireless Sensor and Actuator Networks (WSANs) have become an important area of research. They can provide flexibility and low operational and maintenance costs, and they are inherently scalable. In the realm of the Internet of Things, the majority of devices are able to communicate with one another, and in some cases they can be deployed with an IP address. This feature is undoubtedly very beneficial in wireless sensor and actuator network applications, such as monitoring and control systems. However, this kind of communication infrastructure is rather challenging, as it can compromise overall system performance due to several factors, namely outliers, intermittent communication breakdown or security issues. In order to improve the overall resilience of the system, this work proposes a distributed hierarchical multi-agent architecture implemented over an IPv6 communication infrastructure. The Contiki Operating System and the RPL routing protocol were used together to provide IPv6-based communication between nodes and an external network. Experimental results collected from a laboratory IPv6-based WSAN test-bed show the relevance and benefits of the proposed methodology to cope with communication loss between nodes and the server.
Keywords: Internet of Things; multi-agent systems; routing protocols; wireless sensor networks; Contiki operating system; IP address; IPv6 WSAN infrastructure; IPv6 communication infrastructure; Internet of Things; RPL routing protocol; distributed hierarchical multiagent architecture; distributed multiagent approach; external network; intermittent communication; resilient supervision; wireless sensor and actuator networks; Actuators; Electric breakdown; Monitoring; Peer-to-peer computing; Routing protocols; Security (ID#: 15-8349)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125358&isnumber=7125066
Fugini, M.; Teimourikia, M., "RAMIRES: Risk Adaptive Management in Resilient Environments with Security," in Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), 2015 IEEE 24th International Conference on, pp. 218-223, 15-17 June 2015. doi: 10.1109/WETICE.2015.26
Abstract: This paper describes the cooperative interface of RAMIRES, a prototype web application in which environmental risks are reported in a dashboard for the risk management team. It shows monitored areas, helps risk managers understand a risk and its consequences, and supports decision making, thus empowering risk managers to mitigate risks and improve the environment's resilience. To treat risks, RAMIRES is adaptive with respect to both risk and security. For risk, it adapts the information requested from the environment to obtain more data about the observed area and understand the risk and its consequences; it also adapts the user interface according to the involved actor. For security, RAMIRES is adaptive in that security rules determine the data views offered to different actors. The tool's interaction with the environment and with risk managers is presented using storyboards of interactions.
Keywords: Internet; Internet of Things; environmental science computing; human computer interaction; risk management; security of data; user interfaces; IoT; RAMIRES cooperative interface; Risk Adaptive Management in Resilient Environments with Security; decision making support; environment resilience; environmental risk reporting; interaction storyboard; prototype Web application; risk management team; risk mitigation; user interface adaptation; Adaptation models; Hazards; Monitoring; Resilience; Risk management; Security; User interfaces; Adaptive Security; Resilence Engineering; Risk Management; User Interface (ID#: 15-8350)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194364&isnumber=7194298
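RAMIRES's idea of security rules determining different data views for different actors can be sketched as a simple field-level filter. All role and field names below are hypothetical illustrations of the pattern, not taken from the paper:

```python
# A risk event as the dashboard might hold it (hypothetical fields).
RISK_EVENT = {
    "area": "sector-3",
    "hazard": "chemical spill",
    "casualty_names": ["..."],      # sensitive; placeholder content
    "sensor_raw": [0.91, 0.87],
}

# Security rules: which fields each actor role may see.
VIEW_RULES = {
    "risk_manager": {"area", "hazard", "casualty_names", "sensor_raw"},
    "field_operator": {"area", "hazard"},
}

def view_for(actor: str, event: dict) -> dict:
    """Project the event onto the fields the actor's role is allowed to see."""
    allowed = VIEW_RULES.get(actor, set())
    return {k: v for k, v in event.items() if k in allowed}

assert set(view_for("field_operator", RISK_EVENT)) == {"area", "hazard"}
```

An unknown actor gets an empty view, which is the safe default for this kind of rule table.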
Kypus, L.; Vojtech, L.; Hrad, J., "Security of ONS Service for Applications of the Internet of Things and Their Pilot Implementation in Academic Network," in Carpathian Control Conference (ICCC), 2015 16th International, pp. 271-276, 27-30 May 2015. doi: 10.1109/CarpathianCC.2015.7145087
Abstract: The aim of the Object Name Service (ONS) project was to find a robust and stable way of automated communication that utilizes name and directory services to support the radio-frequency identification (RFID) ecosystem, mainly in a way that can leverage open-source and standardized services and the capability to be secured. All this work contributed to presenting the capabilities of new RFID services and heterogeneous Internet of Things (IoT) environments. There is an increasing demand for the transferred data volumes associated with each and every IP or non-IP discoverable object, for example RFID-tagged objects and sensors, as well as a need to bridge the remaining communication compatibility issues between these two independent worlds. RFID and IoT ecosystems require sensitive implementation of security approaches and methods. There are still significant risks associated with their operations due to the nature of the content. One of the reasons for past failures could be the lack of security as an integral part of the design of each particular product that is supposed to build ONS systems. Although we focus mainly on availability and confidentiality concerns in this paper, some areas remain to be researched. We tried to identify the hardening impact by means of metrics evaluating the operational status, resiliency, responsiveness and performance of the managed ONS solution design. The design of a redundant and hardened testing environment gave us visibility into the assurance of internal communication security and showed the behavior of the components under load in such a complex information service, with respect to the overall quality of the delivered ONS service.
Keywords: Internet of Things; radiofrequency identification; telecommunication security; Internet of Things; ONS service; RFID; academic network; object name services; radio-frequency identification; Operating systems; Protocols; Radiofrequency identification; Security; Servers; Standards; Virtual private networks IPv6; Internet of Things; ONS; RFID; security hardening (ID#: 15-8351)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145087&isnumber=7145033
Savola, R.M.; Savolainen, P.; Evesti, A.; Abie, H.; Sihvonen, M., "Risk-Driven Security Metrics Development for an E-Health IoT Application," in Information Security for South Africa (ISSA), 2015, pp. 1-6, 12-13 Aug. 2015. doi: 10.1109/ISSA.2015.7335061
Abstract: Security and privacy for e-health Internet-of-Things applications is a challenge arising due to the novelty and openness of the solutions. We analyze the security risks of an envisioned e-health application for elderly persons' day-to-day support and chronic disease self-care, from the perspectives of the service provider and end-user. In addition, we propose initial heuristics for security objective decomposition aimed at security metrics definition. Systematically defined and managed security metrics enable higher effectiveness of security controls, enabling informed risk-driven security decision-making.
Keywords: Internet of Things; data privacy; decision making; diseases; geriatrics; health care; risk management; security of data; chronic disease self-care; e-health Internet-of-Things applications; e-health IoT application; elderly person day-to-day support; privacy; risk-driven security decision-making; risk-driven security metrics development; security controls; security objective decomposition; Artificial intelligence; Android; risk analysis; security effectiveness; security metrics (ID#: 15-8352)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335061&isnumber=7335039
Xinkai Yang, "One Methodology for Spam Review Detection Based on Review Coherence Metrics," in Intelligent Computing and Internet of Things (ICIT), 2014 International Conference on, pp. 99-102, 17-18 Jan. 2015. doi: 10.1109/ICAIOT.2015.7111547
Abstract: In this paper, we propose an iterative computation framework to detect spam reviews based on coherence examination. We first define several review coherence metrics to analyze review coherence at the granularity of the sentence. Then the framework and its evaluation process are discussed in detail.
Keywords: Internet; iterative methods; retail data processing; security of data; software metrics; unsolicited e-mail; consumer online shopping; e-business Web site; iterative computation framework; product review; review coherence metrics; sentence granularity; spam review detection; coherence metric; spam review detection; word concurrence probability; word transition probability (ID#: 15-8353)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7111547&isnumber=7111523
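The paper's sentence-granularity coherence metrics are not spelled out in this abstract, so as an illustrative stand-in, one simple coherence score is the average word overlap between adjacent sentences; templated or stitched-together spam tends to score lower:

```python
def sentence_coherence(review: str) -> float:
    """Average Jaccard word overlap between adjacent sentences
    (an illustrative coherence metric, not the paper's definition)."""
    sentences = [s.strip().lower() for s in review.split(".") if s.strip()]
    if len(sentences) < 2:
        return 1.0  # nothing to compare; treat as trivially coherent
    scores = []
    for a, b in zip(sentences, sentences[1:]):
        words_a, words_b = set(a.split()), set(b.split())
        scores.append(len(words_a & words_b) / len(words_a | words_b))
    return sum(scores) / len(scores)

coherent = "The camera is great. The camera battery lasts all day."
spammy = "The camera is great. Buy cheap watches online now."
assert sentence_coherence(coherent) > sentence_coherence(spammy)
```

The keywords mention word-concurrence and word-transition probabilities, which would replace the crude Jaccard overlap in a faithful implementation.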
Peretti, G.; Lakkundi, V.; Zorzi, M., "BlinkToSCoAP: An end-to-end security framework for the Internet of Things," in Communication Systems and Networks (COMSNETS), 2015 7th International Conference on, pp. 1-6, 6-10 Jan. 2015. doi: 10.1109/COMSNETS.2015.7098708
Abstract: The emergence of Internet of Things and the availability of inexpensive sensor devices and platforms capable of wireless communications enable a wide range of applications such as intelligent home and building automation, mobile healthcare, smart logistics, distributed monitoring, smart grids, energy management, asset tracking to name a few. These devices are expected to employ Constrained Application Protocol for the integration of such applications with the Internet, which includes User Datagram Protocol binding with Datagram Transport Layer Security protocol to provide end-to-end security. This paper presents a framework called BlinkToSCoAP, obtained through the integration of three software libraries implementing lightweight versions of DTLS, CoAP and 6LoWPAN protocols over TinyOS. Furthermore, a detailed experimental campaign is presented that evaluates the performance of DTLS security blocks. The experiments analyze BlinkToSCoAP messages exchanged between two Zolertia Z1 devices, allowing evaluations in terms of memory footprint, energy consumption, latency and packet overhead. The results obtained indicate that securing CoAP with DTLS in Internet of Things is certainly feasible without incurring much overhead.
Keywords: Internet; Internet of Things; computer network reliability; computer network security; protocols; 6LoWPAN protocol; BlinkToSCoAP; CoAP protocol; DTLS protocol; Internet of Things; TinyOS; Zolertia Z1 device; asset tracking; availability; building automation; constrained application protocol; datagram transport layer security protocol; distributed monitoring; end-to-end security framework; energy consumption; energy management; intelligent home; latency overhead; memory footprint; message exchange; mobile healthcare; packet overhead; sensor device; smart grid; smart logistics; user datagram protocol; wireless communication; Computer languages; Logic gates; Payloads; Performance evaluation; Random access memory; Security; Servers; 6LoWPAN; CoAP; DTLS; Internet of Things; M2M; end-to-end security (ID#: 15-8354)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098708&isnumber=7098633
Basu, S.S.; Tripathy, S.; Chowdhury, A.R., "Design Challenges and Security Issues in the Internet of Things," in Region 10 Symposium (TENSYMP), 2015 IEEE, pp. 90-93, 13-15 May 2015. doi: 10.1109/TENSYMP.2015.25
Abstract: The world is rapidly getting connected. Commonplace everyday things are providing and consuming software services exposed by other things and service providers. A mash-up of such services extends the reach of the current Internet to potentially resource-constrained "Things", constituting what is being referred to as the Internet of Things (IoT). IoT is finding applications in various fields such as smart cities, smart grids, smart transportation, e-health and e-governance. The complexity of developing IoT solutions arises from the diversity involved, from device capability all the way to business requirements. In this paper we focus primarily on the security issues related to design challenges in IoT applications and present an end-to-end security framework.
Keywords: Internet; Internet of Things; security of data; Internet of Things; IoT; e-governance; e-health; end-to-end security framework; service providers; smart cities; smart grids; smart transportation; software services; Computer crime; Encryption; Internet of things; Peer-to-peer computing; Protocols; End-to-end (E2E) security; Internet of Things (IoT); Resource constrained devices; Security (ID#: 15-8355)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166245&isnumber=7166213
Inshil Doh; Jiyoung Lim; Kijoon Chae, "Secure Authentication for Structured Smart Grid System," in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, pp. 200-204, 8-10 July 2015. doi: 10.1109/IMIS.2015.32
Abstract: An important application area for M2M (Machine-to-Machine) or IoT (Internet of Things) technology is the smart grid, which plays an important role in electric power transmission, electricity distribution, and demand-driven control of energy. To make the smart grid more reliable and stable, security is the major issue that must be provided along with the main technologies. In this work, we propose an authentication mechanism between the utility system and the smart meters, which gather energy consumption data from electrical devices in a layered smart grid system. Our proposal enhances the smart grid system's integrity, availability and robustness by providing security with low overhead.
Keywords: Internet of Things; message authentication; smart power grids; telecommunication security; Internet of things technology; IoT; M2M;demand-driven control; electric power transmission; electrical devices; electricity distribution; energy consumption data; layered smart grid system; machine to machine; secure authentication; smart meters; structured smart grid system; utility system; Authentication; Proposals; Protocols; Servers; Smart grids; Smart meters; IoT; M2M; authentication; security; structured smart grid (ID#: 15-8356)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284948&isnumber=7284886
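The abstract proposes authenticating smart meters to the utility system but does not detail its protocol here; the basic shared-key challenge-response pattern such layered schemes build on can be sketched as follows (identifiers and key sizes are our illustrative choices, not the paper's mechanism):

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # provisioned into both the meter and the utility

def meter_respond(key: bytes, challenge: bytes, meter_id: bytes) -> bytes:
    """The meter proves possession of the shared key by MACing the
    utility's fresh challenge together with its own identifier."""
    return hmac.new(key, meter_id + challenge, hashlib.sha256).digest()

def utility_verify(key: bytes, challenge: bytes, meter_id: bytes,
                   response: bytes) -> bool:
    expected = hmac.new(key, meter_id + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # fresh nonce per session prevents replay
resp = meter_respond(SHARED_KEY, challenge, b"meter-42")
assert utility_verify(SHARED_KEY, challenge, b"meter-42", resp)
assert not utility_verify(SHARED_KEY, os.urandom(16), b"meter-42", resp)
```

A fresh random challenge per session is what keeps a captured response from being replayed later, which matters for meters reporting on a fixed schedule.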
Golubovic, Edin; Sabanovic, Asif; Ustundag, Baris Can, "Internet of Things Inspired Photovoltaic Emulator Design for Smart Grid Applications," in Smart Grid Congress and Fair (ICSG), 2015 3rd International Istanbul, pp. 1-6, 29-30 April 2015. doi: 10.1109/SGCF.2015.7354936
Abstract: The future smart grid is considered a solution to common problems associated with the current electricity grid. The smart grid will incorporate renewable energy sources, intelligent sensors and controls, automated switches, robust communication technology, etc. Implementation of such a smart grid requires the collective efforts of researchers from many fields of engineering and the creation of reliable test platforms. This paper presents a PV emulator as a test platform for research on problems associated with the design of controllers for PV sources, the design of energy management systems, generation capacity prediction, wireless network integration and protocol issues, and security and cloud-based data management and analysis for smart grid applications.
Keywords: Cloud computing; Control systems; Hardware; Logic gates; Maximum power point trackers; Security; Smart grids; MPPT; internet of things; photovoltaic emulator; renewable energy sources; smart grid (ID#: 15-8357)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354936&isnumber=7354913
Sparrow, R.D.; Adekunle, A.A.; Berry, R.J.; Farnish, R.J., "Study of Two Security Constructs on Throughput for Wireless Sensor Multi-Hop Networks," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1302-1307, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160476
Abstract: With the interconnection of devices becoming more widespread in society (e.g. the Internet of Things), networked devices are used in a range of environments from smart grids to smart buildings. Wireless Sensor Networks (WSNs) have commonly been utilised as a method of monitoring a set of processes. In control networks, WSNs have been deployed to perform a variety of tasks (e.g. collating and distributing data from an event to an end device). However, the nature of the wireless broadcast medium enables attackers to conduct active and passive attacks. Cryptography is selected as a countermeasure to overcome these security vulnerabilities; however, a drawback of using cryptography is reduced throughput. This paper investigates the impact of two software authenticated encryption with associated data (AEAD) security constructs on the packet throughput of multiple-hop WSNs, namely Counter with Cipher Block Chaining-Message Authentication Code (CCM) and TinyAEAD. Experiments were conducted in a simulated environment. A case scenario is also presented in this paper to emphasise the impact in a real-world context. Results observed indicate that the security constructs examined in this paper affect the average throughput measurements up to three hops.
Keywords: Internet of Things; cryptography; telecommunication security; wireless sensor networks; AEAD security; Internet of Things; WSN; cipher block chaining; control networks WSN; cryptography; device interconnection; end device; message authentication code; networked devices; passive attacks; security construction; security vulnerabilities; simulated environment; software authenticated encryption with associated data; wireless broadcast medium; wireless sensor multihop networks; Communication system security; Mathematical model; Security; Simulation; Throughput; Wireless communication; Wireless sensor networks; AEAD constructs; Networked Control Systems; Wireless Sensor Networks (ID#: 15-8358)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160476&isnumber=7160221
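CCM itself combines AES counter-mode encryption with a CBC-MAC, so reproducing it here would require an AES implementation; instead, the sketch below shows the generic encrypt-then-MAC AEAD pattern that such constructs instantiate, using only the standard library. The toy SHA-256 keystream is purely illustrative - this is not CCM and not for production use:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (illustrative only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def aead_encrypt(key: bytes, nonce: bytes, plaintext: bytes, aad: bytes):
    enc_key = hashlib.sha256(key + b"enc").digest()  # separate keys for
    mac_key = hashlib.sha256(key + b"mac").digest()  # encryption and MAC
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # The tag covers nonce, associated data and ciphertext, as in AEAD.
    tag = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    return ct, tag

def aead_decrypt(key: bytes, nonce: bytes, ct: bytes, aad: bytes,
                 tag: bytes) -> bytes:
    enc_key = hashlib.sha256(key + b"enc").digest()
    mac_key = hashlib.sha256(key + b"mac").digest()
    expect = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")  # reject forged packets
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

key, nonce = os.urandom(32), os.urandom(12)
ct, tag = aead_encrypt(key, nonce, b"sensor reading: 21.5C", b"node-7")
assert aead_decrypt(key, nonce, ct, b"node-7", tag) == b"sensor reading: 21.5C"
```

The per-packet MAC computation and verification shown here is exactly the overhead whose throughput cost the paper measures across hops.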
Chi-Ming; Huai-Kuei Wu, "Study on the Effects of Self-Similar Traffic on the IEEE 802.15.4 Wireless Sensor Networks," in Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, pp. 410-415, 19-20 Aug. 2015. Doi: (not provided)
Abstract: A significant number of previous studies have shown that network traffic frequently exhibits large bursts and possesses self-similar properties. For future applications of wireless sensor networks (WSNs) with a large number of cluster structures, such as the Internet of Things (IoT) and the smart grid, the network traffic should not be assumed to follow a conventional Poisson process. We thus employ an ON/OFF traffic source with heavy-tailed duration distributions in one or both of the states, instead of Poisson traffic, as the asymptotically self-similar traffic for experimenting on the performance of IEEE 802.15.4 WSNs. In this paper, we show the impact of different traffic sources, such as Poisson and Pareto ON/OFF distributions, on the performance of IEEE 802.15.4 WSNs using the ns-2 simulator. For the Pareto ON/OFF traffic, we demonstrate that the packet delay and throughput take on bursty, high values at certain time scales, especially under low traffic load, and that the throughput is no longer bursty as the traffic load increases. Intuitively, the bursty high delay may result in the loss of some important real-time packets. For the Poisson traffic, both the throughput and packet delay appear non-bursty, especially under high traffic load.
Keywords: Pareto distribution; Poisson distribution; Zigbee; delays; pattern clustering; telecommunication traffic; wireless sensor networks; IEEE 802.15.4 wireless sensor network; Internet of Things; IoT; Pareto ON-OFF traffic source distribution; Poisson traffic process; WSN; heavy-tailed distribution; ns2 simulator; packet delay; self-similar network traffic effect; smart grid; Delays; IEEE 802.15 Standard; Load modeling; Media Access Protocol; Telecommunication traffic; Throughput; Wireless sensor networks; IEEE 802.15.4; self-similar traffic; wireless sensor network (WSN) (ID#: 15-8359)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332604&isnumber=7332527
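The Pareto ON/OFF source the paper uses can be reproduced with the standard library's `random.paretovariate`; a small shape parameter (alpha ≤ 2) gives the infinite-variance, heavy-tailed period lengths whose aggregation produces asymptotically self-similar traffic. The parameter values below are illustrative, not the paper's simulation settings:

```python
import random

random.seed(7)  # reproducible sketch

def on_off_durations(alpha: float, n: int) -> list[float]:
    """Draw n ON (or OFF) period lengths from a Pareto law with shape
    alpha; values are >= 1, and for alpha <= 2 the variance is infinite."""
    return [random.paretovariate(alpha) for _ in range(n)]

heavy = on_off_durations(alpha=1.2, n=10_000)   # bursty, self-similar regime
light = on_off_durations(alpha=10.0, n=10_000)  # light-tailed comparison

assert min(heavy) >= 1.0
assert max(heavy) > max(light)  # extreme bursts come from the heavy tail
```

Feeding such period lengths into a packet generator, rather than exponential inter-arrivals, is what distinguishes this workload from the conventional Poisson assumption the abstract argues against.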
Vijayalakshmi, V.; Sharmila, R.; Shalini, R., "Hierarchical Key Management Scheme Using Hyper Elliptic Curve Cryptography in Wireless Sensor Networks," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-5, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219840
Abstract: A Wireless Sensor Network (WSN) is a large-scale network with thousands of tiny sensors and is of utmost importance as it is used in real-time applications. Currently WSNs are required for up-to-the-minute applications including the Internet of Things (IoT), smart cards, smart grids, smart phones and smart cities. However, the greatest issue in sensor networks is secure communication, for which key management is the primary objective. Existing key management techniques have many limitations, such as required prior deployment knowledge, limited transmission range, insecure communication, and node capture by the adversary. The proposed novel Track-Sector Clustering (TSC) and Hyper Elliptic Curve Cryptography (HECC) provide better transmission range and secure communication. In TSC, the overall network is separated into circular tracks and triangular sectors. The Power Aware Routing Protocol (PARP) was used for routing of data in TSC, which reduces delay with an increased packet delivery ratio. Further, for secure routing HECC was implemented with an 80-bit key size, which reduces memory space and computational overhead compared to the existing Elliptic Curve Cryptography (ECC) key management scheme.
Keywords: pattern clustering; public key cryptography; routing protocols; telecommunication power management; telecommunication security; wireless sensor networks; ECC; IOT; Internet of Things; PARP; TSC; WSN; computational overhead reduction; data routing; hierarchical key management scheme; hyper elliptic curve cryptography; memory space reduction; packet delivery ratio; power aware routing protocol; secure communication; smart card; smart city; smart grid; smart phone; track-sector clustering; up-to-the-minute application; wireless sensor network; Convergence; Delays; Elliptic curve cryptography; Real-time systems; Throughput; Wireless sensor networks; Hyper Elliptic Curve Cryptography; Key Management Scheme; Power Aware Routing; Track-Sector Clustering; Wireless Sensor network (ID#: 15-8360)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219840&isnumber=7219823
Aris, A.; Oktug, S.F.; Yalcin, S.B.O., "Internet-Of-Things Security: Denial of Service Attacks," in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, pp. 903-906, 16-19 May 2015. doi: 10.1109/SIU.2015.7129976
Abstract: The Internet of Things (IoT) is a network of sensors, actuators, and mobile and wearable devices, simply things that have processing and communication modules and can connect to the Internet. In a few years' time, billions of such things will start serving in many fields within the concept of the IoT. The self-configuration, autonomous device addition, Internet connection and resource limitation features of the IoT cause it to be highly prone to attacks. Denial of Service (DoS) attacks, which have been targeting communication networks for years, will be among the most dangerous threats to IoT networks. This study aims to analyze and classify the DoS attacks that may target IoT environments. In addition, systems that try to detect and mitigate DoS attacks on the IoT are evaluated.
Keywords: Internet; Internet of Things; actuators; computer network security; mobile computing; sensors; wearable computers; DoS attacks; Internet connection; Internet-of-things security; IoT; actuator; autonomous device addition; communication modules; denial of service attack; mobile device; processing modules; resource limitation; self-configuration; sensor; wearable device; Ad hoc networks; Computer crime; IEEE 802.15 Standards; Internet of things; Wireless communication; Wireless sensor networks; DDoS; DoS; Internet of Things; IoT; network security (ID#: 15-8361)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129976&isnumber=7129794
Peresini, O.; Krajcovic, T., "Internet Controlled Embedded System for Intelligent Sensors and Actuators Operation," in Applied Electronics (AE), 2015 International Conference on, pp. 185-188, 8-9 Sept. 2015. doi: (not provided)
Abstract: Devices compliant with the Internet of Things concept are currently attracting increased interest among users and numerous manufacturers. Our idea is to introduce an intelligent household control system respecting this trend. The primary focus of this work is to propose a new solution for realizing intelligent house actuators that is less expensive, more robust and more secure against intrusion. The heart of the system consists of intelligent modules which are modular, autonomous, decentralized, cheap and easily extensible, with support for encrypted network communication. The proposed solution is open and therefore ready for future improvements and application in the field of the Internet of Things.
Keywords: Internet; Internet of Things; cryptography; embedded systems; home automation; intelligent actuators; intelligent control; Internet controlled embedded system; Internet of Things; actuators operation; encrypted network communication; intelligent house actuators; intelligent household control system; intelligent modules; intelligent sensors; Actuators; Hardware; Protocols; Security; Sensors; Standards; User interfaces; Internet of Things; actuators; decentralized network; embedded hardware; intelligent household (ID#: 15-8362)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301084&isnumber=7301036
Unger, S.; Timmermann, D., "DPWSec: Devices profile for Web Services Security," in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1-6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106961
Abstract: As cyber-physical systems (CPS) build a foundation for visions such as the Internet of Things (IoT) or Ambient Assisted Living (AAL), their communication security is crucial so they cannot be abused for invading our privacy and endangering our safety. In the past years many communication technologies have been introduced for critically resource-constrained devices such as simple sensors and actuators as found in CPS. However, many do not consider security at all or in a way that is not suitable for CPS. Also, the proposed solutions are not interoperable although this is considered a key factor for market acceptance. Instead of proposing yet another security scheme, we looked for an existing, time-proven solution that is widely accepted in a closely related domain as an interoperable security framework for resource-constrained devices. The candidate of our choice is the Web Services Security specification suite. We analysed its core concepts and isolated the parts suitable and necessary for embedded systems. In this paper we describe the methodology we developed and applied to derive the Devices Profile for Web Services Security (DPWSec). We discuss our findings by presenting the resulting architecture for message level security, authentication and authorization and the profile we developed as a subset of the original specifications. We demonstrate the feasibility of our results by discussing the proof-of-concept implementation of the developed profile and the security architecture.
Keywords: Internet; Internet of Things; Web services; ambient intelligence; assisted living; security of data; AAL; CPS; DPWSec; Internet of Things; IoT; ambient assisted living; communication security; cyber-physical system; devices profile for Web services security; interoperable security framework; message level security; resource-constrained devices; Authentication; Authorization; Cryptography; Interoperability; Web services; Applied Cryptography; Authentication; Cyber-Physical Systems (CPS); DPWS; Intelligent Environments; Internet of Things (IoT); Usability (ID#: 15-8363)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106961&isnumber=7106892
Hale, M.L.; Ellis, D.; Gamble, R.; Waler, C.; Lin, J., "SecuWear: An Open Source, Multi-component Hardware/Software Platform for Exploring Wearable Security," in Mobile Services (MS), 2015 IEEE International Conference on, pp. 97-104, June 27 2015-July 2 2015. doi: 10.1109/MobServ.2015.23
Abstract: Wearables are the next big development in the mobile Internet of Things. Operating in a body area network around a smartphone user, they serve a variety of commercial, medical, and personal uses. Whether used for fitness tracking, mobile health monitoring, or as remote controllers, wearable devices can include sensors that collect a variety of data and actuators that provide haptic feedback and unique user interfaces for controlling software and hardware. Wearables are typically wireless and use Bluetooth LE (low energy) to transmit data to a waiting smartphone app. Frequently, apps forward this data onward to online web servers for tracking. Security and privacy concerns abound when wearables capture sensitive data or provide critical functionality. This paper develops a platform, called SecuWear, for conducting wearable security research, collecting data, and identifying vulnerabilities in hardware and software. SecuWear combines open source technologies to enable researchers to rapidly prototype security vulnerability test cases, evaluate them on actual hardware, and analyze the results to understand how best to mitigate problems. The paper includes two types of evaluation in the form of a comparative analysis and an empirical study. The results reveal how several passive observation attacks present themselves in wearable applications and how the SecuWear platform can capture the information needed to identify and combat such attacks.
Keywords: Bluetooth; Internet of Things; body area networks; mobile computing; security of data; Bluetooth LE; SecuWear platform; body area network; mobile Internet of Things; online Web servers; open source multicomponent hardware-software platform; security vulnerability test cases; smartphone user; wearable security; Biomedical monitoring; Bluetooth; Hardware; Mobile communication; Security; Sensors; Trade agreements; Bluetooth low energy; internet of things; man-in-the-middle; security; vulnerability discovery; wearables (ID#: 15-8364)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226677&isnumber=7226653
Youngchoon Park, "Connected Smart Buildings, a New Way to Interact with Buildings," in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 5-5, 9-13 March 2015. doi: 10.1109/IC2E.2015.57
Abstract: Summary form only given. Devices, people, information and software applications rarely live in isolation in modern building management. For example, networked sensors that monitor the performance of a chiller are common, and the collected data are delivered to building automation systems to optimize energy use. Detected possible failures are also handed to facility management staff for repairs. Physical and cyber security services have to be incorporated to prevent improper access to not only HVAC (Heating, Ventilation, Air Conditioning) equipment but also control devices. Harmonizing these connected sensors, control devices, equipment and people is the key to providing more comfortable, safe and sustainable buildings. Nowadays, devices with embedded intelligence and communication capabilities can interact with people directly. Traditionally, a few selected people (e.g., facility managers in the building industry) have access to and program a device with a fixed operating schedule, while the device has very limited connectivity to its operating environment and context. Modern connected devices will learn from and interact with users and other connected things. This is a fundamental shift in communication from unidirectional to bi-directional. A manufacturer will learn how its products and features are being accessed and utilized. An end user, or a device on behalf of a user, can interact and communicate with a service provider or a manufacturer without going through a distributor, on an almost real-time basis. This will require different business strategies and product development behaviors to serve connected customers' demands. Connected things produce an enormous amount of data, which raises many questions and technical challenges in data management, analysis and associated services. In this talk, we brief some of the challenges that we have encountered in developing connected building solutions and services.
More specifically, (1) semantic interoperability requirements among smart sensors, actuators, lighting, security and control and business applications, (2) engineering challenges in managing massively large time-sensitive multimedia data in a cloud at global scale, and (3) security and privacy concerns are presented.
Keywords: HVAC; building management systems; intelligent sensors; HVAC; actuators; building automation systems; building management; business strategy; chiller performance; connected smart buildings; control devices; cyber security services; data management; facility management staffs; heating-ventilation-air conditioning equipment; lighting; networked sensors; product development behaviors; service provider; smart sensors; time sensitive multimedia data; Building automation; Business; Conferences; Intelligent sensors; Security; Building Management; Cloud; Internet of Things (ID#: 15-8365)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092892&isnumber=7092808
Srivastava, P.; Garg, N., "Secure and Optimized Data Storage for IoT Through Cloud Framework," in Computing, Communication & Automation (ICCCA), 2015 International Conference on, pp. 720-723, 15-16 May 2015. doi: 10.1109/CCAA.2015.7148470
Abstract: The Internet of Things (IoT) is the future. With the increasing popularity of the Internet, Internet connectivity in routine devices will soon be common practice. Hence, we write this paper to encourage IoT adoption using cloud computing features. A basic setback of the IoT is the management of the huge quantity of data it produces. In this paper, we suggest a framework with several data compression techniques to store this large amount of data on the cloud while occupying less space, and using AES encryption techniques we also improve the security of this data. The framework also shows the interaction of data with reporting and analytic tools through the cloud. We conclude the paper with some future scopes and possible enhancements of our ideas.
Keywords: Internet of Things; cloud computing; cryptography; data compression; optimisation; storage management; AES encryption technique; Internet of Things; IoT; cloud computing feature; data compression technique; data storage optimization; data storage security; Cloud computing; Encryption; Image coding; Internet of things; Sensors; AES; IoT; actuators; compression; encryption; sensors; trigger (ID#: 15-8366)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148470&isnumber=7148334
Tragos, E.Z.; Foti, M.; Surligas, M.; Lambropoulos, G.; Pournaras, S.; Papadakis, S.; Angelakis, V., "An IoT Based Intelligent Building Management System for Ambient Assisted Living," in Communication Workshop (ICCW), 2015 IEEE International Conference on, pp. 246-252, 8-12 June 2015. doi: 10.1109/ICCW.2015.7247186
Abstract: Ambient Assisted Living (AAL) describes an ICT-based environment that exposes personalized and context-aware intelligent services, thus creating an appropriate experience for the end user to support independent living and improvement of the everyday quality of life of both healthy elderly and disabled people. The social and economic impact of AAL systems has boosted research activities that, combined with the advantages of enabling technologies such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT), can greatly improve the performance and efficiency of such systems. Sensors and actuators inside buildings can create intelligent sensing environments that help gather real-time data on patients, monitor their vital signs and identify abnormal situations that need medical attention. AAL applications might be life critical and therefore have very strict requirements on their performance with respect to the reliability of the devices, the ability of the system to gather data from heterogeneous devices, the timeliness of the data transfer and its trustworthiness. This work presents the functional architecture of SOrBet (Marie Curie IAPP project), which provides a framework for efficiently interconnecting smart devices, equipping them with intelligence that helps automate many of the everyday activities of the inhabitants. SOrBet is a paradigm shift from traditional AAL systems, based on a hybrid architecture including both distributed and centralized functionalities: extensible, self-organising, robust and secure, built on the concept of “reliability by design”, and thus capable of meeting the strict Quality of Service (QoS) requirements of demanding applications such as AAL.
Keywords: Internet of Things; assisted living; building management systems; patient monitoring; quality of service; wireless sensor networks; Internet of Things; IoT based intelligent building management system; SOrBet; ambient assisted living; hybrid architecture; quality of service; wireless sensor networks; Artificial intelligence; Automation; Buildings; Quality of service; Reliability; Security; Sensors (ID#: 15-8367)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247186&isnumber=7247062
Ozvural, G.; Kurt, G.K., "Advanced Approaches for Wireless Sensor Network Applications and Cloud Analytics," in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1-5, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106979
Abstract: Although wireless sensor network applications are still at an early stage of development in the industry, it is obvious that they will become pervasive, and billions of embedded microcomputers will come online for the purposes of remote sensing, actuation and information sharing. According to estimates, there will be 50 billion connected sensors or things by the year 2020. As we develop first-to-market wireless sensor-actuator network devices, we have the chance to identify design parameters, define the technical infrastructure and make an effort to meet scalable system requirements. In this manner, the required research and development activities must involve several research directions, such as massive scaling, creating information and big data, robustness, security, privacy and human-in-the-loop operation. In this study, wireless sensor network and Internet of Things concepts are not only investigated theoretically, but the proposed system is also designed and implemented end-to-end. Low-rate wireless personal area network sensor nodes with random network coding capability are used for remote sensing and actuation. A low-throughput embedded IP gateway node is developed, utilizing both random network coding on the low-rate wireless personal area network side and the low-overhead WebSocket protocol on the cloud communications side. A service-oriented design pattern is proposed for wireless sensor network cloud data analytics.
Keywords: IP networks; Internet of Things; cloud computing; data analysis; microcomputers; network coding; personal area networks; protocols; random codes; remote sensing; service-oriented architecture; wireless sensor networks; Internet of things concept; actuation; cloud communications side; cloud data analytics; design parameter identification; embedded microcomputer; information sharing; low throughput embedded IP gateway; overhead websocket protocol; random network coding capability; remote sensing; service-oriented design pattern; wireless personal area network sensor node; wireless sensor-actuator network device; IP networks; Logic gates; Network coding; Protocols; Relays; Wireless sensor networks; Zigbee (ID#: 15-8368)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106979&isnumber=7106892
Zimmermann, A.; Schmidt, R.; Sandkuhl, K.; Wissotzki, M.; Jugel, D.; Mohring, M., "Digital Enterprise Architecture - Transformation for the Internet of Things," in Enterprise Distributed Object Computing Workshop (EDOCW), 2015 IEEE 19th International, pp. 130-138, 21-25 Sept. 2015. doi: 10.1109/EDOCW.2015.16
Abstract: Excellence in IT is both a driver and a key enabler of the digital transformation. The digital transformation changes the way we live, work, learn, communicate, and collaborate. The Internet of Things (IoT) fundamentally influences today's digital strategies with disruptive business operating models and fast-changing markets. New business information systems are integrating emerging Internet of Things infrastructures and components. With the huge diversity of Internet of Things technologies and products, organizations have to leverage and extend previous Enterprise Architecture efforts to enable business value by integrating Internet of Things architectures. Both architecture engineering and management of current information systems and business models are complex, and currently integrate, besides the Internet of Things, synergistic subjects such as Enterprise Architecture in the context of services & cloud computing, semantic-based decision support through ontologies and knowledge-based systems, big data management, as well as mobility and collaboration networks. To provide adequate decision support for complex business/IT environments, we have to make transparent the impact of business and IT changes over the integral landscape of affected architectural capabilities, like directly and transitively impacted IoT objects, business categories, processes, applications, services, platforms and infrastructures. The paper describes a new metamodel-based approach for integrating Internet of Things architectural objects, which are semi-automatically federated into a holistic Digital Enterprise Architecture environment.
Keywords: Internet of Things; business data processing; cloud computing; information systems; knowledge based systems; ontologies (artificial intelligence); software architecture; Big Data management; IT changes; Internet of Things architectures; Internet of Things components; Internet of Things infrastructures; Internet of Things technologies; IoT-objects; architectural capabilities; architectural objects; architecture engineering; business applications; business categories; business information systems; business infrastructures; business models; business platforms; business processes; business services; business value; cloud computing; collaboration networks; complex business/IT environments; digital enterprise architecture; digital strategies; digital transformation; information systems management; knowledge-based systems; metamodel-based approach; mobility; ontologies; products organizations; semantic-based decision support; Business; Cloud computing; Computational modeling; Computer architecture; Information systems; Internet of things; Security; Digital; Digital Transformation; Internet of Things (ID#: 15-8369)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310681&isnumber=7310651
Gamundani, A.M., "An Impact Review on Internet of Things Attacks," in Emerging Trends in Networks and Computer Communications (ETNCC), 2015 International Conference on, pp. 114-118, 17-20 May 2015. doi: 10.1109/ETNCC.2015.7184819
Abstract: The heterogeneity of devices that can seamlessly connect to each other and be attached to human beings has given birth to a new computing paradigm referred to as the Internet of Things. The connectivity and scalability of such technological waves could be harnessed to improve service delivery in many application areas, as revealed by recent studies on the Internet of Things' interoperability. However, for the envisaged benefits to be yielded from the Internet of Things, there are many security issues to be addressed, which range from application environment security concerns and inbuilt security issues of connection technologies to scalability and manageability issues. Given the increasing number of objects or “things” that can connect to each other unsupervised, the complexity of such a network presents a great concern for both the future Internet's security and its reliable operation. The focus of this paper was to review the impact of some of the attacks attributable to the Internet of Things. A desktop review of work done in this area, using a qualitative methodology, was employed. This research may contribute towards a roadmap for security design and future research on Internet of Things scalability. The deployment of future applications around the Internet of Things may receive valuable insight, as the nature of attacks and their perceived impacts are unveiled and possible solutions could be developed around them.
Keywords: Internet of Things; computer network management; computer network security; open systems; Internet of Things attacks; application environment security; interoperability; manageability issues; network complexity; network connection technology; scalability; security issues; Authentication; Data privacy; Internet of things; Safety; Wireless sensor networks; Attacks; Denial of Service; Internet of Things; Man in the middle; Replay; Security (ID#: 15-8370)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184819&isnumber=7184793
Gendreau, A.A., "Situation Awareness Measurement Enhanced for Efficient Monitoring in the Internet of Things," in Region 10 Symposium (TENSYMP), 2015 IEEE, pp. 82-85, 13-15 May 2015. doi: 10.1109/TENSYMP.2015.13
Abstract: The Internet of Things (IoT) is a heterogeneous network of objects that communicate with each other and their owners over the Internet. In the future, the utilization of distributed technologies in combination with their object applications will result in an unprecedented level of knowledge and awareness, creating new business opportunities and expanding existing ones. However, in this paradigm where almost everything can be monitored and tracked, an awareness of the state of the monitoring systems' situation will be important. Given the anticipated scale of business opportunities resulting from new object monitoring and tracking capabilities, IoT adoption has not been as fast as expected. The reason for the slow growth of application objects is the immaturity of the standards, which can be partly attributed to their unique system requirements and characteristics. In particular, the IoT standards must exhibit efficient self-reliant management and monitoring capability, which in a hierarchical topology is the role of cluster heads. IoT standards must be robust, scalable, adaptable, reliable, and trustworthy. These criteria are predicated upon the limited lifetime, and the autonomous nature, of wireless personal area networks (WPANs), of which wireless sensor networks (WSNs) are a major technological solution and research area in the IoT. In this paper, the energy efficiency of a self-reliant management and monitoring WSN cluster head selection algorithm, previously used for situation awareness, was improved upon by sharing particular established application cluster heads. This enhancement saved energy and reporting time by reducing the path length to the monitoring node. Also, a proposal to enhance the risk assessment component of the model is made. We demonstrate through experiments that when benchmarked against both a power and randomized cluster head deployment, the proposed enhancement to the situation awareness metric used less power. 
Potentially, this approach can be used to design a more energy-efficient cluster-based management and monitoring algorithm for the advancement of security, e.g. intrusion detection systems (IDSs), and other standards in the IoT.
Keywords: Internet of Things; personal area networks; security of data; wireless sensor networks; Internet of Things; WPAN; WSN; distributed technologies; efficient self-reliant management and monitoring capability; heterogeneous network; object monitoring and tracking capabilities; situation awareness measurement; situation awareness metric; wireless personal area networks; wireless sensor networks; Energy efficiency; Internet of things; Monitoring; Security; Standards; Wireless sensor networks; Internet of Things; Intrusion detection system; Situational awareness; Wireless sensor networks (ID#: 15-8371)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166243&isnumber=7166213
Kotenko, I.; Saenko, I.; Skorik, F.; Bushuev, S., "Neural Network Approach to Forecast the State of the Internet of Things Elements," in Soft Computing and Measurements (SCM), 2015 XVIII International Conference on, pp. 133-135, 19-21 May 2015. doi: 10.1109/SCM.2015.7190434
Abstract: The paper presents a method to forecast the states of elements of the Internet of Things based on an artificial neural network. The proposed architecture of the neural network is a combination of a multilayered perceptron and a probabilistic neural network, and for this reason it provides high efficiency of decision-making. Results of an experimental assessment of the proposed neural network on the accuracy of forecasting the states of elements of the Internet of Things are discussed.
Keywords: Internet of Things; decision making; multilayer perceptrons; neural net architecture; probability; Internet of Things; artificial neural network; decision making; multilayered perceptron; probabilistic neural network; Artificial neural networks; Computer architecture; Forecasting; Internet of things; Probabilistic logic; Security; internet of things; multilayered perceptron; neural network; state monitoring (ID#: 15-8372)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7190434&isnumber=7190390
Minch, R.P., "Location Privacy in the Era of the Internet of Things and Big Data Analytics," in System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 1521-1530, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.185
Abstract: Location information is generated in large quantities in the Internet of Things and becomes a major component of the big data phenomenon. This results in privacy issues involving sensing, identification, storage, processing, sharing, and use of this information in technical, social, and legal contexts. These issues must be addressed if the IoT is to be widely adopted and accepted. Theory will need to be developed and tested, and new research questions will need to be investigated. This exploratory research begins to identify, classify, and describe these issues and questions.
Keywords: Big Data; Internet of Things; data privacy; law; mobile computing; social aspects of automation; Internet of Things; IoT; big data analytics; legal context; location privacy; social context; technical context; Big data; Context; Data privacy; Internet of things; Privacy; Security; Sensors; Big Data; Data Analytics; Internet of Things; Location Privacy (ID#: 15-8373)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069994&isnumber=7069647
Zawoad, S.; Hasan, R., "FAIoT: Towards Building a Forensics Aware Eco System for the Internet of Things," in Services Computing (SCC), 2015 IEEE International Conference on, pp. 279-284, June 27 2015-July 2 2015. doi: 10.1109/SCC.2015.46
Abstract: The Internet of Things (IoT) involves numerous connected smart things with different technologies and communication standards. While the IoT opens new opportunities in various fields, it introduces new challenges in the field of digital forensics investigations. The existing tools and procedures of digital forensics cannot meet the highly distributed and heterogeneous infrastructure of the IoT. Forensics investigators will face challenges in identifying necessary pieces of evidence from the IoT environment, and in collecting and analyzing that evidence. In this article, we propose the first working definition of IoT forensics and systematically analyze the IoT forensics domain to explore the challenges and issues in this special branch of digital forensics. We propose a Forensics-aware IoT (FAIoT) model for supporting reliable forensics investigations in the IoT environment.
Keywords: Internet of Things; digital forensics; FAIoT; IoT forensics; digital forensics; forensics aware Eco system for the Internet of Things; reliable forensics; Digital forensics; Hospitals; Internet of things; Object recognition; Security; Forensic Investigation; IoT Forensics; IoT Security (ID#: 15-8374)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207364&isnumber=7207317
Zegzhda, D.; Stepanova, T., "Achieving Internet of Things Security Via Providing Topological Sustainability," in Science and Information Conference (SAI), 2015, pp. 269-276, 28-30 July 2015. doi: 10.1109/SAI.2015.7237154
Abstract: The Internet of Things is a fast-paced global phenomenon based on the concept of heterogeneous networks. Modern heterogeneous networks are characterized by hardly predictable behaviour, hundreds of parameters of network nodes and connections, and the lack of a single basis for the development of control methods and algorithms. In this paper the authors propose a basic theoretical framework that will allow achieving IoT security by providing its topological sustainability in order to confront security threats aimed at disrupting, degrading or destroying IoT components and services.
Keywords: Internet of Things; security of data; topology; Internet of Things security; IoT security; security threat; topological sustainability; Automata; Internet of things; Network topology; Security; Sensors; Standards; Topology; controllability; internet of things; security modeling; topological sustainability (ID#: 15-8375)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237154&isnumber=7237120
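Topological sustainability of the kind Zegzhda and Stepanova describe hinges on how much of a network stays mutually reachable as nodes are disrupted. As an illustration only (not the paper's framework), a toy sustainability metric can be computed with a breadth-first search over an adjacency list; the mesh below is invented:

```python
from collections import deque

def connected_fraction(adj, removed=frozenset()):
    """Fraction of surviving nodes in the largest connected component."""
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return 0.0
    best, seen = 0, set()
    for start in nodes:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in removed and v not in comp:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        best = max(best, len(comp))
    return best / len(nodes)

# A small IoT mesh: node 2 is a cut vertex.
mesh = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(connected_fraction(mesh))               # 1.0
print(connected_fraction(mesh, removed={2}))  # 0.5 (two islands of 2)
```

Losing the cut vertex halves the reachable fraction, which is the kind of degradation a sustainability-aware topology would be designed to avoid.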
Panwar, M.; Kumar, A., "Security for IoT: An Effective DTLS with Public Certificates," in Advances in Computer Engineering and Applications (ICACEA), 2015 International Conference on, pp. 163-166, 19-20 March 2015. doi: 10.1109/ICACEA.2015.7164688
Abstract: The IoT (Internet of Things) is a scenario in which things, people, animals or any other objects can be identified uniquely and have the ability to send or receive data over a network. IPv6 has increased the address space enormously, which favors the allocation of IP addresses to a wide range of objects. In the near future, the number of things connected to the internet will be around 40 million. In this scenario, the IoT is expected to play a vital role in business, data and social processes, in which devices interact among themselves and with their surroundings by exchanging information [5]. If this information carries sensitive data, then security is an aspect that can never be ignored. This paper discusses some existing security mechanisms for the IoT and an effective DTLS mechanism that makes DTLS security more robust by employing public certificates for authentication. A certificate authority can issue digital certificates to both the client and the server, increasing the effectiveness of this communication. This work aims to introduce a CA into the communication and to provide results showing its improved performance in contrast to pre-shared key communication.
Keywords: IP networks; Internet of Things; computer network security; DTLS mechanism; DTLS security; IP address; IPV6; Internet of Things; IoT security; authentication; interchanging information; public certificates; receive data; security mechanism; Authentication; Internet of things; Protocols; Public key; Servers; Certificate Authority (CA); Datagram Transport Layer Security (DTLS); Internet of Things (IoT) (ID#: 15-8376)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164688&isnumber=7164643
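The certificate-based handshake the paper proposes relies on a CA vouching for each endpoint's key. The sketch below is a deliberately simplified stand-in: real DTLS uses X.509 certificates and asymmetric signatures, whereas this toy uses an HMAC under the CA's secret purely to illustrate the issue/verify trust flow; all names are invented:

```python
import hmac, hashlib

class ToyCA:
    """Toy certificate authority: binds an identity to a public key.
    Real deployments use X.509 and asymmetric signatures; HMAC stands
    in here only to show the trust flow."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def issue(self, identity: str, pubkey: bytes) -> dict:
        body = identity.encode() + b"|" + pubkey
        sig = hmac.new(self._secret, body, hashlib.sha256).hexdigest()
        return {"identity": identity, "pubkey": pubkey, "sig": sig}

    def verify(self, cert: dict) -> bool:
        body = cert["identity"].encode() + b"|" + cert["pubkey"]
        expected = hmac.new(self._secret, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cert["sig"])

ca = ToyCA(b"ca-secret")
cert = ca.issue("sensor-42", b"pubkey-bytes")
print(ca.verify(cert))          # True
cert["identity"] = "attacker"   # tampering breaks the binding
print(ca.verify(cert))          # False
```

The point of the exercise is the binding: unlike a pre-shared key, a forged or altered certificate fails verification without the peers ever having met.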
Intrusion Tolerance 2015
Intrusion tolerance refers to a fault-tolerant design approach to defending communications, computer and other information systems against malicious attack. Rather than detecting all anomalies, tolerant systems only identify those intrusions which lead to security failures. The topic relates to the Science of Security issues of resilience and composability. This collection cites publications of interest addressing new methods of building secure fault tolerant systems. All were presented in 2015.
Zuo Chen; Xue Li; Bin Lv; Mengyuan Jia, “A Self-Adaptive Wireless Sensor Network Coverage Method for Intrusion Tolerance Based on Particle Swarm Optimization and Cuckoo Search,” in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol.1, no., pp. 1298-1305, 20-22 Aug. 2015. doi:10.1109/Trustcom.2015.521
Abstract: The sensor network coverage optimization process is vulnerable to attack or invasion. Ensuring secure communication and efficient, reliable coverage while a wireless sensor network is under attack is therefore a major problem. In this paper, by combining a trust management model with the heuristic optimizers Particle Swarm Optimization (PSO) and Cuckoo Search (CS), we propose a sensor network security coverage method based on trust management for intrusion tolerance. The method first evaluates the trust value of the nodes from their behavior, and then adjusts the perception radius and decision-making radius. Finally, PSO and CS are combined in serial optimization to achieve efficient adaptive coverage with intrusion tolerance. Simulations comparing the method with a range of WSN coverage mechanisms show that it has certain performance advantages and, in the case of an invasion, can effectively protect the safety of the overlay network. The simulation results show the effectiveness of the algorithm.
Keywords: particle swarm optimisation; search problems; telecommunication network management; telecommunication security; trusted computing; wireless sensor networks; CS serial optimization; Cuckoo search; PSO; WSN covering mechanism; cuckoo search; decision-making radius; heuristic optimization particle swarm optimization; intrusion tolerance; overlay network; perception radius; reliable coverage; secure communications; self-adaptive wireless sensor network coverage method; sensor network coverage optimization process; sensor network security coverage method; trust management model; Approximation algorithms; Clustering algorithms; Monitoring; Optimization; Reliability; Security; Wireless sensor networks; invasive tolerant; network coverage; trust value; wireless sensor network (ID#: 15-8324)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345429&isnumber=7345233
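The paper's serial PSO+CS optimizer is not reproduced here, but the PSO half can be sketched in a few lines. The fitness function below (squared distance from one sensor to three fixed targets, a stand-in for a coverage cost) and all parameter values are invented for illustration:

```python
import random

def pso(fitness, dim, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimizer (minimization) over [0, 10]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * rng.random() * (pbest[i][d] - p[d])
                             + 1.4 * rng.random() * (gbest[d] - p[d]))
                p[d] = min(10.0, max(0.0, p[d] + vel[i][d]))
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return gbest

# Stand-in coverage cost: one sensor serving targets at 2, 4 and 9.
targets = [2.0, 4.0, 9.0]
cost = lambda x: sum((x[0] - t) ** 2 for t in targets)
best = pso(cost, dim=1)
print(round(best[0], 2))   # converges near 5.0, the centroid of the targets
```

In the paper this kind of swarm update would run over node positions and radii, with trust values feeding the fitness; here the quadratic cost merely makes the convergence easy to check.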
Nascimento, D.; Correia, M., “Shuttle: Intrusion Recovery for PaaS,” in Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, vol., no., pp. 653-663, June 29 2015-July 2 2015. doi:10.1109/ICDCS.2015.72
Abstract: The number of applications being deployed using the Platform as a Service (PaaS) cloud computing model is increasing. Despite the security controls implemented by cloud service providers, we expect intrusions to strike such applications. We present Shuttle, a novel intrusion recovery service. Shuttle recovers from intrusions in applications deployed on PaaS platforms. Our approach allows undoing changes to the state of PaaS applications due to intrusions, without losing the effect of legitimate operations performed after the intrusions take place. We combine a record-and-replay approach with the elasticity provided by cloud offerings to recover applications deployed on various instances and backed by distributed databases. The service loads a database snapshot taken before the intrusion and replays subsequent requests, as much in parallel as possible, while continuing to execute incoming requests. We present an experimental evaluation of Shuttle on Amazon Web Services. We show Shuttle can replay 1 million requests in 10 minutes and that it can double the number of requests replayed per second by increasing the number of application servers from 1 to 3.
Keywords: Web services; cloud computing; distributed databases; security of data; Amazon Web services; PaaS platforms; Shuttle; application servers; cloud computing model; cloud service providers; database snapshot; distributed databases; intrusion recovery service; platform as a service; record-and-replay approach; security controls; time 10 min; Computational modeling; Distributed databases; Elasticity; Security; Servers; Software; Cloud Computing; Dependability; Distributed Database Systems; Intrusion Recovery; Intrusion Tolerance; Platform as a Service (ID#: 15-8325)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164950&isnumber=7164877
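Shuttle's core idea (snapshot, then selective replay of the request log) can be illustrated with a toy key-value store. This sketch is not Shuttle's implementation; the request IDs and single-machine store are invented for illustration:

```python
class ReplayStore:
    """Toy record-and-replay recovery: every operation is logged; recovery
    reloads a snapshot and replays the suffix of the log, skipping the
    operations attributed to the intrusion."""
    def __init__(self):
        self.state, self.log = {}, []

    def put(self, req_id, key, value):
        self.log.append((req_id, key, value))
        self.state[key] = value

    def snapshot(self):
        return dict(self.state), len(self.log)

    def recover(self, snapshot, malicious_ids):
        state, start = dict(snapshot[0]), snapshot[1]
        for req_id, key, value in self.log[start:]:
            if req_id not in malicious_ids:   # drop only the intrusion's effects
                state[key] = value
        self.state = state

store = ReplayStore()
store.put("r1", "balance", 100)
snap = store.snapshot()
store.put("r2", "balance", 0)      # intrusion wipes the balance
store.put("r3", "note", "legit")   # legitimate later request
store.recover(snap, malicious_ids={"r2"})
print(store.state)   # {'balance': 100, 'note': 'legit'}
```

The intrusion's write is undone while the legitimate request that arrived after it is preserved, which is exactly the property the abstract claims for Shuttle.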
Ghabri, A.; Bellalouna, M., “Wireless Sensor Networks Modeling as a Probabilistic Combinatorial Optimization Problem,” in Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015 16th IEEE/ACIS International Conference on, vol., no., pp. 1-5, 1-3 June 2015. doi:10.1109/SNPD.2015.7176277
Abstract: Wireless sensor networks are considered a new technology that has appeared due to technological advances in powerful processors, wireless communication protocols and smart sensors. Because of their sensitivity, several research projects have sought solutions for wireless sensor networks in the presence of intrusions and failures. In fact, a sensor network must be able to maintain its functionality without interruptions caused by sensor failures. This problem of fault tolerance has seen great significance among various fields of research in these networks. The main idea presented in this paper is that combinatorial optimization provides methods applicable in the context of wireless sensor networks; the function to be optimized can be the energy consumed during communications, the covered distance, or the routing path cost during data transmission to the sink. Fault-tolerant protocols and approaches must then be employed to ensure reliability and to allow choosing the best paths for routing information from the source to the collector. In this paper, a theoretical model of a probabilistic combinatorial optimization problem over wireless sensor networks is explored.
Keywords: combinatorial mathematics; data communication; fault tolerance; optimisation; probability; routing protocols; telecommunication network reliability; telecommunication power management; wireless sensor networks; data transmission reliability; energy consumption; fault tolerance problem; fault tolerant protocol; probabilistic combinatorial optimization problem; routing path; sensor failure; smart sensor intrusion; wireless communication protocol; wireless sensor network model; Fault tolerance; Fault tolerant systems; Optimization; Probabilistic logic; Routing; Routing protocols; Wireless sensor networks; function; intrusions; modeling; optimization (ID#: 15-8326)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176277&isnumber=7176160
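A concrete instance of the probabilistic combinatorial optimization the authors describe is an a priori route whose nodes are only present with some probability (failed nodes are skipped). The exact enumeration below is an illustrative sketch, feasible only for small n; the points, probability, and metric are invented:

```python
from itertools import product

def expected_route_cost(points, p_present, dist):
    """Exact expected length of an a priori route: each point is present
    independently with probability p_present; absent points are skipped
    and survivors are visited in the fixed route order."""
    n = len(points)
    total = 0.0
    for mask in product([0, 1], repeat=n):
        prob = 1.0
        for bit in mask:
            prob *= p_present if bit else (1 - p_present)
        present = [pt for pt, bit in zip(points, mask) if bit]
        cost = sum(dist(a, b) for a, b in zip(present, present[1:]))
        total += prob * cost
    return total

pts = [0.0, 1.0, 3.0, 4.0]   # sensors on a line; route in index order
d = lambda a, b: abs(a - b)
print(round(expected_route_cost(pts, 0.9, d), 3))   # 3.758
```

An optimizer over such an objective would search for the a priori order minimizing this expectation rather than the deterministic path length, which is the gap between classical and probabilistic combinatorial optimization.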
Godefroy, Erwan; Totel, Eric; Hurfin, Michel; Majorczyk, Frédéric, “Generation and Assessment of Correlation Rules to Detect Complex Attack Scenarios,” in Communications and Network Security (CNS), 2015 IEEE Conference on, vol., no., pp. 707-708, 28-30 Sept. 2015. doi:10.1109/CNS.2015.7346896
Abstract: Information systems can be targeted by different types of attacks. Some are easily detected (like a DDoS targeting the system) while others are more stealthy and consist of successive attack steps that compromise different parts of the targeted system. The alerts referring to detected attack steps are often hidden in a tremendous number of notifications that include false alarms. Alert correlators use correlation rules (which can be explicit, implicit or semi-explicit [3]) to solve this problem by extracting complex relationships between the different generated events and alerts. On the other hand, producing maintainable, complete and accurate correlation rules specifically adapted to an information system is very difficult work. We propose an approach that, given proper input information, can build a complete and system-dependent set of correlation rules derived from a high-level attack scenario. We then evaluate the applicability of this method by applying it to a real system and, in a second phase, assessing its fault tolerance in a simulated environment.
Keywords: Correlation; Correlators; Intrusion detection; Knowledge based systems; Observers; Sensors; Software; Alert correlation; Security and protection (ID#: 15-8327)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346896&isnumber=7346791
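A minimal flavor of such a correlation rule, matching an ordered sequence of attack steps inside a noisy alert stream, can be sketched as follows. The event names are invented, and real correlators additionally handle time windows, attribute matching, and partial orders:

```python
def correlate(events, rule):
    """True if the alert stream contains the rule's attack steps in order
    (a minimal semi-explicit correlation rule). The `in` test advances the
    iterator, so each step must occur after the previous match."""
    it = iter(events)
    return all(step in it for step in rule)

stream = ["portscan", "login-fail", "bruteforce", "login-ok", "exfiltration"]
scenario = ["portscan", "bruteforce", "exfiltration"]
print(correlate(stream, scenario))                       # True
print(correlate(stream, ["exfiltration", "portscan"]))   # False: wrong order
```

The value of generated (rather than hand-written) rules is precisely that such scenario sequences can be derived systematically from a high-level attack description instead of being maintained by hand.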
Chen, Ing-Ray; Mitchell, Robert; Cho, Jin-Hee, “On Modeling of Adversary Behavior and Defense for Survivability of Military MANET Applications,” in Military Communications Conference, MILCOM 2015 - 2015 IEEE, vol., no., pp. 629-634, 26-28 Oct. 2015. doi:10.1109/MILCOM.2015.7357514
Abstract: In this paper we develop a methodology and report preliminary results for modeling attack/defense behaviors for achieving high survivability of military mobile ad hoc networks (MANETs). Our methodology consists of three steps. The first step is to model the adversary behavior of capture attackers and inside attackers, who can dynamically and adaptively trigger the best attack strategies while avoiding detection and eviction. The second step is to model the defense behavior of defenders utilizing intrusion detection and tolerance strategies to reactively and proactively counter dynamic adversary behavior. We leverage game theory to model attack/defense dynamics, with the players being the attackers/defenders, the actions being the attack/defense strategies identified, and the payoff for each outcome being related to system survivability. The third and final step is to identify and apply proper solution techniques that can effectively and efficiently analyze attack/defense dynamics as modeled by game theory, guiding the creation of effective defense strategies for assuring high survivability in military MANETs. The end product is a tool capable of analyzing a myriad of attacker behaviors and assessing the effectiveness of adaptive defense strategies that incorporate attack/defense dynamics.
Keywords: Adaptation models; Analytical models; Game theory; Intrusion detection; Mathematical model; Mobile ad hoc networks; Vehicle dynamics; adversary modeling; defense behavior modeling; mobile ad hoc networks; reliability; survivability (ID#: 15-8328)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357514&isnumber=7357245
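The game-theoretic step can be illustrated with a toy two-player survivability game: enumerate the pure strategy profiles and keep those where neither the attacker nor the defender can improve unilaterally. The payoff numbers below are invented, not taken from the paper:

```python
def pure_nash(payoff):
    """payoff[(a, d)] = (attacker_utility, defender_utility).
    Returns pure-strategy Nash equilibria by enumeration."""
    attacks = sorted({a for a, _ in payoff})
    defenses = sorted({d for _, d in payoff})
    eq = []
    for a in attacks:
        for d in defenses:
            ua, ud = payoff[(a, d)]
            if (all(payoff[(a2, d)][0] <= ua for a2 in attacks)      # attacker best response
                    and all(payoff[(a, d2)][1] <= ud for d2 in defenses)):  # defender best response
                eq.append((a, d))
    return eq

# Toy survivability game: capture vs. insider attack, IDS vs. redundancy.
payoff = {
    ("capture", "ids"):        (1, 3),
    ("capture", "redundancy"): (2, 2),
    ("insider", "ids"):        (0, 4),
    ("insider", "redundancy"): (3, 1),
}
print(pure_nash(payoff))   # [('capture', 'ids')]
```

In the paper's setting the payoffs would be derived from survivability estimates, and the equilibrium identifies the defense strategy worth deploying against a rational adversary.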
Eskandari, R.; Shajari, M.; Asadi, A., “Automatic Signature Generation for Polymorphic Worms by Combination of Token Extraction and Sequence Alignment Approaches,” in Information and Knowledge Technology (IKT), 2015 7th Conference on, vol., no., pp. 1-6, 26-28 May 2015. doi:10.1109/IKT.2015.7288733
Abstract: As modern worms spread quickly, any countermeasure based on human reaction is barely fast enough to thwart the threat. Moreover, because polymorphic worms generate mutated instances, they are more complex than non-mutating ones. Currently, content-based signature generation for polymorphic worms is a challenge for network security. Several signature classes have been proposed for polymorphic worms. Although previously proposed schemes consider patterns such as 1-byte invariants and distance restrictions, they can handle neither large payloads nor large pools of worm instances. Moreover, they are prone to noise-injection attacks. We propose a method that combines two approaches to creating a polymorphic worm signature in a new way that avoids the limitations of both. The proposed signature generation scheme is based on token extraction and multiple sequence alignment, widely used in bioinformatics. This approach provides speed, accuracy, and flexibility in terms of noise tolerance. The evaluations demonstrate these claims.
Keywords: invasive software; automatic signature generation scheme; content-based signature generation; noise injection attack; polymorphic worm signature; sequence alignment approach; token extraction; Bioinformatics; Biomedical monitoring; Computers; Grippers; Intrusion detection; Monitoring; Protocols; Automatic signature generation; Multiple sequence alignment; Polymorphic worm; Regular expressions (ID#: 15-8329)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288733&isnumber=7288662
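The token-extraction half of such a scheme amounts to finding invariant byte sequences shared by every captured worm instance. A brute-force sketch follows; the payloads are invented, and real systems use suffix trees or sequence alignment to scale:

```python
def common_tokens(payloads, min_len=4):
    """Substrings of length >= min_len that appear in every payload,
    reduced to maximal ones: the invariant tokens a content signature
    would be built from."""
    base = payloads[0]
    candidates = {base[i:i + n]
                  for n in range(min_len, len(base) + 1)
                  for i in range(len(base) - n + 1)}
    shared = {t for t in candidates if all(t in p for p in payloads)}
    # keep only tokens not contained in a longer shared token
    return sorted(t for t in shared
                  if not any(t != u and t in u for u in shared))

instances = [
    "GET /a.php?x=AAAA HTTP/1.0 payload#1",
    "GET /b.php?x=BBBB HTTP/1.0 payload#2",
]
print(common_tokens(instances))
```

The mutated fields (file name, filler bytes, instance counter) drop out, leaving tokens such as "GET /" and ".php?x=" that survive polymorphism; the alignment stage of the paper then orders and spaces such tokens into a full signature.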
Policy Analysis 2015
Policy-based access controls and security policies are intertwined in most commercial systems. Analytics use abstraction and reduction to improve policy-based security. For the Science of Security community, policy-based governance is one of the five Hard Problems. The work cited here was presented in 2015.
Aldini, A.; Seigneur, J.-M.; Lafuente, C.B.; Titi, X.; Guislain, J., "Formal Modeling and Verification of Opportunity-enabled Risk Management," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 676-684, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.434
Abstract: With the advent of the Bring-Your-Own-Device (BYOD) trend, mobile work is achieving a widespread diffusion that challenges the traditional view of security standard and risk management. A recently proposed model, called opportunity-enabled risk management (OPPRIM), aims at balancing the analysis of the major threats that arise in the BYOD setting with the analysis of the potential increased opportunities emerging in such an environment, by combining mechanisms of risk estimation with trust and threat metrics. Firstly, this paper provides a logic-based formalization of the policy and metric specification paradigm of OPPRIM. Secondly, we verify the OPPRIM model with respect to the socio-economic perspective. More precisely, this is validated formally by employing tool-supported quantitative model checking techniques.
Keywords: formal specification; formal verification; mobile computing; risk management; security of data; BYOD trend; OPPRIM model; bring-your-own-device; formal modeling; formal verification; logic-based formalization; metric specification paradigm; mobile work; opportunity-enabled risk management; risk management; security standard; socio-economic perspective; threat metric; tool-supported quantitative model checking techniques; trust metric; Access control; Companies; Measurement; Mobile communication; Real-time systems; Risk management; BYOD; model checking; opportunity analysis; risk management (ID#: 15-8498)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345342&isnumber=7345233
Choudhury, S.; Bhowal, A., "Comparative Analysis of Machine Learning Algorithms Along with Classifiers for Network Intrusion Detection," in Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2015 International Conference on, pp. 89-95, 6-8 May 2015. doi: 10.1109/ICSTM.2015.7225395
Abstract: Intrusion detection is one of the challenging problems encountered by the modern network security industry. A network has to be continuously monitored to detect policy violations or suspicious traffic. So an intrusion detection system needs to be developed which can monitor the network for any harmful activities and report results to the management authority. Data mining can play a massive role in the development of a system which can detect network intrusion. Data mining is a technique through which important information can be extracted from huge data repositories. In order to spot intrusions, the traffic created in the network can be broadly categorized into two categories: normal and anomalous. In this paper, several classification techniques and machine learning algorithms have been considered to categorize the network traffic. Among the classification techniques, we have found nine suitable classifiers: BayesNet, Logistic, IBK, J48, PART, JRip, Random Tree, Random Forest and REPTree. Among the machine learning algorithms, we have worked on Boosting, Bagging and Blending (Stacking) and compared their accuracies as well. The comparison of these algorithms has been performed using the WEKA tool according to certain performance metrics. Simulation of these classification models has been performed using 10-fold cross-validation, on an NSL-KDD based data set.
Keywords: data mining; learning (artificial intelligence); pattern classification; security of data; BayesNet classifiers; IBK classifiers; J48 classifiers; JRip classifiers; NSL-KDD based data set; PART classifiers; REPTree classifiers; WEKA tool; classification techniques; data mining; data repository; logistic classifiers; machine learning algorithms; management authority; network intrusion detection; network security industry; network traffic; policy violation detection; random forest classifiers; random tree classifiers; Accuracy; Classification algorithms; Intrusion detection; Logistics; Machine learning algorithms; Prediction algorithms; Training; classification; data mining; intrusion detection; machine learning; network (ID#: 15-8499)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225395&isnumber=7225373
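The evaluation protocol used here (k-fold cross-validation over labeled traffic) can be sketched without WEKA. The nearest-centroid classifier and the one-dimensional "packet-rate" feature below are invented simplifications, standing in for the paper's nine classifiers and the NSL-KDD features:

```python
import random
from statistics import mean

def nearest_centroid_predict(train, x):
    """Classify x by the closest class centroid (1-D toy feature)."""
    cents = {}
    for label in {lbl for _, lbl in train}:
        cents[label] = mean(v for v, lbl in train if lbl == label)
    return min(cents, key=lambda lbl: abs(cents[lbl] - x))

def k_fold_accuracy(data, k=10, seed=0):
    """Shuffle, split into k folds, train on k-1 and test on the held-out fold."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test = folds[i]
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        hits = sum(nearest_centroid_predict(train, x) == y for x, y in test)
        accs.append(hits / len(test))
    return mean(accs)

# Toy traffic samples: (packet-rate feature, class); classes well separated.
normal = [(r, "normal") for r in range(10, 20)]
attack = [(r, "anomalous") for r in range(80, 90)]
print(k_fold_accuracy(normal + attack))   # 1.0
```

On real traffic the classes overlap, which is exactly why the paper compares nine classifiers and three ensemble methods rather than relying on one separator.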
Caramujo, J.; Rodrigues Da Silva, A.M., "Analyzing Privacy Policies Based on a Privacy-Aware Profile: The Facebook and LinkedIn Case Studies," in Business Informatics (CBI), 2015 IEEE 17th Conference on, vol. 1, pp. 77-84, 13-16 July 2015. doi: 10.1109/CBI.2015.44
Abstract: The regular use of social networking websites and applications encompasses the collection and retention of personal and very often sensitive information about users. This information needs to remain private, and each social network has a privacy policy that describes in depth how users' information is managed and disclosed. Problems arise when the development of new systems and applications includes integration with social networks. The lack of a clear understanding and a precise mechanism to enforce the statements described in privacy policies can compromise the development and adaptation of these statements. This paper proposes the extension and validation of a UML profile for privacy-aware systems. The goal of this approach is to provide a better understanding of the different privacy-related requirements, improving privacy policy enforcement when developing systems or applications integrated with social networks. Additionally, to illustrate the potential of this profile, the paper presents and discusses its application to two real-world case studies, the Facebook and LinkedIn policies, which are well structured and represented through two respective Excel files.
Keywords: Unified Modeling Language; computer network security; data privacy; information management; social networking (online); Excel file; Facebook; LinkedIn; UML profile; privacy aware profile; privacy aware system; privacy profile analysis; social networking Websites; user information management; Business; Conferences; Informatics; Facebook; LinkedIn; Privacy; Requirements; System; UML profile; integration (ID#: 15-8500)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7264718&isnumber=7264698
Daoudagh, S.; Lonetti, F.; Marchetti, E., "Assessment of Access Control Systems Using Mutation Testing," in TEchnical and LEgal aspects of data pRivacy and SEcurity, 2015 IEEE/ACM 1st International Workshop on, pp. 8-13, 18-18 May 2015. doi: 10.1109/TELERISE.2015.10
Abstract: In modern pervasive applications, it is important to validate access control mechanisms that are usually defined by means of the standard XACML language. Mutation analysis has been applied on access control policies for measuring the adequacy of a test suite. In this paper, we present a testing framework aimed at applying mutation analysis at the level of the Java based policy evaluation engine. A set of Java based mutation operators is selected and applied to the code of the Policy Decision Point (PDP). A first experiment shows the effectiveness of the proposed framework in assessing the fault detection of XACML test suites and confirms the efficacy of the application of code-based mutation operators to the PDP.
Keywords: Java; authorisation; program diagnostics; program testing; ubiquitous computing; Java based mutation operators; Java based policy evaluation engine; PDP; access control system assessment; code-based mutation operators; fault detection; mutation testing analysis; policy decision point code; standard XACML language; Access control; Engines; Fault detection; Java; Proposals; Sun; Testing (ID#: 15-8501)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182463&isnumber=7182453
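Mutation analysis of a policy decision point can be miniaturized as follows: mutate a comparison operator in a toy PDP, run the test suite against each mutant, and count the kills. The policy, mutants, and requests below are invented; the surviving `==` mutant shows how mutation analysis exposes a gap in the suite (no test with a clearance level strictly above the threshold):

```python
import operator

def can_access(role, level, threshold):
    """Toy PDP rule: permit admins, else require clearance level >= threshold."""
    return role == "admin" or level >= threshold

def make_mutant(cmp):
    """Mutant PDP with the >= comparison replaced by `cmp`."""
    return lambda role, level, threshold: role == "admin" or cmp(level, threshold)

# A small request suite: (request attributes, expected decision).
tests = [
    (("admin", 0, 5), True),
    (("user", 5, 5), True),
    (("user", 4, 5), False),
]

mutants = {">": operator.gt, "<=": operator.le, "==": operator.eq}
killed = sum(
    any(make_mutant(cmp)(*args) != want for args, want in tests)
    for cmp in mutants.values()
)
# The '==' mutant survives: no test exercises level > threshold.
print(f"mutation score: {killed}/{len(mutants)}")   # mutation score: 2/3
```

Adding a request like ("user", 6, 5) with expected decision True would kill the surviving mutant, which is precisely the test-suite improvement mutation scores are meant to drive.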
He-Ming Ruan; Ming-Hwa Tsai; Yen-Nun Huang; Yen-Hua Liao; Chin-Laung Lei, "Discovery of De-identification Policies Considering Re-identification Risks and Information Loss," in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, pp. 69-76, 24-26 May 2015. doi: 10.1109/AsiaJCIS.2015.23
Abstract: In data analysis, it is always a tough task to strike a balance between privacy and the applicability of the data. Due to the demand for individual privacy, data are more or less obscured before being released or outsourced, to avoid possible privacy leakage. This process is called de-identification. The two most important aspects of a de-identification policy are the re-identification risk and the information loss. In this paper, we introduce a novel policy searching method to efficiently find proper de-identification policies that meet an acceptable re-identification risk while retaining the information residing in the data. With the UCI Machine Learning Repository as our real-world dataset, the re-identification risk can reflect the true risk of the de-identified data under the de-identification policies. Moreover, using the proposed algorithm, one can efficiently acquire policies with higher information entropy.
Keywords: data analysis; data privacy; entropy; learning (artificial intelligence); risk analysis; UCI machine learning repository; data analysis; deidentification policies; deidentified data; information entropy; information loss; privacy leakage; reidentification risks; Computational modeling; Data analysis; Data privacy; Lattices; Privacy; Synthetic aperture sonar; Upper bound; De-identification; HIPAA; Safe Harbor; data privacy (ID#: 15-8502)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153938&isnumber=7153836
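The risk/information-loss trade-off the authors search over can be illustrated on a single lattice dimension: generalize ZIP codes, then measure the worst-case re-identification risk (1 over the smallest equivalence class) against the entropy lost. The data and the one-attribute model are invented:

```python
from collections import Counter
from math import log2

def generalize_zip(zipcode, level):
    """Suppress the last `level` digits (one dimension of the policy lattice)."""
    return zipcode[:len(zipcode) - level] + "*" * level

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts)

def risk_and_loss(zips, level):
    """Worst-case re-identification risk and entropy forfeited by generalizing."""
    groups = Counter(generalize_zip(z, level) for z in zips)
    risk = 1 / min(groups.values())
    loss = entropy(Counter(zips).values()) - entropy(groups.values())
    return risk, loss

zips = ["02139", "02138", "02139", "02142", "90210", "90212"]
for level in (0, 2, 3):
    risk, loss = risk_and_loss(zips, level)
    print(f"level={level}  risk={risk:.2f}  loss={loss:.2f} bits")
```

Moving up the lattice drives the risk down (larger equivalence classes) at the cost of entropy, and a policy search of the paper's kind picks the cheapest level that meets the acceptable risk.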
Pengyan Shen; Kai Guo; Mingzhong Xiao; Quanqing Xu, "Spy: A QoS-Aware Anonymous Multi-Cloud Storage System Supporting DSSE," in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, pp. 951-960, 4-7 May 2015. doi: 10.1109/CCGrid.2015.88
Abstract: Constructing an overlay storage system on top of multiple personal cloud storages is a desirable technique and a novel idea for cloud storage. Existing designs provide the basic functions with some customized features. Unfortunately, some important issues have been ignored, including privacy protection, QoS and ciphertext search. In this paper, we present Spy, our design for an anonymous storage overlay network on multiple personal cloud storages, supporting flexible QoS awareness and ciphertext search. We reform the original Tor protocol by extending the command set and adding a tail part to the Tor cell, which makes coordination among proxy servers possible while keeping anonymity. On this basis, we propose a flexible user-defined QoS policy and employ a Dynamic Searchable Symmetric Encryption (DSSE) scheme to support secure ciphertext search. Extensive security analysis proves the security of the privacy preservation, and experiments show how different QoS policies work according to different security requirements.
Keywords: cloud computing; cryptography; data privacy; information retrieval; quality of service; storage management; DSSE; QoS-aware anonymous multicloud storage system; Spy; Tor cell; Tor protocol; anonymous storage overlay network; cipher-text search; dynamic searchable symmetric encryption scheme; flexible QoS awareness; flexible user-defined QoS policy; multiple personal cloud storage; multiple personal cloud storages; overlay storage system; privacy protection; security requirements; Cloud computing; Encryption; Indexes; Quality of service; Servers; Cipher-text search; DSSE; PCS; Privacy Preserving; QoS (ID#: 15-8503)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152581&isnumber=7152455
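The DSSE building block Spy relies on can be reduced to its simplest form: the client derives keyword trapdoors with a keyed hash, so the server matches queries without ever seeing plaintext keywords. This sketch omits everything a real DSSE scheme must handle (dynamic updates, access-pattern leakage); the names are invented:

```python
import hmac, hashlib

class ToyDSSEIndex:
    """Toy searchable index: the server side stores only keyed hashes
    (trapdoors) of keywords mapped to document ids."""
    def __init__(self, key: bytes):
        self._key = key
        self._index = {}          # trapdoor -> set of document ids

    def _trapdoor(self, word: str) -> str:
        return hmac.new(self._key, word.encode(), hashlib.sha256).hexdigest()

    def add(self, doc_id: str, words):
        for w in words:
            self._index.setdefault(self._trapdoor(w), set()).add(doc_id)

    def search(self, word: str):
        return self._index.get(self._trapdoor(word), set())

idx = ToyDSSEIndex(b"client-secret")
idx.add("doc1", ["cloud", "storage"])
idx.add("doc2", ["cloud", "anonymity"])
print(sorted(idx.search("cloud")))   # ['doc1', 'doc2']
print(idx.search("missing"))         # set()
```

Without the client key, the stored trapdoors reveal nothing about the underlying keywords, which is the property that lets an untrusted overlay node host the index.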
Catania, V.; La Torre, G.; Monteleone, S.; Panno, D.; Patti, D., "User-Generated Services: Policy Management and Access Control in a Cross-Domain Environment," in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, pp. 668-673, 24-28 Aug. 2015. doi: 10.1109/IWCMC.2015.7289163
Abstract: The rapid evolution of mobile computing, together with the spread of social networks, is increasingly moving the role of users from simple information and service consumers to actual producers. Currently, while most of the critical aspects related to User-Generated Content (UGC) have been addressed, many issues related to service generation still must be faced and represent the next challenge. In this work, we focus on security issues raised by a particular kind of service: those generated by users. User-Generated Services (UGS) are characterized by a set of features that distinguish them from conventional services. To cope with UGS security problems, we introduce three possible policy management models, analyzing the benefits and drawbacks of each approach. Finally, we propose a cloud-based solution that enables the composition of multiple UGS and policy models, allowing users' devices to share features and services among them.
Keywords: authorisation; cloud computing; mobile computing; social networking (online); UGC; UGS; access control; cloud-based solution; conventional services; cross-domain environment; mobile computing; policy management; policy management models; policy models; social networks; user-generated contents; user-generated services; Authorization; Context; Privacy; Smart phones; Synchronization; User-Generated Services; access control; cloud; mobile computing; policy management (ID#: 15-8504)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289163&isnumber=7288920
Hongwei Li; Dongxiao Liu; Kun Jia; Xiaodong Lin, "Achieving Authorized And Ranked Multi-Keyword Search Over Encrypted Cloud Data," in Communications (ICC), 2015 IEEE International Conference on, pp. 7450-7455, 8-12 June 2015. doi: 10.1109/ICC.2015.7249517
Abstract: In cloud computing, it is important to protect user data. Thus, data owners usually encrypt their data before outsourcing them to the cloud server, for security and privacy concerns. At the same time, users often need to find data for specific keywords of interest to them. This motivates research on searchable encryption, which allows a user to search over encrypted data. Many mechanisms have been proposed, mainly focusing on the symmetric searchable encryption (SSE) technique. However, they do not consider the search authorization problem, which requires the cloud server to return search results only to authorized users. In this paper, we propose an authorized and ranked multi-keyword search scheme (ARMS) over encrypted cloud data by leveraging the ciphertext-policy attribute-based encryption (CP-ABE) and SSE techniques. Security analysis demonstrates that the proposed ARMS scheme can achieve confidentiality of documents, trapdoor unlinkability and collusion resistance. Extensive experiments show that ARMS is more efficient than existing approaches in terms of functionality and computational overhead.
Keywords: authorisation; cloud computing; cryptography; data protection; search problems; ARMS scheme; CP-ABE scheme; SSE technique; authorized and ranked multikeyword search scheme; ciphertext policy attribute-based encryption scheme; cloud computing; cloud data encryption; cloud server; collusion resistance; computational overhead; data privacy; data security; document confidentiality; search authorization problem; symmetric searchable encryption technique; trapdoor unlinkability; user data protection; Authorization; Encryption; Indexes; Servers; Sun; Multi-keyword Ranked Search; Search Authorization; Searchable Encryption (ID#: 15-8505)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249517&isnumber=7248285
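The SSE primitive that schemes such as ARMS build on can be illustrated with a short sketch. This is not the paper's construction (ARMS adds CP-ABE authorization and result ranking); it is a minimal deterministic-token inverted index, with all names (`trapdoor`, `build_index`, the sample documents) invented for illustration, using only the Python standard library.

```python
import hmac, hashlib

def trapdoor(key: bytes, keyword: str) -> bytes:
    # Deterministic keyword token: the server can match it without learning the keyword.
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    # Encrypted inverted index: token -> list of matching document IDs.
    index = {}
    for doc_id, words in docs.items():
        for w in set(words):
            index.setdefault(trapdoor(key, w), []).append(doc_id)
    return index

def search(index: dict, token: bytes) -> list:
    # The server only ever sees tokens, never plaintext keywords.
    return index.get(token, [])

key = b"shared-secret-key"
docs = {"d1": ["cloud", "security"], "d2": ["cloud", "privacy"]}
index = build_index(key, docs)
print(sorted(search(index, trapdoor(key, "cloud"))))    # ['d1', 'd2']
print(search(index, trapdoor(key, "privacy")))          # ['d2']
```

The server stores only HMAC tokens and document IDs, so it answers queries without learning the keywords; real SSE designs additionally address the leakage this toy version still has (such as repeated-query and result patterns), and ARMS layers authorization on top.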
Breaux, T.D.; Smullen, D.; Hibshi, H., "Detecting Repurposing and Over-Collection in Multi-Party Privacy Requirements Specifications," in Requirements Engineering Conference (RE), 2015 IEEE 23rd International, pp. 166-175, 24-28 Aug. 2015. doi: 10.1109/RE.2015.7320419
Abstract: Mobile and web applications increasingly leverage service-oriented architectures in which developers integrate third-party services into end user applications. This includes identity management, mapping and navigation, cloud storage, and advertising services, among others. While service reuse reduces development time, it introduces new privacy and security risks due to data repurposing and over-collection as data is shared among multiple parties who lack transparency into third-party data practices. To address this challenge, we propose new techniques based on Description Logic (DL) for modeling multiparty data flow requirements and verifying the purpose specification and collection and use limitation principles, which are prominent privacy properties found in international standards and guidelines. We evaluate our techniques in an empirical case study that examines the data practices of the Waze mobile application and three of their service providers: Facebook Login, Amazon Web Services (a cloud storage provider), and Flurry.com (a popular mobile analytics and advertising platform). The study results include detected conflicts and violations of the principles as well as two patterns for balancing privacy and data use flexibility in requirements specifications. Analysis of automated reasoning over the DL models shows that reasoning over complex compositions of multi-party systems is feasible within exponential asymptotic timeframes proportional to the policy size and the number of expressed data, and orthogonal to the number of conflicts found.
Keywords: Web services; data privacy; description logic; mobile computing; security of data; Amazon Web Services; DL models; Facebook login; Flurry.com; Waze mobile application; data use flexibility; description logic; exponential asymptotic timeframes; guidelines; international standards; multiparty data flow requirements; multiparty privacy requirements specifications; over-collection detection; repurposing detection; use limitation principles; Advertising; Data privacy; Facebook; Limiting; Privacy; Terminology; Data flow analysis; privacy principles; requirements validation (ID#: 15-8506)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7320419&isnumber=7320393
Chessa, M.; Grossklags, J.; Loiseau, P., "A Game-Theoretic Study on Non-monetary Incentives in Data Analytics Projects with Privacy Implications," in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 90-104, 13-17 July 2015. doi: 10.1109/CSF.2015.14
Abstract: The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance including medicine and public policy. The results of these analyses can be considered a public good which benefits data contributors as well as individuals who are not making their data available. At the same time, the release of personal information carries perceived and actual privacy risks to the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether they want to contribute at all. From the analyst's perspective, we investigate the degree to which the research analyst has flexibility to set requirements for data precision, so that individuals are still willing to contribute to the project and the quality of the estimation improves. We study this tradeoff scenario for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.
Keywords: data analysis; data privacy; game theory; incentive schemes; social networking (online); Nash equilibrium; data analytics; digital repositories; game theoretic study; nonmonetary incentives; personal information; privacy implications; social network sites; Data privacy; Estimation; Games; Noise; Privacy; Sociology; Statistics; Non-cooperative game; data analytics; non-monetary incentives; population estimate; privacy; public good (ID#: 15-8507)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243727&isnumber=7243713
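The tradeoff at the heart of this paper, noisier individual contributions versus overall estimate quality, can be made concrete with a toy calculation. This sketch is not the authors' model; it simply computes the variance of a sample mean when each contributor adds noise of a self-chosen variance, and shows how an analyst-imposed lower bound on precision (here an assumed variance cap of 4.0, all numbers invented) tightens the estimate.

```python
def estimator_variance(noise_vars):
    # Variance of the sample mean when contribution i carries noise variance v_i:
    # Var(mean) = (1/n^2) * sum(v_i), assuming independent noise.
    n = len(noise_vars)
    return sum(noise_vars) / n**2

chosen = [1.0, 4.0, 9.0, 16.0]          # self-chosen noise (low precision = high variance)
capped = [min(v, 4.0) for v in chosen]  # analyst imposes a minimum precision (variance cap)

print(estimator_variance(chosen))  # 1.875
print(estimator_variance(capped))  # 0.8125
```

The cap binds only on the least precise contributors, yet more than halves the estimator variance in this example, which is the qualitative effect the paper proves in its game-theoretic setting.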
Yukun Zhou; Dan Feng; Wen Xia; Min Fu; Fangting Huang; Yucheng Zhang; Chunguang Li, "SecDep: A User-Aware Efficient Fine-Grained Secure Deduplication Scheme With Multi-Level Key Management," in Mass Storage Systems and Technologies (MSST), 2015 31st Symposium on, pp. 1-14, May 30 2015-June 5 2015. doi: 10.1109/MSST.2015.7208297
Abstract: Nowadays, many customers and enterprises back up their data to cloud storage that performs deduplication to save storage space and network bandwidth. Hence, how to perform secure deduplication becomes a critical challenge for cloud storage. According to our analysis, the state-of-the-art secure deduplication methods are not suitable for cross-user fine-grained data deduplication. They either suffer brute-force attacks that can recover files falling into a known set, or incur large computation (time) overheads. Moreover, existing approaches to convergent key management incur large space overheads because of the huge number of chunks shared among users. Our observation that cross-user redundant data are mainly from duplicate files motivates us to propose an efficient secure deduplication scheme, SecDep. SecDep employs User-Aware Convergent Encryption (UACE) and Multi-Level Key management (MLK) approaches. (1) UACE combines cross-user file-level and inside-user chunk-level deduplication, and exploits different security policies among and inside users to minimize the computation overheads. Specifically, both file-level and chunk-level deduplication use variants of Convergent Encryption (CE) to resist brute-force attacks. The major difference is that the file-level CE keys are generated by using a server-aided method to ensure security of cross-user deduplication, while the chunk-level keys are generated by using a user-aided method with lower computation overheads. (2) To reduce key space overheads, MLK uses file-level keys to encrypt chunk-level keys so that the key space will not increase with the number of sharing users. Furthermore, MLK splits the file-level keys into share-level keys and distributes them to multiple key servers to ensure security and reliability of file-level keys. Our security analysis demonstrates that SecDep ensures data confidentiality and key security. Our experimental results based on several large real-world datasets show that SecDep is more time-efficient and key-space-efficient than the state-of-the-art secure deduplication approaches.
Keywords: cloud computing; cryptography; data privacy; MLK approaches; SecDep; UACE; brute-force attacks; cloud storage; computation overheads; cross-user deduplication security; cross-user file-level deduplication; cross-user fine-grained data deduplication; data confidentiality; inside-user chunk-level deduplication; key security; key space overhead reduction; multilevel key management approaches; security analysis; server-aided method; user-aided method; user-aware convergent encryption; user-aware efficient fine-grained secure deduplication scheme; Encryption; Protocols; Resists; Servers (ID#: 15-8508)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208297&isnumber=7208272
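Convergent encryption, the building block behind SecDep's UACE variants, derives the encryption key from the content itself, so identical plaintexts produce identical ciphertexts and can be deduplicated. The following toy sketch shows only that dedup effect; the XOR keystream is an illustration, not production cryptography, and SecDep's server-aided and user-aided key generation is not modeled here.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # Content-derived key: identical plaintexts yield identical keys (and ciphertexts).
    return hashlib.sha256(data).digest()

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with a SHA-256 counter keystream -- illustration only, not real crypto.
    # Applying it twice with the same key decrypts.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

store = {}  # deduplicated store: ciphertext hash -> ciphertext

def upload(data: bytes):
    ct = toy_encrypt(convergent_key(data), data)
    store[hashlib.sha256(ct).hexdigest()] = ct

upload(b"quarterly backup")
upload(b"quarterly backup")   # duplicate: maps onto the same stored chunk
upload(b"different file")
print(len(store))             # 2
```

The same determinism that enables dedup is what exposes plain convergent encryption to the brute-force attacks the abstract mentions, which is why SecDep adds server-aided key generation at the file level.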
Namazifard, A.; Tousi, A.; Amiri, B.; Aminilari, M.; Hozhabri, A.A., "Literature Review of Different Contention of E-Commerce Security and the Purview of Cyber Law Factors," in e-Commerce in Developing Countries: With focus on e-Business (ECDC), 2015 9th International Conference on, pp. 1-14, 16-16 April 2015. doi: 10.1109/ECDC.2015.7156333
Abstract: Today, with the widespread use of information technology (IT), e-commerce security and its related legislation are critical issues in information technology and court law. There is a consensus that security matters are a significant foundation of e-commerce, electronic consumers, and firms' privacy. While e-commerce networks need a policy for security and privacy, they should be built on a simple, consumer-friendly infrastructure. Hence it is necessary to review the theoretical models for revision. In this review, we examine a number of former articles that cover e-commerce security and the ambit of legislation at the individual level by assessing five criteria: whether the articles provide an effective strategy for the secure-protection challenges facing e-commerce and e-consumers, and whether existing provisions clearly remedy precedents or still need to be developed. This paper focuses on analyzing the prior discussion regarding e-commerce security and existing legislation toward cyber-crime activity in e-commerce. The article also offers recommendations for subsequent research, indicating that through the secure factors of e-commerce we are able to fill the vacuum in its legislation.
Keywords: computer crime; data privacy; electronic commerce; information systems; legislation; IT; cyber law factor; cyber-crime activity; e-commerce security; information technology; legislation; security privacy policy; Business; Electronic commerce; Information technology; Internet; Legislation; Privacy; Security; cyberspace security; e-commerce law; e-consumer protection; jurisdiction (ID#: 15-8509)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7156333&isnumber=7156307
Butin, D.; Le Metayer, D., "A Guide to End-to-End Privacy Accountability," in TEchnical and LEgal aspects of data pRivacy and SEcurity, 2015 IEEE/ACM 1st International Workshop on, pp. 20-25, 18-18 May 2015. doi: 10.1109/TELERISE.2015.12
Abstract: Accountability is considered a tenet of privacy management, yet implementing it effectively is no easy task. It requires a systematic approach with an overarching impact on the design and operation of IT systems. This article, which results from a multidisciplinary project involving lawyers, industry players and computer scientists, presents guidelines for the implementation of consistent sets of accountability measures in organisations. It is based on a systematic analysis of the Draft General Data Protection Regulation. We follow a systematic approach covering the whole life cycle of personal data and considering the three levels of privacy proposed by Bennett, namely accountability of policy, accountability of procedures and accountability of practice.
Keywords: data protection; IT systems; draft general data protection regulation; end-to-end privacy accountability; personal data life cycle; privacy management; systematic approach; Art; Data handling; Data protection; Law; Privacy; Accountability; Methodology; Privacy requirements (ID#: 15-8510)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182465&isnumber=7182453
Wagner, J.; Kuznetsov, V.; Candea, G.; Kinder, J., "High System-Code Security with Low Overhead," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 866-879, 17-21 May 2015. doi: 10.1109/SP.2015.58
Abstract: Security vulnerabilities plague modern systems because writing secure systems code is hard. Promising approaches can retrofit security automatically via runtime checks that implement the desired security policy; these checks guard critical operations, like memory accesses. Alas, the induced slowdown usually exceeds by a wide margin what system users are willing to tolerate in production, so these tools are hardly ever used. As a result, the insecurity of real-world systems persists. We present an approach in which developers/operators can specify what level of overhead they find acceptable for a given workload (e.g., 5%); our proposed tool ASAP then automatically instruments the program to maximize its security while staying within the specified "overhead budget." Two insights make this approach effective: most overhead in existing tools is due to only a few "hot" checks, whereas the checks most useful to security are typically "cold" and cheap. We evaluate ASAP on programs from the Phoronix and SPEC benchmark suites. It can precisely select the best points in the security-performance spectrum. Moreover, we analyzed existing bugs and security vulnerabilities in RIPE, OpenSSL, and the Python interpreter, and found that the protection level offered by the ASAP approach is sufficient to protect against all of them.
Keywords: security of data; ASAP tool; OpenSSL; Phoronix benchmark suites; Python interpreter; RIPE; SPEC benchmark suites; code writing; high system-code security; runtime checks; security policy; security vulnerabilities; security-performance spectrum; Computer bugs; Instruments; Production; Safety; Security; Software; Memory Safety; Security; Software Hardening; Software Instrumentation (ID#: 15-8511)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163065&isnumber=7163005
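The "overhead budget" idea can be sketched as a greedy selection: keep the cheap "cold" checks and drop "hot" ones once the budget is spent. This is a simplification of ASAP, which works on profiled sanity checks inside the compiler; the check names and cost figures below are invented for illustration.

```python
def select_checks(checks, budget):
    """Greedy sketch of an overhead budget: keep the cheapest ('cold') checks
    first, dropping 'hot' ones once the budget (in overhead %) is exhausted."""
    kept, spent = [], 0.0
    for name, cost in sorted(checks, key=lambda c: c[1]):
        if spent + cost <= budget:
            kept.append(name)
            spent += cost
    return kept, spent

# Hypothetical runtime checks with profiled overhead percentages.
checks = [("bounds_hot_loop", 4.2), ("null_check_init", 0.1),
          ("bounds_parser", 0.3), ("overflow_io", 0.6)]
kept, spent = select_checks(checks, budget=1.0)
print(kept)   # ['null_check_init', 'bounds_parser', 'overflow_io']
```

With a 1% budget, the one hot check accounting for most of the slowdown is elided while the three cheap checks, which the paper argues are typically the security-relevant ones, all survive.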
Hsiao-Ying Huang; Bashir, M., "Is Privacy a Human Right? An Empirical Examination in a Global Context," in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, pp. 77-84, 21-23 July 2015. doi: 10.1109/PST.2015.7232957
Abstract: Privacy has become an emergent concern in today's digital society. Although scholars have defined privacy from different perspectives, it is still a complex and ambiguous concept. The absence of a concrete concept of privacy impedes the development of privacy legislation and policies in a global context. Therefore, a cross-cultural/national understanding of privacy is urgently needed for establishing a global privacy protocol. This empirical study seeks to better understand privacy by exploring public beliefs of privacy in a global context and further investigating socio-cultural influences on these beliefs. First, we explored general global public beliefs of privacy and then analyzed associations among privacy beliefs and socio-cultural factors. We also investigated the important issue of whether the general global public sees privacy as a “human right.” Results show that most participants agreed with concepts of privacy as a right. However, people had more diverse views on privacy as a right not to be annoyed and social norm privacy concepts. Importantly, nearly eighty percent of people believed in privacy as a human right and nearly seventy percent disagreed with privacy as a concern only for those having something to hide. In the era of globalization, our study provides a bottom-up understanding of privacy beliefs that we believe is essential for the development of global privacy regulation and policies.
Keywords: cultural aspects; data privacy; cross-cultural understanding; digital society; general global public beliefs; global privacy protocol; global privacy regulation; human right; national understanding; privacy beliefs; privacy legislation; social norm privacy; socio-cultural factors; socio-cultural influences; Electromagnetic interference; IEC; IEC Standards; Privacy; Security; global privacy policy and regulation; privacy belief; public opinion (ID#: 15-8512)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232957&isnumber=7232940
Ouaddah, A.; Bouij-Pasquier, I.; Abou Elkalam, A.; Ait Ouahman, A., "Security Analysis and Proposal of New Access Control Model in the Internet of Thing," in Electrical and Information Technologies (ICEIT), 2015 International Conference on, pp. 30-35, 25-27 March 2015. doi: 10.1109/EITech.2015.7162936
Abstract: The Internet of Things (IoT) represents a concept where the barriers between the real world and the cyber-world are progressively annihilated through the inclusion of everyday physical objects combined with an ability to provide smart services. These services are creating more opportunities but at the same time bringing new challenges, in particular security and privacy concerns. To address this issue, an access control management system must be implemented. This work introduces a new access control framework for the IoT environment, specifically the Web of Things (WoT) approach, called "SmartOrBAC", based on the OrBAC model. SmartOrBAC puts the context-aware concern in first position and deals with the complexity of constrained-resource environments. To achieve these goals, a list of detailed IoT security requirements and needs is drawn up in order to establish the guidelines of SmartOrBAC. Then, the OrBAC model is analyzed and extended, regarding these requirements, to specify local as well as collaborative access control rules; these security policies are in turn enforced by applying web services mechanisms, mainly the RESTful approach. Finally, the most important works that emphasize access control in the IoT environment are discussed.
Keywords: Internet of Things; Web services; authorisation; ubiquitous computing; Internet of Thing; RESTFUL approach; SmartOrBAC; Web of Things; Web services; collaboration access control rules; context aware concern; cyber-world; new access control model; security analysis; Access control; Biomedical monitoring; Monitoring; Organizations; Scalability; Usability; OrBAC; access control model; internet of things; privacy; security policy; web of things (ID#: 15-8513)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162936&isnumber=7162923
Jingquan Li, "Security Implications of Direct-to-Consumer Genetic Services," in Big Data Computing Service and Applications (BigDataService), 2015 IEEE First International Conference on, pp. 147-153, March 30 2015-April 2 2015. doi: 10.1109/BigDataService.2015.26
Abstract: Direct-to-consumer (DTC) genetic services refer to genetic tests sold directly to consumers via the Internet, television, and other marketing venues without involving healthcare providers such as physicians, genetic counselors, and other healthcare professionals. Companies such as 23andMe and Navigenics offer genetic tests using genome-wide technology direct to consumers over the Internet. Genetic data collected by DTC companies provide an opportunity for future personalized medicine programs that will significantly improve patient outcomes and preventive care. While this may be a promising development, DTC genetic testing raises important security and privacy concerns. This paper aims to identify the most important security threats to consumers of DTC genetic testing services, and explain how to use security technologies and policies to mitigate the threats. In this paper, we first analyze a leading DTC company that demonstrates how security concerns might be intrinsic to contemporary DTC genetic testing services. We then present a threat model and identify the most important security threats to consumers of DTC genetic testing services. Furthermore, we outline security and privacy implications of using DTC genetic services and how DTC companies should elaborate upon them to protect genetic privacy.
Keywords: Internet; data privacy; genetics; health care; security of data; television; DTC genetic testing services; Internet; direct-to-consumer genetic services; health care providers; marketing venues; privacy concerns; security implications; television; Bioinformatics; Companies; Genomics; Privacy; Security; Testing; cryptography; direct-to-consumer genetic testing; genetic data; privacy; secondary use; security; security technology (ID#: 15-8514)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184875&isnumber=7184847
Yong Wang; Nepali, R.K., "Privacy Threat Modeling Framework for Online Social Networks," in Collaboration Technologies and Systems (CTS), 2015 International Conference on, pp. 358-363, 1-5 June 2015. doi: 10.1109/CTS.2015.7210449
Abstract: Online social networks (OSNs) provide services for people to connect and share information. Social networking sites contain huge amounts of personal information such as user profiles, user relations, and user activities. Most of this information is personal and sensitive in nature, and hence its disclosure may cause harassment, financial loss, and even identity theft. Thus, protecting user privacy in online social networks is essential. Many threats and attacks have been found in social networks. However, there is a lack of a threat model for studying privacy issues in online social networks. This paper presents a privacy threat model for online social networks. The threat model includes four components: online social networking sites, third-party service providers, genuine social network users, and malicious users. Threats and vulnerabilities are analyzed from six security aspects, i.e., hardware, operating systems, OSN privacy policies, user privacy settings, user relations, and user data. The paper further summarizes and analyzes the existing threats and attacks using the proposed model.
Keywords: data protection; social networking (online); OSN privacy policies; financial loss; genuine social network users; hardware security aspects; identity theft; information sharing; malicious users; online social networks; operating systems; personal information; privacy threat modeling framework; sensitive information; social network threats; social network vulnerabilities; social networking sites; third party service providers; user activities; user data; user privacy protection; user privacy settings; user profiles; user relations; Data privacy; Facebook; Operating systems; Organizations; Privacy; Security; countermeasures; online social networks; privacy threat modeling; privacy threats and attacks (ID#: 15-8515)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210449&isnumber=7210375
Tripp, O.; Pistoia, M.; Centonze, P., "Application- and User-Sensitive Privacy Enforcement in Mobile Systems," in Mobile Software Engineering and Systems (MOBILESoft), 2015 2nd ACM International Conference on, pp. 162-163, 16-17 May 2015. doi: 10.1109/MobileSoft.2015.45
Abstract: The mobile era is marked by exciting opportunities for utilization of contextual information in computing. Applications from different categories (including commercial and enterprise email, instant messaging, social, banking, insurance and retail) access, process and transmit over the network numerous pieces of sensitive information, such as the user's geographical location, device ID, contacts, calendar events, passwords, and health records, as well as credit-card, social-security, and bank-account numbers. Understanding and managing how an application handles private data is a significant challenge. There are not only multiple sources of such data (including primarily social accounts, user inputs and platform libraries), but also different release targets (such as advertising companies and application servers) and different forms of release (for example, passwords transmitted in the clear, hashed or encrypted). To the end users, and particularly those who are not tech savvy, it is nontrivial to manage these complexities. In response, we have designed Labyrinth, a system for privacy enforcement. The unique features of Labyrinth are (i) an intuitive visual interface for configuration of the privacy policy, which consists of enriched app screen captures annotated with privacy-related information, combined with (ii) a lightweight mechanism to detect and suppress privacy threats that is completely decoupled from the host platform. Labyrinth supports both Android and iOS. In this paper, we describe the Labyrinth architecture and illustrate its flow steps.
Keywords: Android (operating system); data privacy; iOS (operating system); mobile computing; smart phones; user interfaces; Android; Labyrinth; Labyrinth architecture; iOS; mobile systems; privacy enforcement; private data; sensitive information; smartphones; Instruments; Mobile applications; Mobile communication; Privacy; Security; Visualization; Android; Dynamic Analysis; Mobile; Privacy; Security; Usable Security; iOS (ID#: 15-8516)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7283058&isnumber=7283013
Dev Raghuwanshi, K.; Tamrakar, S., "An Effective Access From Cloud Data Using Attribute Based Encryption," in Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, pp. 212-218, 25-27 Feb. 2015. doi: 10.1109/ABLAZE.2015.7154994
Abstract: Cloud computing is an important way of communicating and sharing data over the Internet. Cloud computing enables transmission of data over the Internet and resource utilization at data centers. But during data sharing and resource utilization, security plays a vital role, since the chances of attack increase. The data stored at data centers needs to be retrieved without any data loss or attack. A multi-key-based encrypted data retrieval scheme was previously proposed, but that technique requires more computational time and hence increases the overall cost. In this paper a new and efficient scheme is implemented which uses the concept of ciphertext-policy attribute-based encryption with elliptic-curve-based key generation. The implementation is based on generating a new attribute for each piece of data to be sent, encrypting the data using the generated attribute, forming a tuple, and storing it at the storage site. The receiver then authenticates himself, enters the attribute, and hence decrypts the data. The proposed methodology provides efficient retrieval of data over the cloud as well as reduced computational time and cost.
Keywords: cloud computing; computer centres; cryptography; information retrieval; Internet; cipher text policy attribute based encryption; cloud computing; cloud data; data centers; data loss; data retrieval; data sharing; data transmission; elliptic curve based key generation; multikey based data; resource utilization security; storage site; Cloud computing; Data privacy; Encryption; Public key; Receivers; Attribute based encryption; Cloud computing; DOS; Virtualization; multi-keyword retrieval (ID#: 15-8517)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154994&isnumber=7154914
Jun Pang; Yang Zhang, "Cryptographic Protocols for Enforcing Relationship-Based Access Control Policies," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 2, pp. 484-493, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.9
Abstract: Relationship-based access control schemes have been studied to protect users' privacy in online social networks. In this paper, we propose cryptographic protocols for decentralized social networks to enforce relationship-based access control polices, i.e., K-common friends and k-depth. Our protocols are mainly built on pairing-based cryptosystems. We prove their security under the honest but curious adversary model, and we analyze their computation and communication complexities. Furthermore, we evaluate their efficiency through simulations on a real social network dataset.
Keywords: authorisation; cryptographic protocols; data privacy; social networking (online); communication complexities; computation complexities; cryptographic protocols; curious adversary model; decentralized social networks; k-common friends; k-depth; online social networks; pairing-based cryptosystems; relationship-based access control policies; user privacy; Access control; Computational modeling; Cryptography; Encoding; Protocols; Social network services (ID#: 15-8518)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273657&isnumber=7273573
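The two policies these protocols enforce, k-common friends and k-depth, are easy to state over a plaintext friendship graph. The sketch below (invented graph and function names) shows only the policy semantics; the point of the paper's pairing-based protocols is to evaluate these predicates without revealing the graph, which this toy version does not attempt.

```python
from collections import deque

def within_k_depth(graph, owner, requester, k):
    # BFS over the friendship graph: grant access if the requester
    # is within k hops of the resource owner (the "k-depth" policy).
    frontier, seen = deque([(owner, 0)]), {owner}
    while frontier:
        node, depth = frontier.popleft()
        if node == requester:
            return True
        if depth < k:
            for friend in graph.get(node, []):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, depth + 1))
    return False

def k_common_friends(graph, owner, requester, k):
    # "k-common friends" policy: require at least k mutual friends.
    return len(set(graph.get(owner, [])) & set(graph.get(requester, []))) >= k

graph = {"alice": ["bob", "carol"], "bob": ["alice", "dave"],
         "carol": ["alice", "dave"], "dave": ["bob", "carol"]}
print(within_k_depth(graph, "alice", "dave", 2))    # True: dave is two hops away
print(k_common_friends(graph, "alice", "dave", 2))  # True: bob and carol are mutual
```

In a decentralized OSN no single party holds `graph`, so the cryptographic protocols compute the same yes/no answers over encrypted relationship data.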
Power Grid Vulnerability Analysis 2015
Cyber-Physical Systems such as the power grid are complex networks linked with cyber capabilities. The complexity and potential consequences of cyber-attacks on the grid make them an important area for scientific research. Work cited here was presented in 2015.
Xisong Dong; Nyberg, T.R.; Hamalainen, P.; Gang Xiong; Yuan Liu; Jiachen Hou, "Vulnerability Analysis of Smart Grid Based on Complex Network Theory," in Information Science and Technology (ICIST), 2015 5th International Conference on, pp. 525-529, 24-26 April 2015. doi: 10.1109/ICIST.2015.7289028
Abstract: Smart grid has been widely acknowledged around the world. The rapid development of complex network theory provides a new perspective on smart grid research. Based on the latest progress in the field of complex network theory, smart grid can be treated as a small-world network. This paper examines the tolerance of smart grid against attacks to analyze its vulnerability, and proposes a technique to study the relationship between electric betweenness and the reliability of smart grid. Based on this research, the specific concept of vulnerability investigation for smart grid is clarified. Furthermore, the proposed method is evaluated on an IEEE test system, in contrast with results from an actual power grid, to demonstrate its effectiveness.
Keywords: IEEE standards; complex networks; power system protection; power system reliability; smart power grids; IEEE test system; complex network theory; smart grid electric betweenness; smart grid reliability; smart grid tolerance; smart grid vulnerability analysis; Context; Smart grids (ID#: 15-8475)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289028&isnumber=7288906
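Attack-tolerance studies of this kind typically remove grid components and track how the largest connected component shrinks. A minimal stdlib-only sketch of that measurement follows, using node degree as a crude stand-in for the paper's electric betweenness metric on an invented 7-bus toy graph.

```python
from collections import deque

def largest_component(nodes, edges):
    # Size of the largest connected component, via BFS over an adjacency map.
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), 0
    for n in nodes:
        if n in seen:
            continue
        comp, q = 0, deque([n])
        seen.add(n)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best

nodes = list(range(7))
edges = [(0, 1), (0, 2), (0, 3), (3, 4), (4, 5), (5, 6), (3, 6)]
print(largest_component(nodes, edges))  # 7: the intact toy grid is connected

# Targeted attack: remove a highest-degree bus (node 3) and its lines.
target = 3
survivors = [n for n in nodes if n != target]
kept = [(a, b) for a, b in edges if target not in (a, b)]
print(largest_component(survivors, kept))  # 3: the grid fragments into islands
```

Removing one well-connected bus splits this toy grid into two islands of three buses each, the fragmentation effect that betweenness-based vulnerability analyses quantify on real topologies.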
Jun Yan; Yufei Tang; Haibo He; Yan Sun, "Cascading Failure Analysis With DC Power Flow Model and Transient Stability Analysis," in Power Systems, IEEE Transactions on, vol. 30, no. 1, pp. 285-297, Jan. 2015. doi: 10.1109/TPWRS.2014.2322082
Abstract: When the modern electrical infrastructure is undergoing a migration to the Smart Grid, vulnerability and security concerns have also been raised regarding the cascading failure threats in this interconnected transmission system with complex communication and control challenge. The DC power flow-based model has been a popular model to study the cascading failure problem due to its efficiency, simplicity and scalability in simulations of such failures. However, due to the complex nature of the power system and cascading failures, the underlying assumptions in DC power flow-based cascading failure simulators (CFS) may fail to hold during the development of cascading failures. This paper compares the validity of a typical DC power flow-based CFS in cascading failure analysis with a new numerical metric defined as the critical moment (CM). The adopted CFS is first implemented to simulate system behavior after initial contingencies and to evaluate the utility of DC-CFS in cascading failure analysis. Then the DC-CFS is compared against another classic, more precise power system stability methodology, i.e., the transient stability analysis (TSA). The CM is introduced with a case study to assess the utilization of these two models for cascading failure analysis. Comparative simulations on the IEEE 39-bus and 68-bus benchmark reveal important consistency and discrepancy between these two approaches. Some suggestions are provided for using these two models in the power grid cascading failure analysis.
Keywords: load flow; power system reliability; power system simulation; power system transient stability; DC power flow model; cascading failure analysis; critical moment; Interconnected transmission system; power system stability; smart grid; transient stability analysis; Analytical models; Failure analysis; Mathematical model; Power system faults; Power system protection; Power system stability; Stability analysis; Cascading failure; DC power flow; contingency analysis; transient stability; vulnerability assessment (ID#: 15-8476)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819069&isnumber=6991618
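The DC power flow model that underpins the cascading failure simulators discussed above can be illustrated with a minimal sketch: solve the reduced susceptance system B'θ = P for the non-slack bus angles, then recover line flows. The 3-bus system, line susceptances, and injections below are hypothetical illustration values, not taken from the paper.

```python
# Minimal DC power flow sketch on a hypothetical 3-bus system.
# Bus 0 is the slack bus (theta_0 = 0). Lines: (0,1), (1,2), (0,2),
# each with susceptance b = 10 p.u.; net injections: bus 1 = +0.5,
# bus 2 = -0.5 p.u. All values are made up for illustration.

def dc_power_flow():
    b = {(0, 1): 10.0, (1, 2): 10.0, (0, 2): 10.0}   # line susceptances
    P = {1: 0.5, 2: -0.5}                            # injections at non-slack buses
    # Reduced susceptance matrix B' over buses 1 and 2:
    # diagonal = sum of susceptances incident to the bus,
    # off-diagonal = -b_ij for a line between buses i and j.
    B11, B22, B12 = 20.0, 20.0, -10.0
    # Solve the 2x2 system B' * theta = P by Cramer's rule.
    det = B11 * B22 - B12 * B12
    th1 = (P[1] * B22 - B12 * P[2]) / det
    th2 = (B11 * P[2] - B12 * P[1]) / det
    theta = {0: 0.0, 1: th1, 2: th2}
    # Line flow f_ij = b_ij * (theta_i - theta_j)
    flows = {ij: b_ij * (theta[ij[0]] - theta[ij[1]])
             for ij, b_ij in b.items()}
    return theta, flows
```

On this toy system the solution is θ₁ = 1/60 rad, θ₂ = -1/60 rad, with a flow of 1/3 p.u. on line (1,2); a cascading failure simulator of the kind the paper evaluates would now trip any line whose flow exceeds its capacity and re-solve.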
Deka, D.; Vishwanath, S., "Structural Vulnerability of Power Grids to Disasters: Bounds and Reinforcement Measures," in Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, pp. 1-5, 18-20 Feb. 2015. doi: 10.1109/ISGT.2015.7131820
Abstract: Failures of power grid components during natural disasters like hurricanes can fragment the network and lead to the creation of islands and blackouts. The propagation of failures in actual power grids following a catastrophic event differs significantly from, and is thus harder to analyze than, that on random networks. This paper studies the structural vulnerability of real power grids to natural disasters and presents improved bounds to quantify the size of the expected damage. The performance of the derived bounds is demonstrated through simulations on an IEEE test case and a real grid network. Further, a framework based on the eigen-decomposition of the power grid network is used to study adversarial attacks aimed at minimizing network resilience. The insights gained are used to design reinforcement measures that improve network resilience against such adversaries.
Keywords: disasters; eigenvalues and eigenfunctions; power grids; IEEE test case; eigen-decomposition; natural disasters; network resilience; power grid network; real grid network; Eigenvalues and eigenfunctions; Hurricanes; Power grids; Power transmission lines; Resilience; Transmission line matrix methods; Upper bound (ID#: 15-8477)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131820&isnumber=7131775
Liu, R.; Srivastava, A., "Integrated Simulation to Analyze the Impact of Cyber-Attacks on the Power Grid," in Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, pp. 1-6, 13-13 April 2015. doi: 10.1109/MSCPES.2015.7115395
Abstract: With the development of smart grid technology, Information and Communication Technology (ICT) plays a significant role in the smart grid. ICT enables the realization of the smart grid, but it also introduces cyber vulnerabilities, so it is important to analyze the impact of possible cyber-attacks on the power grid. In this paper, a real-time, cyber-physical co-simulation testbed with hardware-in-the-loop capability is discussed. A Real-Time Digital Simulator (RTDS), synchrophasor devices, DeterLab, and a wide-area monitoring application with closed-loop control are utilized in the developed testbed. Two different real-life cyber-attacks, a TCP SYN flood attack and a man-in-the-middle attack, are simulated on an IEEE standard power system test case to analyze the impact of these cyber-attacks on the power grid.
Keywords: closed loop systems; digital simulation; phasor measurement; power system simulation; smart power grids; DeterLab; ICT; IEEE standard power system test case; RTDS; TCP SYN flood attack; closed loop control; cyber vulnerability; cyber-attack impact analysis; hardware-in-the-loop capability; information and communication technology; integrated simulation; man-in-the-middle attack; real-time cyber-physical cosimulation testbed; real-time digital simulator; smart power grid technology; synchrophasor devices; wide-area monitoring application; Capacitors; Loading; Phasor measurement units; Power grids; Power system stability; Reactive power; Real-time systems; Cyber Security; Cyber-Physical; DeterLab; RTDS; Real-Time Co-Simulation; Synchrophasor Devices (ID#: 15-8478)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115395&isnumber=7115373
Xingsi Zhong; Ahmadi, A.; Brooks, R.; Venayagamoorthy, G.K.; Lu Yu; Yu Fu, "Side Channel Analysis of Multiple PMU Data in Electric Power Systems," in Power Systems Conference (PSC), 2015 Clemson University, pp. 1-6, 10-13 March 2015. doi: 10.1109/PSC.2015.7101704
Abstract: The deployment of Phasor Measurement Units (PMUs) in an electric power grid will enhance real-time monitoring and analysis of grid operations. A PMU collects bus voltage phasors, branch current phasors, and bus frequency measurements and uses a communication network to transmit the measurements to the respective substation(s)/control center(s). PMU information is sensitive, since missing or incorrect PMU data could lead to grid failure and/or damage, so it is important to use encrypted communication channels to avoid cyber attacks. In this study, a side-channel attack that uses inter-packet delays to isolate the stream of packets of one PMU from an encrypted tunnel is shown. Encryption in power system VPNs and vulnerabilities due to side channel analysis are also discussed.
Keywords: phasor measurement; power grids; security of data; branch current phasors; bus frequency measurements; bus voltage phasors; electric power grid; electric power systems; encrypted tunnel; inter-packet delays; multiple PMU data; phasor measurement units; real-time monitoring; side channel analysis; Cryptography; Delays; Hidden Markov models; Logic gates; Phasor measurement units; Cybersecurity; grid operations; phasor measurement units; power system; side channel analysis (ID#: 15-8479)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7101704&isnumber=7101673
Hahn, E.M.; Hermanns, H.; Wimmer, R.; Becker, B., "Transient Reward Approximation for Continuous-Time Markov Chains," in Reliability, IEEE Transactions on, vol. 64, no. 4, pp. 1254-1275, Dec. 2015. doi: 10.1109/TR.2015.2449292
Abstract: We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., of computer network performability analysis, of power grids, of computer virus vulnerability, and in the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well if facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies.
Keywords: Markov processes; approximation theory; binary decision diagrams; computational complexity; CTMC; CTMDP; abstraction techniques; accumulated rewards; bound computation; computational complexity; continuous-time Markov chains; continuous-time Markov decision processes; expected final rewards; multiterminal decision diagrams; partly-explicit analysis approach; partly-symbolic analysis approach; reliability analysis; symblicit analysis; transient reward approximation; Analytical models; Boolean functions; Computational modeling; Concrete; Data structures; Markov processes; Continuous-time Markov chains; abstraction; continuous-time Markov decision processes; ordered binary decision diagrams; symbolic methods (ID#: 15-8480)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163373&isnumber=7337501
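The transient analysis that such CTMC reward computations build on can be sketched with the standard uniformization method: with uniformization rate Λ ≥ max_i(-Q_ii) and P = I + Q/Λ, the transient distribution is π(t) = Σ_k Poisson(k; Λt) · π(0)Pᵏ. This is a generic illustration of that textbook technique, not the authors' symblicit algorithm, and the 2-state generator in the usage note is hypothetical.

```python
import math

def ctmc_transient(Q, pi0, t, eps=1e-12):
    """Transient distribution of a small CTMC via uniformization:
    pi(t) = sum_k Poisson(k; L*t) * pi0 * P^k, with P = I + Q/L."""
    n = len(Q)
    L = max(-Q[i][i] for i in range(n)) or 1.0     # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / L for j in range(n)]
         for i in range(n)]
    pi = list(pi0)                 # holds pi0 * P^k, updated per iteration
    out = [0.0] * n
    k, poisson, acc = 0, math.exp(-L * t), 0.0
    while acc < 1.0 - eps:         # stop once Poisson weights nearly sum to 1
        for j in range(n):
            out[j] += poisson * pi[j]
        acc += poisson
        k += 1
        poisson *= L * t / k       # Poisson(k; L*t) from Poisson(k-1; L*t)
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return out
```

For a hypothetical 2-state chain with generator Q = [[-1, 1], [2, -2]] started in state 0, `ctmc_transient(Q, [1.0, 0.0], 1.0)` matches the closed form π₀(t) = 2/3 + (1/3)e^(-3t).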
Mohagheghi, S.; Javanbakht, P., "Power Grid and Natural Disasters: A Framework for Vulnerability Assessment," in Green Technologies Conference (GreenTech), 2015 Seventh Annual IEEE, pp. 199-205, 15-17 April 2015. doi: 10.1109/GREENTECH.2015.27
Abstract: As unexpected, large-scale, and uncontrollable events, natural disasters can cause devastating damage to a society's infrastructure. A possible interruption in electric service is not simply a matter of inconvenience, since in our modern societies it could disrupt many services our everyday lives depend on. Any disturbance to critical municipal infrastructure such as water sanitation and sewage plants, hospitals and emergency services, telecommunication networks, and police stations will add to the devastation and distress during the event, and may severely hinder post-disaster recovery efforts. The first step in reinforcing the power grid against such hazards is to assess the vulnerability of different system components against disaster event scenarios. By identifying the weak links in the system, remedial actions can be undertaken to strengthen the energy delivery network. The purpose of this paper is to provide a mathematical framework for analyzing the interaction between natural hazards and the power grid. The outcome of this study can be used in any mitigation technique during the design or operation stages.
Keywords: critical infrastructures; disasters; emergency services; power grids; safety; critical municipal infrastructure; disaster events; emergency services; energy delivery network; hospitals; mitigation technique; natural disasters; natural hazards; police stations; post-disaster recovery; power grid; sewage plants; society infrastructure; telecommunication networks; vulnerability assessment; water sanitation; Fires; Hurricanes; Poles and towers; Power grids; Substations; Wind speed; natural disasters; power grid resilience; power system security; vulnerability assessment (ID#: 15-8481)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150250&isnumber=7150207
Davis, K.R.; Davis, C.M.; Zonouz, S.A.; Bobba, R.B.; Berthier, R.; Garcia, L.; Sauer, P.W., "A Cyber-Physical Modeling and Assessment Framework for Power Grid Infrastructures," in Smart Grid, IEEE Transactions on, vol. 6, no. 5, pp. 2464-2475, Sept. 2015. doi: 10.1109/TSG.2015.2424155
Abstract: The integration of cyber communications and control systems into the power grid infrastructure is widespread and has a profound impact on the operation, reliability, and efficiency of the grid. Cyber technologies allow for efficient management of the power system, but they may contain vulnerabilities that need to be managed. One important possible consequence is the introduction of cyber-induced or cyber-enabled disruptions of physical components. In this paper, we propose an online framework for assessing the operational reliability impacts due to threats to the cyber infrastructure. This framework is an important step toward addressing the critical challenge of understanding and analyzing complex cyber-physical systems at scale.
Keywords: power engineering computing; power grids; security of data; assessment framework; attack trees; control system; cyber communications; cyber security; cyber-physical modeling; operational reliability impacts; power grid infrastructures; Analytical models; Object oriented modeling; Power system reliability; Reliability; Security; Topology; Attack trees; contingency analysis; cyber security; cyber-physical systems; cyber-physical topology; cyberphysical systems; operational reliability (ID#: 15-8482)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7103368&isnumber=7210244
Rawat, D.B.; Bajracharya, C., "Detection of False Data Injection Attacks in Smart Grid Communication Systems," in Signal Processing Letters, IEEE, vol. 22, no. 10, pp. 1652-1656, Oct. 2015. doi: 10.1109/LSP.2015.2421935
Abstract: The transformation of traditional energy networks into smart grids can assist in revolutionizing the energy industry in terms of reliability, performance, and manageability. However, the increased connectivity of power grid assets for bidirectional communications presents severe security vulnerabilities. In this letter, we investigate Chi-square detector and cosine similarity matching approaches for attack detection in smart grids, where Kalman filter estimation is used to measure any deviation from actual measurements. The cosine similarity matching approach is found to be robust for detecting false data injection attacks as well as other attacks in smart grids. Once an attack is detected, the system can take preventive action and alert the operator to limit the risk. Numerical results obtained from simulations corroborate our theoretical analysis.
Keywords: Kalman filters; power system reliability; smart power grids; Chi-square detector; Kalman filter estimation; bidirectional communications; cosine similarity matching approaches; energy industry; energy networks; false data injection attack detection; manageability; performance; power grid assets; preventive action; reliability; smart grid communication systems; Detectors; Estimation; Kalman filters; Security; Smart grids; Transmission line measurements; Attack detection; cyber-security; machine learning; power systems security; smart grid security (ID#: 15-8483)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7084114&isnumber=7059273
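The two detection tests named in the abstract above can be sketched generically: a Chi-square test on the normalized residual between measured and predicted values, and a cosine similarity test on the direction of the measurement vector. This is not the authors' implementation; the Kalman prediction step is abstracted into a given `z_pred` vector, and the measurement vectors, noise level, and thresholds below are hypothetical.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two measurement vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def detect(z_meas, z_pred, sigma, chi2_threshold, cos_threshold):
    """Flag a possible injection attack via two independent tests.
    z_pred stands in for the Kalman-filter-predicted measurements."""
    # Chi-square test: sum of squared sigma-normalized residuals.
    chi2 = sum(((m - p) / sigma) ** 2 for m, p in zip(z_meas, z_pred))
    chi2_alarm = chi2 > chi2_threshold
    # Cosine similarity test: injected data bends the vector's direction.
    cos_alarm = cosine_similarity(z_meas, z_pred) < cos_threshold
    return chi2_alarm, cos_alarm
```

With predictions [1.0, 1.0, 1.0], noise σ = 0.1, a Chi-square threshold of 7.81 (95% for 3 degrees of freedom), and a cosine threshold of 0.99, a clean reading such as [1.02, 0.98, 1.01] raises neither alarm, while an injected [1.5, 0.5, 1.0] raises both.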
Wang, Y.; Gamage, T.T.; Hauser, C.H., "Security Implications of Transport Layer Protocols in Power Grid Synchrophasor Data Communication," in Smart Grid, IEEE Transactions on, vol. PP, no. 99, pp. 1-10, 03 December 2015. doi: 10.1109/TSG.2015.2499766
Abstract: Wide-area monitoring and control (WAMC) systems based on synchrophasor data streams are becoming more and more significant to the operation of the smart power grid. Reliable and secure communication and a high quality of service (very low latency, high availability, etc.) are crucial to the success of WAMC systems. However, the IEEE standard for synchrophasor data communication (IEEE Standard C37.118.2-2011) does not place any restrictions on the choice of transport layer protocols. In light of this, we examine the communication between phasor measurement units (PMUs) and phasor data concentrators to analyze potential security vulnerabilities present at the transport layer, and investigate the advantages and disadvantages of the TCP and UDP protocols, with an emphasis on security issues. Attacks related to these security vulnerabilities are demonstrated in a lab environment, and the underlying mechanisms are analyzed to determine what capabilities attackers need to succeed with them.
Keywords: Data transfer; IP networks; Phasor measurement units; Protocols; Reliability; Security; Transport layer protocol; security; wide-area monitoring and control (ID#: 15-8484)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346493&isnumber=5446437
Nourian, A.; Madnick, S., "A Systems Theoretic Approach to the Security Threats in Cyber Physical Systems Applied to Stuxnet," in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 17 December 2015. doi: 10.1109/TDSC.2015.2509994
Abstract: Cyber Physical Systems (CPSs) are increasingly being adopted in a wide range of industries such as smart power grids. Even though the rapid proliferation of CPSs brings huge benefits to our society, it also provides potential attackers with many new opportunities to affect the physical world such as disrupting the services controlled by CPSs. Stuxnet is an example of such an attack that was designed to interrupt the Iranian nuclear program. In this paper, we show how the vulnerabilities exploited by Stuxnet could have been addressed at the design level. We utilize a system theoretic approach, based on prior research on system safety, that takes both physical and cyber components into account to analyze the threats exploited by Stuxnet. We conclude that such an approach is capable of identifying cyber threats towards CPSs at the design level and provide practical recommendations that CPS designers can utilize to design a more secure CPS.
Keywords: Hazards; Process control; Reliability; Security; Sensors; Software; CPS; CPS security design; STAMP; Security and safety analysis; Stuxnet analysis (ID#: 15-8485)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7360168&isnumber=4358699
Darwish, I.; Igbe, O.; Saadawi, T., "Experimental and Theoretical Modeling of DNP3 Attacks in Smart Grids," in Sarnoff Symposium, 2015 36th IEEE, pp. 155-160, 20-22 Sept. 2015. doi: 10.1109/SARNOF.2015.7324661
Abstract: Security challenges facing smart grids in the energy sector have emerged in recent years. Threats arise every day that could cause large-scale damage to critical infrastructure. Our paper addresses internal security threats associated with the smart grid in a simulated virtual environment involving the DNP3 protocol. We analyze vulnerabilities and perform penetration testing involving man-in-the-middle (MITM) attacks. Ultimately, through theoretical modeling of smart grid attacks using game theory, we optimize our detection and mitigation procedures to reduce cyber threats in the DNP3 environment. An intrusion detection system is used to identify attackers targeting different parts of the smart grid infrastructure, and mitigation techniques help ensure the health of the network. Performing DNP3 security attacks and developing the corresponding detection, prevention, and countermeasure techniques are the goals of this research.
Keywords: game theory; power system security; safety systems; smart power grids; DNP3 attacks; game theory; internal security threats; intrusion detection system; man-in-the-middle; mitigation techniques; simulated virtual environment; smart grids; Delay effects; Game theory; Games; Payloads; Protocols; Security; Smart grids; DNP3; Game Theory; IED; MITM; Malicious Attacks; SCADA; Smart-Grid (ID#: 15-8486)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324661&isnumber=7324628
Dayal, A.; Yi Deng; Tbaileh, A.; Shukla, S., "VSCADA: A Reconfigurable Virtual SCADA Test-Bed for Simulating Power Utility Control Center Operations," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-5, 26-30 July 2015. doi: 10.1109/PESGM.2015.7285822
Abstract: Complex large-scale cyber-physical systems, such as electric power grids, oil & gas pipeline systems, transportation systems, etc. are critical infrastructures that provide essential services for the entire nation. In order to improve systems' security and resilience, researchers have developed many Supervisory Control and Data Acquisition (SCADA) test beds for testing the compatibility of devices, analyzed the potential cyber threats/vulnerabilities, and trained practitioners to operate and protect these critical systems. In this paper, we describe a new test bed architecture for modeling and simulating power system related research. Since the proposed test bed is purely software defined and the communication is emulated, its functionality is versatile. It is able to reconfigure virtual systems for different real control/monitoring scenarios. The unified architecture can seamlessly integrate various kinds of system-level power system simulators (real-time/non real-time) with the infrastructure being controlled or monitored with multiple communication protocols. We depict the design methodology in detail. To validate the usability of the test bed, we implement an IEEE 39-bus power system case study with a power flow analysis and dynamics simulation mimicking a real power utility infrastructure. We also include a cascading failure example to show how system simulators such as Power System Simulator for Engineering (PSS/E), etc. can seamlessly interact with the proposed virtual test bed.
Keywords: SCADA systems; critical infrastructures; electricity supply industry; power system control; power system security; power system simulation; protocols; reconfigurable architectures; IEEE 39-bus power system; SCADA; communication protocol; complex large scale cyber-physical system; critical infrastructure; potential cyber threat; power system modelling; power utility control center operation simulation; reconfigurable virtual SCADA test bed architecture; reconfigure virtual system; supervisory control and data acquisition; system level power system simulation; system resilience; system security improvement; vulnerabilities; Computer architecture; Power system dynamics; Protocols; SCADA systems; Servers; Software; Cyber Physical Systems; Supervisory Control and Data Acquisition (SCADA) Systems; System Integration; Virtual Test bed (ID#: 15-8487)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285822&isnumber=7285590
Yin Xu; Chen-Ching Liu; Schneider, K.P.; Ton, D.T., "Toward a Resilient Distribution System," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-5, 26-30 July 2015. doi: 10.1109/PESGM.2015.7286551
Abstract: Resiliency with respect to extreme events, such as a major hurricane, is considered one of the key features of smart distribution systems by the U.S. Department of Energy (DOE). In this paper, approaches to resilient distribution systems are reviewed and analyzed. Three important measures to enhance resiliency, i.e., utilization of microgrids, distribution automation (DA), and vulnerability analysis, are discussed. A 4-feeder 1069-node test system with microgrids is simulated to demonstrate the feasibility of these measures.
Keywords: distributed power generation; power distribution reliability; DOE; U.S. Department of Energy; distribution automation; microgrids; resilient distribution systems; smart distribution systems; vulnerability analysis; Automation; Hurricanes; Maintenance engineering; Microgrids; Power system reliability; Reliability; Smart grids; Distribution system; distribution automation; extreme event; microgrid; resiliency; service restoration (ID#: 15-8488)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286551&isnumber=7285590
Yamaguchi, Y.; Ogawa, A.; Takeda, A.; Iwata, S., "Cyber Security Analysis of Power Networks by Hypergraph Cut Algorithms," in Smart Grid, IEEE Transactions on, vol. 6, no. 5, pp. 2189-2199, Sept. 2015. doi: 10.1109/TSG.2015.2394791
Abstract: This paper presents exact solution methods for analyzing vulnerability of electric power networks to a certain kind of undetectable attacks known as false data injection attacks. We show that the problems of finding the minimum number of measurement points to be attacked undetectably reduce to minimum cut problems on hypergraphs, which admit efficient combinatorial algorithms. Experimental results indicate that our exact solution methods run as fast as the previous methods, most of which provide only approximate solutions. We also present an algorithm for enumerating all small cuts in a hypergraph, which can be used for finding vulnerable sets of measurement points.
Keywords: directed graphs; power system security; combinatorial algorithms; cyber security analysis; electric power networks; false data injection attacks; hypergraph cut algorithms; Algorithm design and analysis; Computer security; Indexes; Power measurement; Power systems; Vectors; False data injection; hypergraph; minimum cut; power network; security index; state estimation (ID#: 15-8489)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7041192&isnumber=7210244
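The reduction described above ends in a hypergraph minimum cut, where a hyperedge counts as cut if its vertices fall on both sides of a bipartition. The following brute-force sketch conveys the objective (practical only for toy instances, unlike the paper's efficient combinatorial algorithms); the hypergraph in the usage note is hypothetical.

```python
from itertools import combinations

def min_hypergraph_cut(vertices, hyperedges):
    """Brute-force minimum cut of a small hypergraph.
    A hyperedge is cut if it has vertices on both sides of the
    bipartition. Returns (cut value, one side of the best cut)."""
    best = None
    vs = list(vertices)
    n = len(vs)
    for k in range(1, n):                  # one side: non-empty, proper
        for side in combinations(vs, k):
            s = set(side)
            cut = sum(1 for e in hyperedges
                      if s & set(e) and set(e) - s)
            if best is None or cut < best[0]:
                best = (cut, s)
    return best
```

For the hypothetical instance with vertices {1, ..., 5} and hyperedges {1,2,3}, {3,4}, {4,5}, {1,5}, the minimum cut has value 1, separating vertex 2 (which lies in only one hyperedge) from the rest; in the paper's setting, the cut value corresponds to the number of measurement points that must be attacked undetectably.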
Procopiou, A.; Komninos, N., "Current and Future Threats Framework in Smart Grid Domain," in Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, pp. 1852-1857, 8-12 June 2015. doi: 10.1109/CYBER.2015.7288228
Abstract: Due to the smart grid's complex nature and criticality as an infrastructure, it is important to understand the key actors in each domain in depth so that the potential vulnerabilities that can arise are identified. Furthermore, the threats affecting the smart grid's normal functionality must be correctly identified, along with the impact these threats can have, so that appropriate countermeasures are implemented. In this paper a list of vulnerabilities that weaken the smart grid is outlined. A structured analysis of attacks against the three key security objectives across the different layers is also presented, with examples applicable to the smart grid infrastructure and the impact each attack has on the smart grid in each case. Finally, a set of new attack scenarios targeting these security objectives is described, focusing on attacks initiated from the smart home part of the smart grid, together with the potential consequences they can cause.
Keywords: power system security; smart power grids; attack scenarios; correct threat identification; future threats framework; key security objectives; normal functionality; potential vulnerability identification; smart grid domain; Density estimation robust algorithm; Floods; Least squares approximations; Protocols; Security; Smart grids; Smart meters; Attacks; Availability; Confidentiality; Information Security; Integrity; Smart Grid; Threats; Vulnerabilities (ID#: 15-8490)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288228&isnumber=7287893
Shipman, C.; Hopkinson, K.; Lopez, J., "Con-Resistant Trust for Improved Reliability in a Smart Grid Special Protection System," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-1, 26-30 July 2015. doi: 10.1109/PESGM.2015.7286475
Abstract: This article applies a con-resistant trust mechanism to improve the performance of a communications-based special protection system, enhancing its effectiveness and resiliency. Smart grids incorporate modern information technologies to increase reliability and efficiency through better situational awareness. However, with the benefits of this new technology come added risks associated with threats and vulnerabilities to the technology and to the critical infrastructure it supports. The research in this article uses con-resistant trust to quickly identify malicious or malfunctioning (untrustworthy) protection system nodes and mitigate instabilities. The con-resistant trust mechanism allows protection system nodes to make trust assessments based on each node's cooperative and defective behaviors, observed via periodically reported frequency readings. The trust architecture is tested in experiments comparing a simulated special protection system with a con-resistant trust mechanism to one without it, via an analysis of variance statistical model. Simulation results show promise for the proposed con-resistant trust mechanism.
Keywords: power system protection; power system reliability; smart power grids; conresistant trust; smart grid special protection system; variance statistical model; Computer architecture; Computers; Crystals; Information technology; Reliability engineering; Smart grids (ID#: 15-8491)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286475&isnumber=7285590
Basu, C.; Padmanaban, M.; Guillon, S.; de Montigny, M.; Kamwa, I., "Combining Multiple Sources Of Data For Situational Awareness Of Geomagnetic Disturbances," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-5, 26-30 July 2015. doi: 10.1109/PESGM.2015.7286179
Abstract: With the increasing complexity of the grid and increasing vulnerability to large-scale, natural events, control room operators need tools to enable them to react to events faster. This is especially true in the case of high impact events such as geomagnetic disturbances (GMDs). In this paper, we present a data-driven approach to building a predictive model of GMDs that combines information from multiple sources such as synchrophasors, magnetometers, etc. We evaluate the utility of our model on real GMD events and discuss some interesting results.
Keywords: geomagnetism; geophysical techniques; magnetometers; phasor measurement; power grids; power system control; GMD events; control room operators; geomagnetic disturbances; magnetometers; predictive model; situational awareness; synchrophasors; Delay effects; Earth; Harmonic analysis; Magnetometers; Monitoring; Power system harmonics; geomagnetic disturbances; synchrophasors; wide-area situational awareness (ID#: 15-8492)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286179&isnumber=7285590
Yihai Zhu; Jun Yan; Yufei Tang; Yan Sun; Haibo He, "Joint Substation-Transmission Line Vulnerability Assessment Against the Smart Grid," in Information Forensics and Security, IEEE Transactions on, vol. 10, no. 5, pp. 1010-1024, May 2015. doi: 10.1109/TIFS.2015.2394240
Abstract: Power grids are often run near their operational limits because of increasing electricity demand, where even small disturbances could trigger major blackouts. Attacks are potential threats that can trigger large-scale cascading failures in the power grid. In particular, attacks aim to make substations/transmission lines lose functionality through either physical sabotage or cyber attacks. Previously, attacks were investigated from substation-only/transmission-line-only perspectives, assuming attacks can occur only on substations/transmission lines. In this paper, we introduce the joint substation-transmission line perspective, which assumes attacks can happen on substations, transmission lines, or both. The introduced perspective is a natural extension of the substation-only and transmission-line-only perspectives, and it leads to discovering many joint substation-transmission line vulnerabilities. Furthermore, we investigate joint substation-transmission line attack strategies. In particular, we design a new metric, the component interdependency graph (CIG), and propose a CIG-based attack strategy. In simulations, we adopt the IEEE 30-bus system, the IEEE 118-bus system, and the Bay Area power grid as test benchmarks, and use the extended degree-based and load attack strategies as comparison schemes. Simulation results show the CIG-based attack strategy has stronger attack performance.
Keywords: IEEE standards; demand side management; failure analysis; graph theory; power engineering computing; power transmission lines; power transmission reliability; security of data; smart power grids; substation protection; CIG-based attack strategy; IEEE 118 bus system; component interdependency graph; cyber attacks; electricity demand; joint substation-transmission line vulnerability assessment; large-scale cascading failure; load attack strategy; physical sabotages; smart power grid blackouts; Load modeling; Measurement; Power system faults; Power system protection; Power transmission lines; Smart grids; Attack; Cascading Failures; Security; The Smart Grid; The smart grid; Vulnerability Analysis; attack; cascading failures; security; vulnerability analysis (ID#: 15-8493)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7015564&isnumber=7073680
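The degree-based attack strategy used above as a comparison scheme can be sketched generically: repeatedly remove the currently highest-degree node and measure the surviving largest connected component. This is an illustration of the general idea, not the paper's extended strategy or its CIG metric, and the graphs in the usage note are hypothetical.

```python
from collections import deque

def largest_component(nodes, edges):
    """Size of the largest connected component, via BFS."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for n in nodes:
        if n in seen:
            continue
        q, comp = deque([n]), 0
        seen.add(n)
        while q:
            cur = q.popleft()
            comp += 1
            for nb in adj[cur]:
                if nb not in seen:
                    seen.add(nb)
                    q.append(nb)
        best = max(best, comp)
    return best

def degree_attack(nodes, edges, k):
    """Remove the k highest-degree nodes one at a time, recomputing
    degrees after each removal; return the largest surviving component."""
    nodes, edges = set(nodes), set(edges)
    for _ in range(k):
        deg = {n: 0 for n in nodes}
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        target = max(nodes, key=lambda n: deg[n])
        nodes.discard(target)
        edges = {(u, v) for u, v in edges if target not in (u, v)}
    return largest_component(nodes, edges)
```

On a hypothetical 5-node star graph, removing the single highest-degree node (the hub) shatters the network into isolated leaves, which is exactly the kind of fragmentation a vulnerability assessment measures.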
Zeng, W.; Zhang, Y.; Chow, Mo-Yuen, "Resilient Distributed Energy Management Subject to Unexpected Misbehaving Generation Units," in Industrial Informatics, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 30 October 2015. doi: 10.1109/TII.2015.2496228
Abstract: Distributed energy management algorithms are being developed for the smart grid to efficiently and economically allocate electric power among connected distributed generation units and loads. The use of such algorithms provides flexibility, robustness, and scalability, but it also increases the vulnerability of the smart grid to unexpected faults and adversaries. The potential consequences of compromising the power system can be devastating to public safety and the economy. Thus, it is important to maintain acceptable performance of distributed energy management algorithms in a smart grid environment under malicious cyberattacks. In this paper, a neighborhood-watch based distributed energy management algorithm is proposed to guarantee accurate control computation in solving the economic dispatch problem in the presence of compromised generation units. The proposed method achieves system resilience by performing reliable distributed control without a central coordinator and allowing all well-behaving generation units to reach the optimal operating point asymptotically. The effectiveness of the proposed method is demonstrated through case studies under several different adversary scenarios.
Keywords: Algorithm design and analysis; Energy management; Integrated circuits; Resilience; Security; Smart grids; Economic dispatch; neighborhood-watch; resilient distributed energy management (ID#: 15-8494)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312956&isnumber=4389054
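To make the neighborhood-watch idea above concrete, the following sketch (not the authors' algorithm; the topology, tolerance values, and update rule are invented for illustration) shows median-based outlier rejection letting well-behaved units reach consensus despite one unit that keeps reporting a bogus incremental cost:

```python
# Illustrative sketch: each generation unit repeatedly averages the
# incremental-cost estimates reported by its neighbours, but first
# discards reports that deviate too far from the neighbourhood median --
# a simple "neighbourhood watch" against a compromised unit.
from statistics import median

def consensus_step(values, neighbours, tol):
    """One synchronous consensus round with median-based outlier rejection.

    values: dict node -> current incremental-cost estimate
    neighbours: dict node -> list of neighbouring nodes
    tol: reports further than tol from the neighbourhood median are ignored
    """
    new_values = {}
    for node, nbrs in neighbours.items():
        reports = [values[n] for n in nbrs] + [values[node]]
        centre = median(reports)
        trusted = [v for v in reports if abs(v - centre) <= tol]
        new_values[node] = sum(trusted) / len(trusted)
    return new_values

# Four well-behaved units plus one compromised unit (node 4) that keeps
# injecting an absurd incremental cost every round.
neighbours = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
values = {0: 10.0, 1: 11.0, 2: 9.5, 3: 10.5, 4: 10.2}

for _ in range(50):
    values = consensus_step(values, neighbours, tol=5.0)
    values[4] = 1000.0  # the misbehaving unit overwrites its state

well_behaved = [values[i] for i in range(4)]
spread = max(well_behaved) - min(well_behaved)
```

After 50 rounds the well-behaved units agree on a common operating point while the compromised unit's reports are simply ignored, mirroring (in a much simplified form) the resilience property the abstract describes.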
Shipman, C.M.; Hopkinson, K.M.; Lopez, J., "Con-Resistant Trust for Improved Reliability in a Smart-Grid Special Protection System," in Power Delivery, IEEE Transactions on, vol. 30, no. 1, pp. 455-462, Feb. 2015. doi: 10.1109/TPWRD.2014.2358074
Abstract: This paper applies a con-resistant trust mechanism to enhance the effectiveness and resiliency of a communications-based special protection system. Smart grids incorporate modern information technologies to increase reliability and efficiency through better situational awareness. However, with the benefits of this new technology come the added risks associated with threats and vulnerabilities to the technology and to the critical infrastructure it supports. The research in this paper uses con-resistant trust to quickly identify malicious or malfunctioning (untrustworthy) protection system nodes to mitigate instabilities. The con-resistant trust mechanism allows protection system nodes to make trust assessments based on the node's cooperative and defective behaviors. These behaviors are observed via frequency readings which are periodically reported. The trust architecture is tested in experiments by comparing a simulated special protection system with a con-resistant trust mechanism to one without the mechanism via an analysis of the variance statistical model. Simulation results show promise for the proposed con-resistant trust mechanism.
Keywords: power system protection; power system reliability; smart power grids; statistical analysis; con-resistant trust mechanism; critical infrastructure; frequency readings; improved reliability; malfunctioning protection system; malicious protection system; modern information technology; situational awareness; smart grid; special protection system; trust assessments; untrustworthy protection system; variance statistical model; Generators; Government; Load modeling; Peer-to-peer computing; Resistance; Smart grids; Time-frequency analysis; Con-resistant trust; critical infrastructure; reputation-based trust; smart grid; special protection systems (ID#: 15-8495)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6898851&isnumber=7017601
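A minimal sketch of the con-resistant intuition above (the gain and penalty constants are invented, not the paper's update rule): trust earned through many cooperative observations is largely erased by a single defection, so a node cannot build up trust and then exploit it.

```python
# Hedged sketch of a con-resistant trust update: cooperative observations
# raise trust slowly (with diminishing returns), while a defection lowers
# it sharply, so "conning" the system is unprofitable.
def update_trust(trust, cooperative, gain=0.05, penalty=0.4):
    """Return the new trust value in [0, 1] after one observation."""
    if cooperative:
        return min(1.0, trust + gain * (1.0 - trust))
    return max(0.0, trust - penalty)

trust = 0.5
for _ in range(30):                    # a long run of good behaviour...
    trust = update_trust(trust, True)
earned = trust
trust = update_trust(trust, False)     # ...then a single defection
```

Thirty cooperative reports lift trust to roughly 0.89, and one defection drops it back below the starting value, which is the asymmetry that makes the mechanism con-resistant.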
Seokcheol Lee; Hyunwoo Lim; Woong Go; Haeryong Park; Taeshik Shon, "Logical Architecture of HAN-Centric Smartgrid Model," in Platform Technology and Service (PlatCon), 2015 International Conference on, pp. 41-42, 26-28 Jan. 2015. doi: 10.1109/PlatCon.2015.18
Abstract: The home area network, located closest to the customer, handles the customer's private information in the smart grid. Thus, it is considered a security-sensitive area of the smart grid, and there could be undiscovered cyber security threats and system vulnerabilities. Therefore, a reference model is required in order to analyze security requirements and enhance the security of the home area network. In this paper, a home area network centric smart grid logical architecture is proposed, informed by an analysis of previous reference models. The proposed logical architecture focuses on communication routes and customer affinity.
Keywords: computer network security; data privacy; home networks; power engineering computing; smart power grids; HAN-centric smartgrid model; communication routes; customer affinity; customer private information; cyber security threat; home area network centric smartgrid logical architecture; home area network security enhancement; security requirements; security sensitive area; system vulnerability; Analytical models; Computer architecture; Electricity; Energy management; NIST; Security; Smart meters; Home area network; Smartgird; communication; customer domain (ID#: 15-8496)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079632&isnumber=7079537
Qiu, J.; Yang, H.; Dong, Z.Y.; Zhao, J.; Luo, F.; Lai, M.; Wong, K.P., "A Probabilistic Transmission Planning Framework for Reducing Network Vulnerability to Extreme Events," in Power Systems, IEEE Transactions on , vol. PP, no. 99, pp. 1-11, 03 December 2015. doi: 10.1109/TPWRS.2015.2498611
Abstract: The restructuring of the electric power industry has brought plenty of challenges for transmission expansion planning (TEP), mainly due to uncertainties. The commonly used probabilistic TEP approach requires the network to meet an acceptable risk criterion. However, a series of blackouts in recent years caused by extreme weather-related events have raised concerns about network vulnerability as captured by the expected risk value. In this paper, we propose the concept that TEP should be economically adjusted in order to make the network less vulnerable to extreme events (EEs) caused by climate change, e.g., floods or ice storms. We first give explicit definitions of the economic adjustment (EA) index and the adjusted risk value. Then we formulate our model as a risk-based decision making process while satisfying the deterministic N-1 criterion. The proposed approach is tested on the IEEE 118-bus system. Results based on various risk aversion levels are given and comparison studies with other risk-based TEP approaches have been performed. Also, sensitivity analysis of parameter settings has been conducted. According to the numerical results, the proposed risk-based TEP model is a flexible decision-making tool, which can help decision makers make a tradeoff between economy and security.
Keywords: Economics; Indexes; Load modeling; Planning; Probability density function; Uncertainty; Wind power generation; Power system planning; extreme events; risk management; wind power (ID#: 15-8497)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346515&isnumber=4374138
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications.
![]() |
Provenance 2015 |
Provenance refers to information about the origin and activities of system data and processes. With the growth of shared services and systems, including social media, cloud computing, and service-oriented architectures, finding tamperproof methods for tracking files is a major challenge. Research into the security of software of unknown provenance (SOUP) is also included. Provenance is important to the Science of Security relative to human behavior, metrics, resilience, and composability. The work cited here was presented in 2015.
Jiun Yi Yap; Tomlinson, A., "Provenance-Based Attestation for Trustworthy Computing," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol.1, pp. 630-637, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.428
Abstract: We present a new approach to the attestation of a computer's trustworthiness that is founded on provenance data of its key components. The prevailing method of attestation relies on comparing integrity measurements of the key components of a computer against a reference database of trustworthy integrity measurements. An integrity measurement is obtained by passing a software binary or any component through a hash function, but this value carries little information unless there is a reference database. On the other hand, the semantics of provenance contain more detail: expressive information such as the component's history and its causal dependencies with other elements of a computer. Hence, we argue that provenance data can be used as evidence of trustworthiness during attestation. In this paper, we describe a complete design for provenance-based attestation. The design development is guided by goals and covers all the phases of this approach. We discuss collecting provenance data and using the PROV data model to represent provenance data. To determine if provenance data of a component can provide evidence of its trustworthiness, we have developed a rule specification grammar and provided a discourse on using the rules. We then build the key mechanisms of this form of attestation by exploring approaches to capture provenance data and look at transforming the trust evaluation rules to the XQuery language before running the rules against an XML-based record of provenance data. Finally, the design is analyzed using threat modelling.
Keywords: XML; data models; trusted computing; PROV data model; XML based provenance data record; XQuery language; attestation prevailing method; computer trustworthiness attestation; hash function; key components; provenance data representation; provenance semantics; provenance-based attestation; rule specification grammar; software binary; threat modelling; trust evaluation rules; trustworthiness; trustworthy computing; trustworthy integrity measurements; Computational modeling; Computers; Data models; Databases; Semantics; Software; Software measurement; attestation; provenance; trustworthy computing (ID#: 15-8519)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345336&isnumber=7345233
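The rule-based attestation described above can be illustrated with a toy check (the record fields and approved-activity names below are hypothetical; the paper itself compiles trust rules to XQuery over XML provenance rather than using Python):

```python
# Toy trust rule over PROV-style provenance records: a computer is
# attested as trustworthy only if every entity in its component history
# was generated by an approved activity. Field names and the approved
# set are invented for illustration.
APPROVED_BUILDERS = {"signed-compiler", "vendor-installer"}  # hypothetical

provenance = [
    {"entity": "libcrypto.so", "wasGeneratedBy": "signed-compiler"},
    {"entity": "app.bin",      "wasGeneratedBy": "vendor-installer"},
]

def attest(records, approved):
    """Return True if every entity's generating activity is approved."""
    return all(r["wasGeneratedBy"] in approved for r in records)

trusted = attest(provenance, APPROVED_BUILDERS)
tampered = provenance + [{"entity": "rootkit.ko", "wasGeneratedBy": "unknown"}]
untrusted = attest(tampered, APPROVED_BUILDERS)
```

Unlike a bare hash comparison, the rule can articulate *why* a component is untrustworthy (an unapproved generating activity in its history), which is the expressiveness argument the abstract makes for provenance semantics.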
Bany Taha, M.M.; Chaisiri, S.; Ko, R.K.L., "Trusted Tamper-Evident Data Provenance," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 646-653, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.430
Abstract: Data provenance, the origin and derivation history of data, is commonly used for security auditing, forensics and data analysis. While provenance loggers provide evidence of data changes, the integrity of the provenance logs is also critical for the integrity of the forensics process. However, to our best knowledge, few solutions are able to fully satisfy this trust requirement. In this paper, we propose a framework to enable tamper-evidence and preserve the confidentiality and integrity of data provenance using the Trusted Platform Module (TPM). Our framework also stores provenance logs in trusted and backup servers to guarantee the availability of data provenance. Tampered provenance logs can be discovered and consequently recovered by retrieving the original logs from the servers. Leveraging on TPM's technical capability, our framework guarantees data provenance collected to be admissible, complete, and confidential. More importantly, this framework can be applied to capture tampering evidence in large-scale cloud environments at system, network, and application granularities. We applied our framework to provide tamper-evidence for Progger, a cloud-based, kernel-space logger. Our results demonstrate the ability to conduct remote attestation of Progger logs' integrity, and uphold the completeness, confidential and admissible requirements.
Keywords: cloud computing; data analysis; digital forensics; file servers; trusted computing; Progger log integrity; TPM; backup server; cloud environments; cloud-based logger; data analysis; data provenance confidentiality; data provenance integrity; forensic process analysis; kernel-space logger; provenance logger integrity; security auditing; trusted platform module; trusted server; trusted tamper-evident data provenance; Cloud computing; Generators; Kernel; Reliability; Runtime; Servers; Virtual machining; Accountability in Cloud Computing; Cloud Computing; Data Provenance; Data Security; Remote Attestation; Tamper Evidence; Trusted Computing; Trusted Platform Module (ID#: 15-8520)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345338&isnumber=7345233
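The tamper-evidence property above rests on committing each log entry to its predecessor. This sketch shows only the bare hash-chaining idea, without the TPM-backed key handling and trusted/backup servers the paper's framework relies on:

```python
# Hash-chained provenance log: each entry's link hashes the previous
# link together with the entry, so altering any earlier entry breaks
# every subsequent link and is detected on verification.
import hashlib

def chain(entries):
    """Return (entry, link) pairs for a provenance log."""
    links, prev = [], b"genesis"
    for e in entries:
        prev = hashlib.sha256(prev + e.encode()).digest()
        links.append((e, prev))
    return links

def verify(links):
    """Recompute the chain and compare against the stored links."""
    prev = b"genesis"
    for e, h in links:
        prev = hashlib.sha256(prev + e.encode()).digest()
        if prev != h:
            return False
    return True

log = chain(["open /etc/passwd", "read 4096 bytes", "close"])
ok = verify(log)

# An attacker rewrites one entry but cannot recompute the stored links.
tampered = [("read 9999 bytes", h) if e.startswith("read") else (e, h)
            for e, h in log]
bad = verify(tampered)
```

In the paper's design the chain head would additionally be sealed by the TPM, so even an attacker with full control of the log store cannot forge a consistent chain.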
Liang Chen; Edwards, P.; Nelson, J.D.; Norman, T.J., "An Access Control Model for Protecting Provenance Graphs," in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, pp. 125-132, 21-23 July 2015. doi: 10.1109/PST.2015.7232963
Abstract: Securing provenance has recently become an important research topic, resulting in a number of models for protecting access to provenance. Existing work has focused on graph transformation mechanisms that supply a user with a provenance view that satisfies both access control policies and validity constraints of provenance. However, it is not always possible to satisfy both simultaneously, because these two conditions are often inconsistent, which requires sophisticated conflict resolution strategies to be put in place. In this paper we develop a new access control model tailored for provenance. In particular, we explicitly take into account validity constraints of provenance when specifying certain parts of provenance to which access is restricted. Hence, a provenance view that is granted to a user by our authorisation mechanism automatically satisfies the validity constraints. Moreover, we propose algorithms that allow provenance owners to deploy fine-grained access control for their provenance data.
Keywords: authorisation; graph theory; access control model; access control policy; authorisation mechanism; fine-grained access control; graph transformation mechanism; provenance graph; provenance security; Authorization; Computers; Data models; Object recognition; Transforms (ID#: 15-8521)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232963&isnumber=7232940
Taotao Ma; Hua Wang; Jianming Yong; Yueai Zhao, "Causal Dependencies of Provenance Data in Healthcare Environment," in Computer Supported Cooperative Work in Design (CSCWD), 2015 IEEE 19th International Conference on, pp. 643-648, 6-8 May 2015. doi: 10.1109/CSCWD.2015.7231033
Abstract: The Open Provenance Model (OPM) is a provenance model that can capture provenance data in terms of causal dependencies among the provenance data model components. Causal dependencies are relationships between an event (the cause) and a second event (the effect), where the second event is understood as a physical consequence of the first. Causal dependencies can represent a set of entities that are necessary and sufficient to explain the presence of another entity. A provenance model is able to describe the provenance of any data at an abstract layer, but does not explicitly capture causal dependencies, which is a vital challenge since OPM lacks these relations, especially in the healthcare environment. In this paper, we analyse the causal dependencies between entities in a medical workflow system with OPM graphs.
Keywords: authorisation; causality; graph theory; health care; medical information systems; open systems; OPM graph; access control; causal dependency; health care environment; medical workflow system; open provenance model; provenance data; Artificial intelligence; Blood pressure; Kidney; Lifting equipment; Medical services; Registers; access control; causal dependencies; provenance; security (ID#: 15-8522)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231033&isnumber=7230917
Mohy, N.N.; Mokhtar, H.M.O.; El-Sharkawi, M.E., "Delegation Enabled Provenance-based Access Control Model," in Science and Information Conference (SAI), 2015, pp. 1374-1379, 28-30 July 2015. doi: 10.1109/SAI.2015.7237321
Abstract: Any organization aims to achieve its business objectives, secure its information, and conform to policies and regulations. Provenance can help organizations achieve these goals. As provenance stores the history of the organization's workflow, it can be used for auditing, compliance, error checking, and securing the business. Provenance Based Access Control (PBAC) is one of the new access control models used to secure data based on its provenance. This paper introduces the Delegation Provenance based Access Control (DPBAC) model, which accounts for the delegation of access rights, and also introduces an extension to the Open Provenance Model (OPM) that stores the history of delegations for auditing purposes.
Keywords: authorisation; open systems; DPBAC model; OPM; access control model; auditing purpose; delegation provenance based access control; information security; open provenance model; Access control; Data models; History; Organizations; Permission; Process control; Standards organizations; OPM; Provenance; access control; delegation (ID#: 15-8523)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237321&isnumber=7237120
Cuzzocrea, A., "Provenance Research Issues and Challenges in the Big Data Era," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, pp. 684-686, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.345
Abstract: Provenance of big data is a hot topic in the database and data mining research communities. Basically, provenance is the process of detecting the lineage and the derivation of data and data objects, and it plays a major role in database management systems as well as in workflow management systems and distributed systems. Despite this, research on big data provenance is still in its embryonic phase, and much work remains to be done in this area. Inspired by these considerations, in this paper we provide an overview of relevant issues and challenges in the context of big data provenance research, also highlighting possible future efforts within these research directions.
Keywords: Big Data; data mining; database management systems; distributed processing; big data era; big data provenance; data mining research communities; database management systems; distributed systems; embryonic phase; provenance research issues; workflow management systems; Big data; Computational modeling; Conferences; Context; Data privacy; Databases; Security; Data Provenance; Provenance of Big Data (ID#: 15-8524)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273464&isnumber=7273299
Katilu, V.M.; Franqueira, V.N.L.; Angelopoulou, O., "Challenges of Data Provenance for Cloud Forensic Investigations," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 312-317, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.54
Abstract: Cloud computing has gained popularity due to its efficiency, robustness and cost effectiveness. Carrying out digital forensic investigations in the cloud is currently a relevant and open issue. The root of this issue is the fact that servers cannot be physically accessed, coupled with the dynamic and distributed nature of cloud computing with regards to data processing and storage. This renders traditional methods of evidence collection impractical. The use of provenance data in cloud forensics is critical as it provides forensic investigators with data history in terms of people, entities and activities involved in producing related data objects. Therefore, cloud forensics requires effective provenance collection mechanisms. This paper provides an overview of current provenance challenges in cloud computing and identifies limitations of current provenance collection mechanisms. Recommendations for additional research in digital provenance for cloud forensics are also presented.
Keywords: cloud computing; digital forensics; cloud computing; cloud digital forensic investigation; data history; data objects; data processing; data provenance; data storage; evidence collection; provenance collection mechanism; Cloud computing; Forensics; Kernel; Monitoring; Reliability; Security; Servers; Cloud Computing; Cloud Forensics; Provenance (ID#: 15-8525)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299931&isnumber=7299862
Cong Liao; Squicciarini, A., "Towards Provenance-Based Anomaly Detection in MapReduce," in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, pp. 647-656, 4-7 May 2015. doi: 10.1109/CCGrid.2015.16
Abstract: MapReduce enables parallel and distributed processing of vast amount of data on a cluster of machines. However, such computing paradigm is subject to threats posed by malicious and cheating nodes or compromised user submitted code that could tamper data and computation since users maintain little control as the computation is carried out in a distributed fashion. In this paper, we focus on the analysis and detection of anomalies during the process of MapReduce computation. Accordingly, we develop a computational provenance system that captures provenance data related to MapReduce computation within the MapReduce framework in Hadoop. In particular, we identify a set of invariants against aggregated provenance information, which are later analyzed to uncover anomalies indicating possible tampering of data and computation. We conduct a series of experiments to show the efficiency and effectiveness of our proposed provenance system.
Keywords: data analysis; parallel processing; security of data; Hadoop; MapReduce computation; computational provenance system; data tampering; provenance-based anomaly detection; Access control; Cloud computing; Containers; Distributed databases; Monitoring; Yarn; MapReduce; computation integrity; logging; provenance (ID#: 15-8526)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152530&isnumber=7152455
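One example of the invariants mentioned above (greatly simplified; the paper's provenance system captures far richer information inside Hadoop, and the record shapes here are invented) is that the records emitted by mappers for a key must match the records the reducer claims to have consumed:

```python
# Simplified invariant check over aggregated MapReduce provenance: for
# every key, the count of records the mappers emitted must equal the
# count the reducer reports consuming. A mismatch flags possible
# tampering by a cheating or compromised node.
from collections import Counter

def check_invariant(emitted_per_mapper, consumed_per_key):
    """Return keys whose mapper-emitted and reducer-consumed counts disagree."""
    emitted = Counter()
    for counts in emitted_per_mapper.values():
        emitted.update(counts)
    keys = set(emitted) | set(consumed_per_key)
    return sorted(k for k in keys if emitted[k] != consumed_per_key.get(k, 0))

# Provenance aggregated from two mappers, and a reducer-side claim in
# which a cheating node inflated the count for "dog".
mapper_logs = {"mapper-0": {"cat": 3, "dog": 1}, "mapper-1": {"cat": 2}}
anomalies = check_invariant(mapper_logs, {"cat": 5, "dog": 2})
```

Flagging `"dog"` here corresponds to the paper's idea of uncovering tampered computation by analyzing aggregated provenance rather than trusting each node's output.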
Khan, R.; Hasan, R., "Fuzzy Authentication Using Interaction Provenance in Service Oriented Computing," in Services Computing (SCC), 2015 IEEE International Conference on, pp. 170-177, June 27 2015-July 2 2015. doi: 10.1109/SCC.2015.32
Abstract: In service oriented computing, authentication factors have their vulnerabilities when considered exclusively. Cross-platform and service composition architectures require a complex integration procedure and limit adoptability of newer authentication models. Authentication is generally based on a binary success or failure and relies on credentials proffered at the present moment without considering how or when the credentials were obtained by the subject. The resulting access control engines suffer from rigid service policies and complexity of management. In contrast, social authentication is based on the nature, quality, and length of previous encounters with each other. We posit that human-to-machine authentication is a similar causal effect of an earlier interaction with the verifying party. We use this notion to propose interaction provenance as the only unified representation model for all authentication factors in service oriented computing. Interaction provenance uses the causal relationship of past events to leverage service composition, cross-platform integration, timeline authentication, and easier adoption of newer methods. We extend our model with fuzzy authentication using past interactions and linguistic policies. The paper presents an interaction provenance recording and authentication protocol and a proof-of-concept implementation with extensive experimental evaluation.
Keywords: fuzzy set theory; security of data; service-oriented architecture; authentication factors; authentication protocol; complex integration procedure; cross-platform architectures; cross-platform integration; fuzzy authentication; human-to-machine authentication; interaction provenance; interaction provenance recording; leverage service composition; proof-of-concept implementation; service composition architectures; service oriented computing; social authentication; timeline authentication; Access control; Authentication; Computational modeling; IP networks; Pragmatics; Protocols; Servers; Access Control; Authentication; Events; Fuzzy; Interaction Provenance; Persona; Security (ID#: 15-8527)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207350&isnumber=7207317
Levchuk, G.; Blasch, E., "Probabilistic Graphical Models for Multi-Source Fusion from Text Sources," in Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on, pp.1-10, 26-28 May 2015. doi: 10.1109/CISDA.2015.7208640
Abstract: In this paper we present probabilistic graph fusion algorithms to support information fusion and reasoning over multi-source text media. Our methods resolve misinformation by combining knowledge similarity analysis and conflict identification with source characterization. For experimental purposes, we used the dataset of the articles about current military conflict in Eastern Ukraine. We show that automated knowledge fusion and conflict detection is feasible and high accuracy of detection can be obtained. However, to correctly classify mismatched knowledge fragments as misinformation versus additionally reported facts, the knowledge reliability and credibility must be assessed. Since the true knowledge must be reported by many reliable sources, we compute knowledge frequency and source reliability by incorporating knowledge provenance and analyzing historical consistency between the knowledge reported by the sources in our dataset.
Keywords: information dissemination; pattern classification; probability; reliability; sensor fusion; Eastern Ukraine; information fusion; knowledge credibility; knowledge fusion; knowledge reliability; knowledge similarity analysis; mismatched knowledge fragment classification; multisource fusion; multisource text media; probabilistic graph fusion algorithm; probabilistic graphical model; source characterization; source reliability; Data mining; Government; Information retrieval; Joints; Media; Probabilistic logic; Semantics; graphical fusion; information wars; knowledge graph; misinformation detection; multi-source fusion; open source exploitation; situation assessment (ID#: 15-8528)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208640&isnumber=7208613
Christou, C.T.; Jacyna, G.M.; Goodman, F.J.; Deanto, D.G.; Masters, D., "Geolocation Analysis Using Maxent And Plant Sample Data," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225273
Abstract: A study was conducted to assess the feasibility of geolocation based on correctly identifying pollen samples found on goods or people for purposes of compliance with U.S. import laws and criminal forensics. The analysis was based on Neotropical plant data sets from the Global Biodiversity Information Facility. The data were processed through the software algorithm Maxent that calculates plant probability geographic distributions of maximum entropy, subject to constraints. Derivation of single and joint continuous probability densities of geographic points, for single and multiple taxa occurrences, were performed. Statistical metrics were calculated directly from the output of Maxent for single taxon probabilities and were mathematically derived for joint taxa probabilities. Predictions of likeliest geographic regions at a given probability percentage level were made, along with the total corresponding geographic ranges. We found that joint probability distributions greatly restrict the areas of possible provenance of pollen samples.
Keywords: entropy; geographic information systems; law; sampled data systems; statistical distributions; Maxent; Neotropical plant data sets; U.S. import laws; criminal forensics; geolocation analysis; global biodiversity information facility; joint probability distributions; maximum entropy; plant sample data; pollen samples; probability geographic distributions; software algorithm; statistical metrics; Geology; Joints; Logistics; Measurement; Probability distribution; Standards; Neotropics; environmental variables; forensics geolocation; marginal and joint probability distributions; maximum entropy; plant occurrences; pollen analytes; statistical metrics (ID#: 15-8529)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225273&isnumber=7190491
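The narrowing effect of joint distributions reported above can be seen in a toy example with made-up cell probabilities (the paper derives real geographic densities with Maxent): combining per-taxon distributions, assuming independence, shrinks the region needed to hold 80% of the probability mass.

```python
# Toy joint-probability geolocation: per-taxon occurrence probabilities
# over grid cells are multiplied (independence assumed) and renormalised;
# the smallest 80%-credible region shrinks for the joint distribution.
def normalise(grid):
    total = sum(grid.values())
    return {cell: p / total for cell, p in grid.items()}

taxon_a = {"north": 0.40, "centre": 0.35, "south": 0.25}
taxon_b = {"north": 0.05, "centre": 0.60, "south": 0.35}

joint = normalise({c: taxon_a[c] * taxon_b[c] for c in taxon_a})

def credible_region(grid, level):
    """Smallest set of cells whose probabilities sum to at least `level`."""
    region, mass = [], 0.0
    for cell in sorted(grid, key=grid.get, reverse=True):
        region.append(cell)
        mass += grid[cell]
        if mass >= level:
            break
    return region

single = credible_region(taxon_a, 0.8)    # one taxon: all 3 cells needed
combined = credible_region(joint, 0.8)    # joint: the region shrinks
```

With one taxon, 80% of the mass needs all three cells; the joint distribution concentrates it in two, mirroring the paper's finding that joint distributions greatly restrict the areas of possible provenance.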
Dogan, G.; Avincan, K.; Brown, T., "Provenance and Trust as New Factors for Self-Organization in a Wireless Sensor Network," in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, pp. 544-547, 16-19 May 2015. doi: 10.1109/SIU.2015.7129881
Abstract: Trust can be an important component of wireless sensor networks for the believability of the produced data, and trust history is a crucial asset in deciding the trust of the data. In our previous work, we developed an architecture called ProTru and showed how provenance can be used for registering previous trust records and other information such as node type, data type, node location, and the average of historical data. We designed a distributed trust enhancing architecture using only local provenance during sensor fusion with a low communication overhead. Our network is cognitive in the sense that our system reacts automatically upon detecting low trust and restructures itself. In this work, we extend our previous architecture by storing dataflow provenance graphs. This feature will enhance the cognitive abilities of our system by giving the network the capability of remembering past network snapshots.
Keywords: graph theory; sensor fusion; telecommunication security; wireless sensor networks; ProTru architecture; cognitive abilities; data type; dataflow provenance graphs; distributed trust enhancing architecture; historical data; local provenance; low communication overhead; node location; node type; past network snapshots; self-organization; sensor fusion; trust records; wireless sensor network; Cities and towns; Conferences; History; Military communication; Mobile communication; Security; Wireless sensor networks; Distributed Intelligence; Provenance; Self Organization; Trust; Wireless Sensor Networks (ID#: 15-8530)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129881&isnumber=7129794
Xin Li; Joshi, C.; Tan, A.Y.S.; Ko, R.K.L., "Inferring User Actions from Provenance Logs," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 742-749, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.442
Abstract: Progger, a kernel-spaced cloud data provenance logger which provides fine-grained data activity records, was recently developed to empower cloud stakeholders to trace data life cycles within and across clouds. Progger logs have the potential to allow analysts to infer user actions and create a data-centric behaviour history in a cloud computing environment. However, the Progger logs are complex and noisy, so this potential currently cannot be met. This paper proposes a statistical approach to efficiently infer the user actions from the Progger logs. Inferring logs which capture activities at kernel-level granularity is not a straightforward endeavour. This paper overcomes this challenge through an approach which shows a high level of accuracy. The key aspects of this approach are identifying the data preprocessing steps and attribute selection. We then use four standard classification models and identify the model which provides the most accurate inference on user actions. To our best knowledge, this is the first work of its kind. We also discuss a number of possible extensions to this work. Possible future applications include the ability to predict an anomalous security activity before it occurs.
Keywords: cloud computing; data loggers; data mining; inference mechanisms; pattern classification; security of data; Progger logs; anomalous security activity prediction; attribute selection; classification models; cloud computing environment; data activity records; data life cycle tracing; data preprocessing step identification; data-centric behaviour history; kernel-spaced cloud data provenance logger; log mining; provenance logs; user action inference; Cloud computing; Data models; Data preprocessing; Data security; Kernel; Testing; Training data; Cloud Computing; Data Provenance; Data Security; Data-centric Logger; Log Mining; Progger; Provenance Mining; User Actions (ID#: 15-8531)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345350&isnumber=7345233
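As a stand-in for the classification step described above (the paper evaluates four standard classifiers on real Progger features; the feature vectors and action labels here are invented), a tiny nearest-centroid model illustrates mapping noisy kernel-level log features to user actions:

```python
# Nearest-centroid sketch: each user action is summarised by the mean of
# its training feature vectors (here: read count, write count, open
# count per window), and a new log window is labelled with the action
# whose centroid is closest in squared Euclidean distance.
def centroid(rows):
    n = len(rows)
    return tuple(sum(col) / n for col in zip(*rows))

def train(samples):
    """samples: dict action -> list of (reads, writes, opens) vectors."""
    return {action: centroid(rows) for action, rows in samples.items()}

def predict(model, features):
    def dist(action):
        return sum((a - b) ** 2 for a, b in zip(features, model[action]))
    return min(model, key=dist)

training = {
    "file-copy":  [(120, 118, 2), (80, 79, 2), (200, 196, 3)],
    "web-browse": [(400, 20, 35), (350, 15, 40), (500, 30, 50)],
}
model = train(training)
guess = predict(model, (150, 140, 2))  # read/write heavy, few opens
```

The real pipeline's emphasis on preprocessing and attribute selection is exactly what turns raw, noisy Progger records into feature vectors a model like this can separate.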
Meera, G.; Geethakumari, G., "A Provenance Auditing Framework for Cloud Computing Systems," in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, pp. 1-5, 19-21 Feb. 2015. doi: 10.1109/SPICES.2015.7091427
Abstract: Cloud computing is a service-oriented paradigm that aims at sharing resources among a massive number of tenants and users. This sharing facility, coupled with the sheer number of users, makes cloud environments susceptible to major security risks. Hence, security and auditing of cloud systems are of great relevance. Provenance is a meta-data history of objects which aids verifiability, accountability and lineage tracking. Incorporating provenance into cloud systems can help in fault detection. This paper proposes a framework which performs secure provenance audits of clouds across applications and multiple guest operating systems. For integrity preservation and verification, we use established cryptographic techniques. We take the cloud service provider's perspective, as improving cloud security can result in better trust relations with customers.
Keywords: auditing; cloud computing; cryptography; data integrity; fault diagnosis; meta data; resource allocation; service-oriented architecture; trusted computing; accountability; cloud computing systems; cloud environments; cloud security; cloud service providers; cryptographic techniques; fault detection; integrity preservation; integrity verification; lineage tracking; metadata history; operating systems; provenance auditing framework; resource sharing; security risks; service oriented paradigm; sharing facility; trust relations; verifiability; Cloud computing; Cryptography; Digital forensics; Monitoring; Virtual machining; Auditing; Cloud computing; Provenance (ID#: 15-8532)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091427&isnumber=7091354
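One established cryptographic technique for the integrity preservation the abstract mentions is a hash chain over provenance records; the sketch below is illustrative only, and the paper's actual scheme may differ. Editing any earlier record invalidates every later link.

```python
# Sketch: tamper-evident provenance via a SHA-256 hash chain. Each entry
# stores its record, the previous entry's digest, and its own digest over
# (previous digest + canonical record encoding).
import hashlib, json

def chain(records):
    """Link provenance records into a hash chain."""
    prev, out = "0" * 64, []
    for rec in records:
        digest = hashlib.sha256(
            (prev + json.dumps(rec, sort_keys=True)).encode()).hexdigest()
        out.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return out

def verify(entries):
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for e in entries:
        expected = hashlib.sha256(
            (prev + json.dumps(e["record"], sort_keys=True)).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = expected
    return True

log = chain([{"op": "create", "obj": "vm-1"}, {"op": "read", "obj": "vol-7"}])
print(verify(log))               # True
log[0]["record"]["op"] = "delete"
print(verify(log))               # False: tampering detected
```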
Donghoon Kim; Vouk, M.A., "Securing Scientific Workflows," in Software Quality, Reliability and Security - Companion (QRS-C), 2015 IEEE International Conference on, pp. 95-104, 3-5 Aug. 2015. doi: 10.1109/QRS-C.2015.25
Abstract: This paper investigates the security of the Kepler scientific workflow engine. We are especially interested in Kepler-based scientific workflows that may operate in cloud environments. We find that (1) three security properties (i.e., input validation, remote access validation, and data integrity) are essential for making Kepler-based workflows more secure, and (2) use of the Kepler provenance module may help secure Kepler-based workflows. We implemented a prototype security-enhanced Kepler engine to demonstrate the viability of using the Kepler provenance module to provision and manage the desired security properties.
Keywords: authorisation; cloud computing; data integrity; scientific information systems; workflow management software; Kepler provenance module; Kepler scientific workflow engine security; cloud environment; data integrity; input validation; remote access validation; Cloud computing; Conferences; Databases; Engines; Security; Software quality; Uniform resource locators; Cloud; Kepler; Provenance; Scientific workflow; Vulnerability (ID#: 15-8533)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322130&isnumber=7322103
Kalaivani, K.; Suguna, C., "Efficient Botnet Detection Based on Reputation Model and Content Auditing in P2P Networks," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-4, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282358
Abstract: A botnet is a number of computers connected through the Internet that can send malicious content, such as spam and viruses, to other computers without the knowledge of their owners. In a peer-to-peer (P2P) architecture, it is very difficult to identify botnets because there is no centralized control. In this paper, we use a security principle called data provenance integrity, which can verify the origin of data; for this, the certificates of the peers are exchanged. A reputation-based trust model is used to identify the authenticated peer during file transmission: the reputation value of each peer is calculated, and a hash table is used for efficient file searching. The proposed system can also verify the trustworthiness of transmitted data through content auditing, in which the data are checked against a trained data set to identify malicious content.
Keywords: authorisation; computer network security; data integrity; information retrieval; invasive software; peer-to-peer computing; trusted computing;P2P networks; authenticated peer; botnet detection; content auditing; data provenance integrity; file searching; file transmission; hash table; malicious content; peer-to-peer architecture; reputation based trust model; reputation model; reputation value; security principle; spam; transmitted data trustworthiness; virus; Computational modeling; Cryptography; Measurement; Peer-to-peer computing; Privacy; Superluminescent diodes; Data provenance integrity; content auditing; reputation value; trained data set (ID#: 15-8534)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282358&isnumber=7282219
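The reputation-value and hash-table ideas in the abstract can be sketched as follows. The smoothing rule, class names, and digest-keyed index are assumptions for illustration, not the paper's formulas.

```python
# Sketch: per-peer reputation as the smoothed fraction of successful
# transactions, plus a hash table mapping file digests to offering peers.
import hashlib

class Peer:
    def __init__(self, pid):
        self.pid, self.good, self.total = pid, 0, 0
    def feedback(self, success):
        """Record one transaction outcome for this peer."""
        self.total += 1
        self.good += int(success)
    @property
    def reputation(self):
        # Laplace smoothing: unseen peers start at 0.5, not 0 or 1.
        return (self.good + 1) / (self.total + 2)

def pick_source(peers):
    """Choose the most reputable peer to download from."""
    return max(peers, key=lambda p: p.reputation)

index = {}  # hash table: file digest -> set of peer ids offering the file
def publish(peer, data):
    index.setdefault(hashlib.sha1(data).hexdigest(), set()).add(peer.pid)

a, b = Peer("a"), Peer("b")
for ok in (True, True, True):
    a.feedback(ok)          # a: 3 successes -> reputation 0.8
b.feedback(False)           # b: 1 failure  -> reputation ~0.33
print(pick_source([a, b]).pid)  # a
```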
Ashwin Kumar, T.K.; Hong Liu; Thomas, J.P.; Mylavarapu, G., "Identifying Sensitive Data Items within Hadoop," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1308-1313, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.293
Abstract: Recent growth in big data is raising security and privacy concerns. Organizations that collect data from various sources are at risk of legal or business liabilities due to security breaches and exposure of sensitive information. Only file-level access control is feasible in the current Hadoop implementation, and sensitive information can be identified only manually or from information provided by the data owner. Identifying sensitive information manually is further complicated by the different types of data involved. When sensitive information is accessed by an unauthorized user or misused by an authorized person, privacy can be compromised. This paper is the first part of our intended access control framework for Hadoop, and it automates the process of identifying sensitive data items. To identify such data items, the proposed framework harnesses data context, usage patterns and data provenance. In addition, the proposed framework can keep track of data lineage.
Keywords: Big Data; authorisation; data handling; data privacy; parallel processing; Big-Data; Hadoop; access control framework; authorized person; business liabilities; data collection; data context; data lineage; data privacy; data provenance; data security; file-level access control; information misuse; legal liabilities; security breach; sensitive data item identification; sensitive information access; sensitive information exposure; sensitive information identification; unauthorized user; usage patterns; Access control; Context; Electromyography; Generators; Metadata; Neural networks; Sensitivity; Hadoop; data context; data lineage; data provenance; file-level access control; privacy; sensitive information; usage patterns (ID#: 15-8535)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336348&isnumber=7336120
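A minimal sketch of automated sensitive-item tagging in the spirit of the abstract: content patterns combined with data context (here, column names). The patterns, column names, and tag strings are illustrative stand-ins, not the paper's framework.

```python
# Sketch: flag a (column, value) pair as sensitive when either a content
# pattern fires (e.g., SSN or email shape) or the column name itself is a
# known sensitive context.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
SENSITIVE_CONTEXT = {"salary", "dob", "ssn"}  # hypothetical column names

def tag_record(column, value):
    """Return the set of sensitivity tags firing for one (column, value)."""
    tags = {name for name, pat in PATTERNS.items() if pat.search(value)}
    if column.lower() in SENSITIVE_CONTEXT:
        tags.add("context:" + column.lower())
    return tags

print(tag_record("ssn", "123-45-6789"))
print(tag_record("notes", "reach me at alice@example.com"))
```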
Mayhew, Michael; Atighetchi, Michael; Adler, Aaron; Greenstadt, Rachel, "Use of Machine Learning in Big Data Analytics for Insider Threat Detection," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 915-922, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357562
Abstract: In current enterprise environments, information is becoming more readily accessible across a wide range of interconnected systems. However, the trustworthiness of documents and actors is not explicitly measured, leaving actors unaware of how the latest security events may have impacted the trustworthiness of the information being used and the actors involved. This leads to situations where information producers give documents to consumers they should not trust, and consumers use information from non-reputable documents or producers. The concepts and technologies developed as part of the Behavior-Based Access Control (BBAC) effort strive to overcome these limitations by accurately calculating the trustworthiness of actors (e.g., behavior and usage patterns) as well as documents (e.g., provenance and workflow data dependencies). BBAC analyses a wide range of observables for mal-behavior, including network connections, HTTP requests, English text exchanged through emails or chat messages, and edit sequences to documents. The current prototype service strategically combines big-data batch processing to train classifiers with real-time stream processing to classify observed behaviors at multiple layers. To scale up to enterprise regimes, BBAC combines clustering analysis with statistical classification in a way that maintains an adjustable number of classifiers.
Keywords: Access control; Big data; Computer security; Electronic mail; Feature extraction; Monitoring; HTTP; TCP; big data; chat; documents; email; insider threat; machine learning; support vector machine; trust; usage patterns (ID#: 15-8536)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357562&isnumber=7357245
Thuraisingham, B.; Cadenhead, T.; Kantarcioglu, M.; Khadilkar, V., "Design and Implementation of a Semantic Web-Based Inference Controller: A Summary," in Information Reuse and Integration (IRI), 2015 IEEE International Conference on, pp. 451-456, 13-15 Aug. 2015. doi: 10.1109/IRI.2015.75
Abstract: This paper provides a summary of the design and implementation of a prototype inference controller that operates over a provenance graph and protects important provenance information from unauthorized users. We use as our data model the Resource Description Framework (RDF), which supports the interoperability of multiple databases having disparate data schemas. In addition, we express policies and rules in terms of Semantic Web rules.
Keywords: inference mechanisms; semantic Web; RDF; data model; disparate data schemas; provenance graph; resource description framework; semantic Web-based inference controller; Cognition; Knowledge based systems; Process control; Query processing; Resource description framework; Security (ID#: 15-8537)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301011&isnumber=7300933
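The core idea of an inference controller over a provenance graph can be shown with a toy example: provenance stored as RDF-style triples, with a policy that withholds protected triples from unauthorized users. The predicate names and the single-predicate policy are made up for illustration; the paper expresses policies as Semantic Web rules over RDF.

```python
# Sketch: filter a provenance graph of (subject, predicate, object) triples
# so that unauthorized users never see protected provenance relationships.
TRIPLES = [
    ("doc1", "wasGeneratedBy", "process9"),
    ("process9", "wasControlledBy", "agentAlice"),
    ("doc1", "hasTitle", "Quarterly Report"),
]
PROTECTED_PREDICATES = {"wasControlledBy"}  # hide who ran the process

def query(triples, user_is_authorized):
    """Return visible triples, redacting protected ones for outsiders."""
    if user_is_authorized:
        return list(triples)
    return [t for t in triples if t[1] not in PROTECTED_PREDICATES]

print(len(query(TRIPLES, user_is_authorized=False)))  # 2
print(len(query(TRIPLES, user_is_authorized=True)))   # 3
```

A real controller must also block indirect inference (chains of visible triples that reveal a protected fact), which is what makes the problem harder than simple filtering.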
Kun Yang; Forte, D.; Tehranipoor, M., "An RFID-based technology for electronic component and system Counterfeit detection and Traceability," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225279
Abstract: The vulnerabilities in today's supply chain have raised serious concerns about the security and trustworthiness of electronic components and systems. Testing for device provenance, detection of counterfeit integrated circuits/systems, and traceability are challenging issues to address. In this paper, we develop CST, a novel RFID-based system for electronic component and system counterfeit detection and traceability. CST is composed of different types of on-chip sensors and in-system structures that provide the information needed to detect multiple counterfeit IC types (recycled, cloned, etc.), verify the authenticity of the system with some degree of confidence, and track/identify boards. Central to CST is an RFID tag employed as storage and as a channel to read information from different types of chips on the printed circuit board (PCB) in both power-off and power-on scenarios. Simulations and experimental results using Spartan-3E FPGAs demonstrate the effectiveness of this system. The efficiency of the radio frequency (RF) communication has also been verified via a PCB prototype with a printed slot antenna.
Keywords: counterfeit goods; field programmable gate arrays; microstrip antennas; printed circuits; production engineering computing; radiofrequency identification; supply chains; CST; PCB prototype; RF; RFID tag; RFID-based system; RFID-based technology; Spartan 3E FPGA; counterfeit integrated circuits; device provenance; electronic component; In-system structures; multiple counterfeit IC types; on-chip sensors; printed circuit board; printed slot antenna; radio frequency communication; supply chain; system counterfeit detection; Electronic components; Field programmable gate arrays; Radiation detectors; Radio frequency; Radiofrequency identification; Sensor systems (ID#: 15-8538)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225279&isnumber=7190491
Jilcott, S., "Scalable Malware Forensics Using Phylogenetic Analysis," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225311
Abstract: Malware forensics analysts confront one of our biggest homeland security challenges - a continuing flood of new malware variants released by adaptable adversaries seeking new targets in cyberspace, exploiting new technologies, and bypassing existing security mechanisms. Reverse engineering new samples, understanding their capabilities, and ascertaining provenance are time-intensive and require considerable human expertise. We present DECODE, a prototype malware forensics analysis system developed under DARPA's Cyber Genome program. DECODE increases the actionable forensics derivable from large repositories of collected malware by quickly identifying a new malware sample as a variant of other malware samples, without relying on pre-existing anti-virus signatures. DECODE also accelerates reverse engineering efforts by quickly identifying parts of the malware that have already been seen in other samples and characterizing the new and different capabilities. DECODE can also reconstruct the evolution of malware variants over time. DECODE applies phylogenetic analysis to provide these advantages. Phylogenetic analysis is the study of similarities and differences in program structure to find relationships within groups of software programs, providing insights about new malware variants not available from signature-based malware detection.
Keywords: digital forensics; invasive software; reverse engineering; statistical analysis; DECODE; malware forensics; phylogenetic analysis; program structure; reverse engineering; Acceleration; Irrigation; Phylogeny; Pipelines; formatting; insert; style; styling (ID#: 15-8539)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225311&isnumber=7190491
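The similarity computation underlying phylogenetic malware analysis can be sketched with byte n-gram sets compared via the Jaccard index, so that a variant scores close to its parent sample. DECODE's actual features and clustering are more sophisticated; the n-gram size and sample bytes below are arbitrary.

```python
# Sketch: relate samples by the Jaccard similarity of their byte 4-gram
# sets; a small edit leaves most n-grams shared, an unrelated program
# shares almost none.
def ngrams(data, n=4):
    """Set of all length-n byte substrings of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a, b):
    """Jaccard index of the two samples' n-gram sets (1.0 = identical)."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

base    = b"push ebp; mov ebp, esp; call decrypt; jmp payload"
variant = b"push ebp; mov ebp, esp; call decrypt2; jmp payload"
other   = b"completely different program bytes here entirely!!"
print(jaccard(base, variant) > jaccard(base, other))  # True
```

Pairwise similarities like these feed a clustering or tree-building step that recovers the variant lineage.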