Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts (when available), and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research on topics that have been presented or published within the past year. Some represent updates of work presented in previous years; others are new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#:16-11190)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modifications to specific citations via email to news@scienceofsecurity.net. Please include the ID# of the specific citation in your correspondence.
6LoWPAN 2015
6LoWPAN, IPv6 over Low power Wireless Personal Area Networks, is an architecture intended to allow low power devices to participate in the Internet of Things (IoT). The IEEE specification allows for operation in either a secure or a non-secure mode. For the Science of Security community, the creation of a secure process in low power and ad hoc environments relates to the hard problems of resilience and composability. In the IoT context, it also relates to cyber physical system security. The research cited here was presented in 2015.
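As context for the entries below, a back-of-the-envelope sketch (our illustration, not drawn from any cited paper) shows why 6LoWPAN's header compression is essential: an IEEE 802.15.4 frame carries at most 127 bytes, and once worst-case link-layer framing (25 bytes) and AES-CCM-128 link security (21 bytes) are paid for, uncompressed IPv6 (40 bytes) and UDP (8 bytes) headers leave very little room for application data.

```python
# Illustrative payload-budget arithmetic for a single IEEE 802.15.4 frame.
# Constants follow the rationale given in RFC 4944; the function name is ours.

FRAME_MAX = 127       # IEEE 802.15.4 maximum frame size (bytes)
MAC_OVERHEAD = 25     # worst-case link-layer framing overhead
LINK_SECURITY = 21    # AES-CCM-128 auxiliary security header + MIC
IPV6_HEADER = 40      # uncompressed IPv6 header
UDP_HEADER = 8        # uncompressed UDP header

def app_payload(compressed_ip_udp=None):
    """Bytes left for application data after all headers are accounted for."""
    if compressed_ip_udp is None:
        compressed_ip_udp = IPV6_HEADER + UDP_HEADER  # no compression
    return FRAME_MAX - MAC_OVERHEAD - LINK_SECURITY - compressed_ip_udp

print(app_payload())    # uncompressed IPv6/UDP: 33 bytes of payload
print(app_payload(7))   # with aggressive 6LoWPAN compression (~7 bytes): 74
```

The jump from 33 to 74 usable bytes per frame is the practical motivation behind the compression and security trade-offs studied throughout the papers below.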
G. Peretti, V. Lakkundi, and M. Zorzi, “BlinkToSCoAP: An End-to-End Security Framework for the Internet of Things,” Communication Systems and Networks (COMSNETS), 2015 7th International Conference on, Bangalore, 2015, pp. 1-6.
doi: 10.1109/COMSNETS.2015.7098708
Abstract: The emergence of Internet of Things and the availability of inexpensive sensor devices and platforms capable of wireless communications enable a wide range of applications such as intelligent home and building automation, mobile healthcare, smart logistics, distributed monitoring, smart grids, energy management, and asset tracking, to name a few. These devices are expected to employ Constrained Application Protocol for the integration of such applications with the Internet, which includes User Datagram Protocol binding with Datagram Transport Layer Security protocol to provide end-to-end security. This paper presents a framework called BlinkToSCoAP, obtained through the integration of three software libraries implementing lightweight versions of DTLS, CoAP and 6LoWPAN protocols over TinyOS. Furthermore, a detailed experimental campaign is presented that evaluates the performance of DTLS security blocks. The experiments analyze BlinkToSCoAP messages exchanged between two Zolertia Z1 devices, allowing evaluations in terms of memory footprint, energy consumption, latency and packet overhead. The results obtained indicate that securing CoAP with DTLS in the Internet of Things is certainly feasible without incurring much overhead.
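The session-setup cost that this paper measures can be made concrete by counting DTLS 1.2 handshake flights. The sketch below (our illustration, not BlinkToSCoAP code) enumerates the flights for a pre-shared-key cipher suite, the kind of lightweight configuration typical on constrained nodes; optional messages such as a PSK identity hint are omitted.

```python
# DTLS 1.2 handshake flights for a PSK cipher suite, including the
# HelloVerifyRequest cookie exchange (per RFC 6347). Certificate-based
# handshakes add several more messages to flights 4 and 5.

FLIGHTS = [
    ["ClientHello"],                                        # flight 1
    ["HelloVerifyRequest"],                                 # flight 2 (cookie)
    ["ClientHello"],                                        # flight 3 (cookie echoed)
    ["ServerHello", "ServerHelloDone"],                     # flight 4 (PSK: no cert)
    ["ClientKeyExchange", "ChangeCipherSpec", "Finished"],  # flight 5
    ["ChangeCipherSpec", "Finished"],                       # flight 6
]

total = sum(len(flight) for flight in FLIGHTS)
print(f"{len(FLIGHTS)} flights, {total} handshake messages before any CoAP data")
```

Even in this minimal configuration, ten messages cross the constrained link before the first secured CoAP request, which is why the paper's memory, energy, and latency measurements focus on the DTLS blocks.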
Keywords: Internet; Internet of Things; computer network reliability; computer network security; protocols; 6LoWPAN protocol; BlinkToSCoAP; CoAP protocol; DTLS protocol; TinyOS; Zolertia Z1 device; asset tracking; availability; building automation; constrained application protocol; datagram transport layer security protocol; distributed monitoring; end-to-end security framework; energy consumption; energy management; intelligent home; latency overhead; memory footprint; message exchange; mobile healthcare; packet overhead; sensor device; smart grid; smart logistics; user datagram protocol; wireless communication; Computer languages; Logic gates; Payloads; Performance evaluation; Random access memory; Security; Servers; 6LoWPAN; CoAP; DTLS; M2M; end-to-end security (ID#: 16-9555)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098708&isnumber=7098633
I. Halcu, G. Stamatescu, and V. Sgârciu, “Enabling Security on 6LoWPAN / IPv6 Wireless Sensor Networks,” Electronics, Computers and Artificial Intelligence (ECAI), 2015 7th International Conference on, Bucharest, 2015, pp. SSS-29–SSS-32.
doi: 10.1109/ECAI.2015.7301201
Abstract: The increasing interest in the development of open-source IPv6 platforms for Wireless Sensor Networks (WSN) and the Internet of Things (IoT) offers significant potential for ubiquitous monitoring and control. The usage of IPv6 in WSNs enables the integration of sensing applications with the Internet. To meet these goals, we consider that security should be properly addressed as an integral part of the higher layers of the protocol stack. This paper describes and evaluates the usage of the new compressed 6LoWPAN security headers, with a focus on the link layer. Leveraging the Contiki operating system for resource-constrained devices, along with link-layer security sublayers and IPv6, helpful insight is achieved for evaluation and deployment.
Keywords: IP networks; operating systems (computers); personal area networks; public domain software; telecommunication security; ubiquitous computing; wireless sensor networks; 6LoWPAN security headers; Contiki operating system; IPv6 wireless sensor networks; WSN; link-layer; link-layer security sublayers; open-source development; resource constrained devices; ubiquitous control; ubiquitous monitoring; Encryption; IEEE 802.15 Standard; Memory management; Payloads; Protocols; Wireless sensor networks; 6LoWPAN; 802.15.4; LLSEC; Security; Wireless Sensor Networks (ID#: 16-9556)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301201&isnumber=7301133
P. Pongle and G. Chavan, “A Survey: Attacks on RPL and 6LoWPAN in IoT,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7087034
Abstract: The 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks) standard allows heavily constrained devices to connect to IPv6 networks. 6LoWPAN is a novel IPv6 header compression protocol and may easily come under attack. The Internet of Things consists of devices with limited resources such as battery power, memory, and processing capability; for these, a new network-layer routing protocol called RPL (Routing Protocol for Low-Power and Lossy Networks) has been designed. RPL is a lightweight protocol and does not have the functionality of traditional routing protocols. This rank-based routing protocol may also come under attack. Providing security in the Internet of Things is challenging because the devices are connected to the unsecured Internet, have limited resources, communicate over lossy links, and use a set of novel technologies such as RPL and 6LoWPAN. This paper focuses on possible attacks on RPL and 6LoWPAN networks, countermeasures against them, and their consequences on network parameters. A comparative analysis of methods to mitigate these attacks is also given, and finally the research opportunities in network-layer security are discussed.
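One simple countermeasure discussed in the RPL-attack literature is rank validation. In RPL, a node's rank must exceed its parent's by at least MinHopRankIncrease (default 256 per RFC 6550), so a DIO advertising an implausibly low rank is a hint of a rank or sinkhole attack. The sketch below is our illustration, not code from the survey; the function and parameter names are hypothetical.

```python
# Flag a DIO whose advertised rank violates RPL's monotonic rank increase:
# a node cannot legitimately claim a rank lower than its parent's rank
# plus MinHopRankIncrease.

MIN_HOP_RANK_INCREASE = 256  # RFC 6550 default

def suspicious_dio(advertised_rank, parent_rank):
    """True if the advertisement is inconsistent with the known parent."""
    return advertised_rank < parent_rank + MIN_HOP_RANK_INCREASE

# A node claiming rank 300 while its known parent sits at rank 512
# cannot be telling the truth:
print(suspicious_dio(300, 512))   # True  -> possible rank attack
print(suspicious_dio(768, 512))   # False -> consistent advertisement
```

Real mitigations layered on this check (version-number authentication, parent-set auditing) are among the techniques the survey compares.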
Keywords: IP networks; Internet; Internet of Things; computer network security; personal area networks; routing protocols; 6LoWPAN; IPv6 over Low-Power Wireless Personal Area Network standard; IoT; RPL; network layer routing protocol; network layer security; novel IPv6 header compression protocol; rank based routing protocol; routing protocol for low power lossy network; Authentication; Delays; Maintenance engineering; Network topology; Topology; Attacks; Security (ID#: 16-9557)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087034&isnumber=7086957
Li Xue and Sun Zhixin, “An Improved 6LoWPAN Hierarchical Routing Protocol,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 318-322. doi: (not provided)
Abstract: The IETF 6LoWPAN working group is engaged in IPv6 protocol stack research based on the IEEE 802.15.4 standard. In this working group, the routing protocol is one of the important research topics. In 6LoWPAN, HiLow is a well-known hierarchical routing protocol. This paper puts forward an improved hierarchical routing protocol, GHiLow, which improves HiLow's parent node selection and path restoration strategy. GHiLow improves parent node selection by increasing the choice of parameters. Simultaneously, it also improves path recovery by analyzing different path-recovery situations. Therefore, GHiLow contributes to enhanced network performance and decreased network energy consumption.
Keywords: personal area networks; routing protocols; 6LoWPAN hierarchical routing protocol; IEEE802.15.4 standard; IETF 6LoWPAN working group; IPv6 protocol; node selection; parent node selection; path restoration strategy; Artificial neural networks; Protocols; Routing; 6LoWPAN; HiLow; path recovery (ID#: 16-9558)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332588&isnumber=7332527
C. Matthias, S. Kris, B. An, S. Ruben, M. Nele, and A. Kris, “Study on Impact of Adding Security in a 6LoWPAN Based Network,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 577-584. doi: 10.1109/CNS.2015.7346871
Abstract: 6LoWPAN, a technology for allowing the deployment of IPv6 on Low Power and Lossy Networks enables interoperability and user-friendliness when establishing applications related to the highly popular trend of Internet of Things. In this paper, we investigate the impact of including a low cost security solution into the communication scheme on latency, power and memory requirements. The measurements demonstrate that this impact is acceptable for most applications. They also show that the impact drastically decreases when the number of transmitted messages decreases or the number of hops increases.
Keywords: IP networks; computer network security; 6LoWPAN; IPv6; Internet of Things; low cost security solution; Cryptography; IEEE 802.15 Standard; Internet; Protocols; Servers; Wireless Sensor and Actuator Network; energy consumption; latency; security (ID#: 16-9559)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346871&isnumber=7346791
Y. Qiu and M. Ma, “An Authentication and Key Establishment Scheme to Enhance Security for M2M in 6LoWPANs,” Communication Workshop (ICCW), 2015 IEEE International Conference on, London, 2015, pp. 2671-2676.
doi: 10.1109/ICCW.2015.7247582
Abstract: With the rapid development of wireless communication technologies, machine-to-machine (M2M) communications, which is an essential part of the Internet of Things (IoT), allows wireless and wired systems to monitor environments without human intervention. To extend the use of M2M applications, the standard of Internet Protocol version 6 (IPv6) over Low power Wireless Personal Area Networks (6LoWPAN), developed by the Internet Engineering Task Force (IETF), would be applied to M2M communications to enable IP-based M2M sensing devices to connect to the open Internet. Although the 6LoWPAN standard has specified important issues in the communication, security functionalities at different protocol layers have not been detailed. In this paper, we propose an enhanced authentication and key establishment scheme for 6LoWPAN networks in M2M communications. The security proof by the Protocol Composition Logic (PCL) and the formal verification by the Simple Promela Interpreter (SPIN) show that the proposed scheme in 6LoWPAN could enhance the security functionality with the ability to prevent malicious attacks such as replay attacks, man-in-the-middle attacks, impersonation attacks, Sybil attacks, etc.
Keywords: Internet; Internet of Things; cryptographic protocols; personal area networks; transport protocols; 6LoWPAN; IETF; IPv6; Internet engineering task force; Internet protocol version 6; IoT; M2M communication; PCL; SPIN; authentication scheme; key establishment scheme; low power wireless personal area network; machine-to-machine communication; protocol composition logic; protocol layer; security enhancement; simple Promela interpreter; wireless communication technology; Authentication; Cryptography; Internet of things; Protocols; Servers; M2M (ID#: 16-9560)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247582&isnumber=7247062
S. Vohra and R. Srivastava, “A Survey on Techniques for Securing 6LoWPAN,” Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, Gwalior, 2015, pp. 643-647. doi: 10.1109/CSNT.2015.163
Abstract: The integration of low-power wireless personal area networks (LoWPANs) with the Internet allows a vast number of smart objects to harvest data and information through the Internet. Such devices will also be open to many security threats from the Internet as well as from the local network itself. Protecting against both requires, along with cryptography techniques, mechanisms that provide anonymity and privacy to the communicating parties in addition to confidentiality and integrity. This paper surveys techniques used for securing 6LoWPAN against different attacks and aims to give researchers and application developers a baseline reference from which to carry out further research in this field.
Keywords: Internet; cryptography; personal area networks; telecommunication security; 6LoWPAN; baseline reference; cryptography techniques; local network; low power wireless personal area networks; security threats; smart objects; Computer crime; IEEE 802.15 Standard; Protocols; Routing; Sensors; IDS; IEEE 802.15.4; IPsec; IPv6; Internet of Thing; MT6D (ID#: 16-9561)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279997&isnumber=7279856
C. Cervantes, D. Poplade, M. Nogueira, and A. Santos, “Detection of Sinkhole Attacks for Supporting Secure Routing on 6LoWPAN for Internet of Things,” Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, Ottawa, ON, 2015, pp. 606-611. doi: 10.1109/INM.2015.7140344
Abstract: Internet of Things (IoT) networks are vulnerable to various kinds of attacks, the sinkhole attack being one of the most destructive since it prevents communication among network devices. In general, existing solutions are not effective in providing protection against sinkhole attacks on IoT, and they also introduce high consumption of memory, storage, and processing resources. Further, they do not consider the impact of device mobility, which is essential in urban scenarios such as smart cities. This paper proposes an intrusion detection system, called INTI (Intrusion detection of SiNkhole attacks on 6LoWPAN for InterneT of ThIngs), to identify sinkhole attacks on the routing services in IoT. Moreover, INTI aims to mitigate adverse effects found in IDSs that disturb their performance, such as false positives and negatives, as well as high resource cost. The system combines watchdog, reputation, and trust strategies to detect attackers by analyzing the behavior of devices. Results show INTI's performance and its effectiveness in terms of attack detection rate and the number of false positives and false negatives.
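The watchdog-plus-reputation idea that INTI builds on can be sketched in a few lines. This is our illustration, not the INTI implementation: a monitor counts packets a neighbor received for forwarding versus packets it was overheard actually forwarding, and a neighbor whose forwarding ratio stays below a threshold is flagged as a possible sinkhole. The class name and threshold are assumptions.

```python
# Minimal watchdog/reputation sketch: reputation is the observed
# forwarding ratio; nodes below the threshold are flagged.

class Watchdog:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.stats = {}  # node -> [received_for_forwarding, overheard_forwarded]

    def observe(self, node, forwarded):
        rec = self.stats.setdefault(node, [0, 0])
        rec[0] += 1            # packet handed to the neighbor
        if forwarded:
            rec[1] += 1        # neighbor was overheard relaying it

    def reputation(self, node):
        received, forwarded = self.stats.get(node, (0, 0))
        return forwarded / received if received else 1.0  # unknown: trusted

    def is_sinkhole(self, node):
        return self.reputation(node) < self.threshold

wd = Watchdog()
for _ in range(10):
    wd.observe("node-A", forwarded=True)        # well-behaved neighbor
for i in range(10):
    wd.observe("node-B", forwarded=(i < 2))     # drops most traffic
print(wd.is_sinkhole("node-A"), wd.is_sinkhole("node-B"))   # False True
```

INTI additionally weights such observations with trust propagation between monitors, which is what lets it tolerate mobility and reduce false positives.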
Keywords: Internet; Internet of Things; security of data; 6LoWPAN; INTI; IoT networks; intrusion detection system; secure routing; sinkhole attacks; Base stations; Internet of things; Mathematical model; Monitoring; Routing; Security; Wireless sensor networks (ID#: 16-9562)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140344&isnumber=7140257
T. Gonnot, W. J. Yi, E. Monsef, and J. Saniie, “Robust Framework for 6LoWPAN-Based Body Sensor Network Interfacing with Smartphone,” Electro/Information Technology (EIT), 2015 IEEE International Conference on, Dekalb, IL, 2015, pp. 320-323. doi: 10.1109/EIT.2015.7293361
Abstract: This paper presents the design of a robust framework for body sensor networks. In this framework, sensor nodes communicate using 6LoWPAN, running on the Contiki operating system, which is designed for energy efficiency and configuration flexibility. Furthermore, an embedded router is implemented using a Raspberry Pi to bridge the information to a Bluetooth-capable smartphone. Consequently, the smartphone can process, analyze, compress and send the data to the cloud using its data connection. One of the major applications of this framework is home patient monitoring, with 24/7 data collection capability. The collected data can be sent to a doctor at any time, or only when an anomaly is detected.
Keywords: Bluetooth; body sensor networks; computer network security; data analysis; data compression; home networks; operating systems (computers); patient monitoring; smart phones; telecommunication network routing; 6LoWPAN-based body sensor network; Bluetooth capable smartphone; Contiki operating system; Raspberry Pi; anomaly detection; configuration flexibility; data collection capability; data connection; data process; data sending; embedded router; energy efficiency; home patient monitoring; robust framework; sensor nodes; IEEE 802.15 Standard; Reliability; Routing protocols; Servers; Wireless communication (ID#: 16-9563)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293361&isnumber=7293314
Bingqing Luo, Suning Tang, and Zhixin Sun, “Research of Neighbor Discovery for IPv6 over Low-Power Wireless Personal Area Networks,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 233-238. doi: (not provided)
Abstract: The IPv6 neighbor discovery protocol is unable to meet the networking and address-configuration requirements of the nodes in a wireless sensor network (WSN). To address this problem, a 6LoWPAN network architecture is presented in this paper, and based on this architecture, a method for configuring the addresses of 6LoWPAN nodes and a basic interaction process for neighbor discovery are proposed. A context management and distribution strategy is also proposed for the expanded 6LoWPAN domain, providing an approach to the standard protocol RFC 6775. Simulation results show that the proposed 6LoWPAN neighbor discovery protocol is highly feasible and effective.
Keywords: IP networks; personal area networks; protocols; telecommunication power management; wireless sensor networks; Ipv6 neighbor discovery protocol; WSN; configuration requirements; low power wireless personal area networks; neighbor discovery; wireless sensor network; Context; Logic gates; Routing protocols; Standards; Synchronization; Wireless sensor networks; 6LoWPAN; address configuration; context; header compression (ID#: 16-9564)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332574&isnumber=7332527
D. Singh, G. Tripathi, and A. Jara, “Secure Layers Based Architecture for Internet of Things,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 321-326. doi: 10.1109/WF-IoT.2015.7389074
Abstract: The Internet of Things (IoT) is an Internet-based infrastructure of smart machines/objects/things in which each machine has the capability of self-configuration and interacts/communicates with physical objects based on standard and interoperable communication protocols. The basic attribute of the physical objects is having identities; they also have virtual personalities, using intelligent interfaces, and are seamlessly integrated into the current evolving information networks. Indeed, with this heavy, open interaction among the objects come issues of object reliability and security for IoT services. Hence, this paper presents a novel conceptual cross-layer-based architecture that ensures proper usage of an Adaptive Interface Translation Table (AITT) with new security features for secure IoT services with the help of five layers. Each layer has a specific responsibility to process its assigned task and forward data to the next layer for further processing and inference. Finally, we present a conceptual solution and visual aspect for the security of IoT applications and services.
Keywords: Internet; Internet of Things; open systems; security of data; AITT; Internet based infrastructure; Internet of Thing; IoT application; IoT service; adaptive interface translation table; conceptual cross layer based architecture; conceptual solution; information network; intelligent interface; interoperable communication protocol; physical object; secure layers based architecture; self-configuration; smart machines; virtual personality; visual aspect; Cross layer design; Protocols; Security; Sensors; Standards; Wireless sensor networks; 6LoWPAN; Future Internet Services; IoT architecture; IoT security; WSN (ID#: 16-9565)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389074&isnumber=7389012
F. Labeau, A. Agarwal, and B. Agba, “Comparative Study of Wireless Sensor Network Standards for Application in Electrical Substations,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-5. doi: 10.1109/CCCS.2015.7374135
Abstract: Power utilities around the world are modernizing their grid by adding layers of communication capabilities to allow for more advanced control, monitoring and preventive maintenance. Wireless Sensor Networks (WSNs), due to their ease of deployment, low cost and flexibility, are considered as a solution to provide diagnostics information about the health of the connected devices and equipment in the electrical grid. However, in specific environments such as high voltage substations, the equipment in the grid produces a strong and specific radio noise, which is impulsive in nature. The robustness of off-the-shelf equipment to this type of noise is not guaranteed; it is therefore important to analyze the characteristics of devices, algorithms and protocols to understand whether they are suited to such harsh environments. In this paper, we review several WSN standards: 6LoWPAN, Zigbee, WirelessHART, ISA100.11a and OCARI. Physical layer specifications (IEEE 802.15.4) are similar for all standards, with considerable architectural differences present in the higher layers. The purpose of this paper is to determine the appropriate WSN standard that could support reliable communication in the impulsive noise environment, in electrical substations. Our review concludes that the WirelessHART sensor network is one of the most suitable to be implemented in a harsh impulsive noise environment.
Keywords: Zigbee; impulse noise; radiofrequency interference; substations; wireless sensor networks; 6LoWPAN; IEEE 802.15.4; ISA100.11a; OCARI; WSN; WirelessHART; electrical substation; high voltage substation; impulsive noise; off-the-shelf equipment; power utility; preventive maintenance; radio noise; wireless sensor network standard; IEEE 802.15 Standard; Network topology; Protocols; Substations; Wireless sensor networks; Wireless Sensor Networks; impulsive noise environment; reliable communication (ID#: 16-9566)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374135&isnumber=7374113
Zhibo Pang, Yuxin Cheng, M. E. Johansson, and G. Bag, “Work-in-Progress: Industry-Friendly and Native-IP Wireless Communications for Building Automation,” Industrial Networks and Intelligent Systems (INISCom), 2015 1st International Conference on, Tokyo, 2015, pp. 163-167. doi: 10.4108/icst.iniscom.2015.258563
Abstract: Wireless communication technologies for building automation (BA) systems are evolving towards native IP connectivity. More Industry-Friendly and Native-IP Wireless Building Automation (IF-NIP WiBA) is needed to address the concerns of the entire value chain of the BA industry, including security, reliability, latency, power consumption, engineering process, and independence. In this paper, a hybrid architecture that can seamlessly support both a Cloud-Based Mode and a Stand-Alone Mode is introduced based on 6LoWPAN WSAN (wireless sensor and actuator network) technology and verified by prototyping a minimal system. The preliminary experimental results suggest that (1) both the WSAN and Cloud communications can meet the requirements of non-real-time BA applications, (2) the reliability and latency of the WSAN communications are not sufficient for soft real-time applications, but such requirements could be met with sufficient optimization in the near future, and (3) the reliability of the Cloud is quite sufficient, but its latency is far from the requirements of soft real-time applications. Optimizing latency and power consumption in the WSAN, designing an industry-friendly engineering process, and investigating security mechanisms should be the main focus of future work.
Keywords: IP networks; building management systems; optimisation; telecommunication network reliability; wireless sensor networks; work in progress; 6LoWPAN WSAN; BA systems; IF-NIP WiBA; building automation; cloud-based mode; industry-friendly wireless communications; native IP connectivity; native-IP wireless communications; reliability; stand-alone mode; wireless sensor and actuator networks; work-in-progress; Actuators; Communication system security; Logic gates; Optimization; Reliability; Wireless communication; Wireless sensor networks; 6LoWPAN; Native IP Connectivity (NIP); Wireless Building Automation (WiBA); Wireless Sensor and Actuator Networks (WSAN) (ID#: 16-9567)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157839&isnumber=7157808
S. Chakrabarty, D. W. Engels, and S. Thathapudi, “Black SDN for the Internet of Things,” Mobile Ad Hoc and Sensor Systems (MASS), 2015 IEEE 12th International Conference on, Dallas, TX, 2015, pp. 190-198. doi: 10.1109/MASS.2015.100
Abstract: In this paper, we present Black SDN, a Software Defined Networking (SDN) architecture for secure Internet of Things (IoT) networking and communications. SDN architectures were developed to provide improved routing and networking performance for broadband networks by separating the control plane from the data plane. This basic SDN concept is amenable to IoT networks; however, the common SDN implementations designed for wired networks are not directly applicable to the distributed, ad hoc, low-power, mesh networks commonly found in IoT systems. SDN promises to improve the overall lifespan and performance of IoT networks. However, the SDN architecture changes the IoT network's communication patterns, allowing new types of attacks, and necessitating a new approach to securing the IoT network. Black SDN is a novel SDN-based secure networking architecture that secures both the meta-data and the payload within each layer of an IoT communication packet while utilizing the SDN centralized controller as a trusted third party for secure routing and optimized system performance management. We demonstrate through simulation the feasibility of Black SDN in networks where nodes are asleep most of their lives, and specifically examine a Black SDN IoT network based upon the IEEE 802.15.4 LR WPAN (Low Rate - Wireless Personal Area Network) protocol.
Keywords: Internet of Things; Zigbee; broadband networks; computer network security; software defined networking; telecommunication control; telecommunication network routing; IEEE 802.15.4 LR WPAN; IoT systems; SDN centralized controller; black SDN; internet of things; low rate wireless personal area network protocol; mesh networks; optimized system performance management; secure routing; software defined networking architecture; Cryptography; IEEE 802.15 Standard; Protocols; Routing; 6LoWPAN; Black Networks; IEEE 802.15.4; IoT; SDN; Software Defined Networks; Wireless HART; Zig Bee (ID#: 16-9568)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366932&isnumber=7366897
F. Van den Abeele, T. Vandewinckele, J. Hoebeke, I. Moerman, and P. Demeester, “Secure Communication in IP-Based Wireless Sensor Networks via a Trusted Gateway,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/ISSNIP.2015.7106963
Abstract: As the IP-integration of wireless sensor networks enables end-to-end interactions, solutions to appropriately secure these interactions with hosts on the Internet are necessary. At the same time, burdening wireless sensors with heavy security protocols should be avoided. While Datagram TLS (DTLS) strikes a good balance between these requirements, it entails a high cost for setting up communication sessions. Furthermore, not all types of communication have the same security requirements: e.g. some interactions might only require authorization and do not need confidentiality. In this paper we propose and evaluate an approach that relies on a trusted gateway to mitigate the high cost of the DTLS handshake in the WSN and to provide the flexibility necessary to support a variety of security requirements. The evaluation shows that our approach leads to considerable energy savings and latency reduction when compared to a standard DTLS use case, while requiring no changes to the end hosts themselves.
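The flexibility argument in this paper — that not every interaction needs the same protection — can be expressed as a per-resource policy applied at the trusted gateway. The sketch below is our illustration, not the authors' gateway: the gateway terminates DTLS toward the Internet and then decides, per resource, whether the inner WSN hop needs encryption or only an authorization check. The resource names and policy levels are hypothetical.

```python
# Hypothetical per-resource security policy at a trusted 6LoWPAN gateway.
# Public sensor readings need only authorization; actuator control gets
# full protection. Unknown resources default to the strictest level.

POLICY = {
    "/temperature":    "authorize",           # public data: authorization only
    "/actuators/lock": "encrypt+authorize",   # sensitive control: full protection
}

def inbound_request(resource, authorized):
    level = POLICY.get(resource, "encrypt+authorize")
    if not authorized:
        return "4.01 Unauthorized"            # CoAP-style error to the client
    if "encrypt" in level:
        return f"forward {resource} over secured WSN hop"
    return f"forward {resource} in the clear inside the WSN"

print(inbound_request("/temperature", authorized=True))
print(inbound_request("/actuators/lock", authorized=False))
```

Offloading the DTLS handshake and this policy decision to the gateway is what yields the energy and latency savings the evaluation reports, at the cost of trusting the gateway itself.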
Keywords: IP networks; Internet; authorisation; computer network security; energy conservation; internetworking; protocols; telecommunication power management; trusted computing; wireless sensor networks; DTLS handshake; WSN authorization; communication security; datagram TLS; end-to-end interactions; energy savings; heavy security protocol; latency reduction; trusted gateway; wireless sensor network IP integration; Bismuth; Cryptography; Logic gates; Random access memory; Read only memory; Servers; Wireless sensor networks; 6LoWPAN; CoAP; DTLS; Gateway; IP; IoT (ID#: 16-9569)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106963&isnumber=7106892
S. Ziegler, A. Skarmeta, P. Kirstein, and L. Ladid, “Evaluation and Recommendations on IPv6 for the Internet of Things,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 548-552. doi: 10.1109/WF-IoT.2015.7389113
Abstract: This article presents some key achievements and recommendations from the IoT6 European research project on IPv6 exploitation for the Internet of Things (IoT). It highlights the potential of IPv6 to support the integration of a global IoT deployment including legacy systems by overcoming horizontal fragmentation as well as more direct vertical integration between communicating devices and the cloud.
Keywords: Internet of Things; cloud computing; service-oriented architecture; software maintenance; IPv6 exploitation; IoT6 European research project; legacy systems; Europe; Interoperability; Object recognition; Protocols; Routing; Security; Standards; 6LoWPAN; CoAP; IPv6; Machine-to-Machine; addressing; integration; interoperability; scalability (ID#: 16-9570)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389113&isnumber=7389012
F. Medjek, D. Tandjaoui, M. R. Abdmeziem, and N. Djedjig, “Analytical Evaluation of the Impacts of Sybil Attacks Against RPL Under Mobility,” Programming and Systems (ISPS), 2015 12th International Symposium on, Algiers, 2015, pp. 1-9. doi: 10.1109/ISPS.2015.7244960
Abstract: The Routing Protocol for Low-Power and Lossy Networks (RPL) is the standardized routing protocol for constrained environments such as 6LoWPAN networks and is considered the routing protocol of the Internet of Things (IoT). However, this protocol is subject to several attacks that have been analyzed in the static case. Nevertheless, IoT will likely present dynamic and mobile applications. In this paper, we introduce potential security threats on RPL, in particular the Sybil attack when the Sybil nodes are mobile. In addition, we present an analytical analysis and a discussion of how network performance can be affected. Our analysis shows that, under a Sybil attack with mobile nodes, the performance of RPL is highly affected compared to the static case. In fact, we notice a decrease in the packet delivery rate and an increase in control message overhead. As a result, energy consumption at constrained nodes increases. Our proposed attack demonstrates that a mobile Sybil node can easily disrupt RPL and overload the network with fake messages, making it unavailable.
Keywords: computer network performance evaluation; computer network security; mobile computing; routing protocols; 6LoWPAN networks; Internet of Things; IoT; RPL; Sybil attacks; constrained environments; dynamic application; energy consumption; lossy network; low-power network; mobile application; network performance; routing protocol; security threats; Maintenance engineering; Mobile nodes; Routing; Routing protocols; Security; Topology (ID#: 16-9571)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7244960&isnumber=7244951
N. Djedjig, D. Tandjaoui, and F. Medjek, “Trust-Based RPL for the Internet of Things,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 962-967. doi: 10.1109/ISCC.2015.7405638
Abstract: The Routing Protocol for Low-Power and Lossy Networks (RPL) is the standardized routing protocol for constrained environments such as 6LoWPAN networks, and is considered the routing protocol of the Internet of Things (IoT). However, this protocol is subject to several internal and external attacks, and among the issues RPL faces, trust management is a real challenge at deployment time. In this paper, we highlight and discuss the different issues of trust management in RPL. We consider that using only a TPM (Trusted Platform Module) to ensure trustworthiness between nodes is not sufficient: an internal infected or selfish node could still participate in constructing the RPL topology. To overcome this issue, we propose to strengthen RPL by adding a new trustworthiness metric during RPL construction and maintenance. This metric represents the level of trust for each node in the network and is calculated from selfishness, energy, and honesty components. It allows a node to decide whether or not to trust the other nodes during the construction of the topology.
Keywords: Internet of Things; routing protocols; telecommunication network topology; TPM; energy component; honesty component; routing protocol for low-power and lossy network; selfishness component; standardized routing protocol; trust platform module; trust-based RPL topology; Measurement; Routing; Routing protocols; Security; Topology; Wireless sensor networks (ID#: 16-9572)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405638&isnumber=7405441
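The trust metric described in the abstract above combines selfishness, energy, and honesty components into a per-node trust level. A minimal sketch of such a composite metric follows; the weights, value ranges, and acceptance threshold are illustrative assumptions, not the authors' actual formulation.

```python
# Sketch of a composite trust metric for an RPL node, combining
# selfishness, energy, and honesty components as a weighted average.
# Weights and the threshold are assumptions for illustration only;
# the paper's actual formula may differ.

def trust_metric(selfishness, energy, honesty, weights=(0.3, 0.3, 0.4)):
    """Each component lies in [0, 1]; higher means more trustworthy.
    Returns a trust level in [0, 1]."""
    components = (selfishness, energy, honesty)
    if any(not 0.0 <= c <= 1.0 for c in components):
        raise ValueError("components must lie in [0, 1]")
    return sum(w * c for w, c in zip(weights, components))

# A node would admit a neighbor into the topology only above a threshold.
TRUST_THRESHOLD = 0.5

def accept_neighbor(selfishness, energy, honesty):
    return trust_metric(selfishness, energy, honesty) >= TRUST_THRESHOLD
```

A node evaluating a well-behaved neighbor, e.g. `accept_neighbor(0.9, 0.8, 0.9)`, would admit it, while a neighbor scoring low on all components would be excluded from topology construction.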
S. Raza, P. Misra, Z. He, and T. Voigt, “Bluetooth Smart: An Enabling Technology for the Internet of Things,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 155-162. doi: 10.1109/WiMOB.2015.7347955
Abstract: The past couple of years have seen a heightened interest in the Internet of Things (IoT), transcending industry, academia and government. As with new ideas that hold immense potential, the optimism of IoT has also exaggerated the underlying technologies well before they can mature into a sustainable ecosystem. While 6LoWPAN has emerged as a disruptive technology that brings IP capability to networks of resource constrained devices, a suitable radio technology for this device class is still debatable. In the recent past, Bluetooth Low Energy (LE) — a subset of the Bluetooth v4.0 stack — has surfaced as an appealing alternative that provides a low-power and loosely coupled mechanism for sensor data collection with ubiquitous units (e.g., smartphones and tablets). When Bluetooth 4.0 was first released, it was not targeted for IP-connected devices but for communication between two neighboring peers. However, the latest release of Bluetooth 4.2 offers features that make Bluetooth LE a competitive candidate among the available low-power communication technologies in the IoT space. In this paper, we discuss the novel features of Bluetooth LE and its applicability in 6LoWPAN networks. We also highlight important research questions and pointers for potential improvement for its greater impact.
Keywords: Bluetooth; Internet of Things; smart phones; 6LoWPAN networks; Bluetooth low energy; Bluetooth smart; Bluetooth v4.0 stack; IP-connected devices; IoT; low-power communication; resource constrained devices; sensor data collection; smartphones; tablets; ubiquitous units; Internet; Privacy; Protocols; Security; Smart phones; Standards; Bluetooth 4.2; Bluetooth Smart; Low Energy; Research Challenges (ID#: 16-9573)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7347955&isnumber=7347915
Anonymity in Wireless Networks 2015 |
Minimizing privacy risk is one of the major problems in the development of social media and hand-held smartphone technologies, vehicle ad hoc networks, and wireless sensor networks. For the Science of Security community, the research issues addressed relate to the hard problems of resiliency, composability, metrics, and human behavior. These research articles were presented in 2015.
A. Barroso and M. Hollick, “Performance Evaluation of Delay-Tolerant Wireless Friend-to-Friend Networks for Undetectable Communication,” Local Computer Networks (LCN), 2015 IEEE 40th Conference on, Clearwater Beach, FL, 2015, pp. 474-477. doi: 10.1109/LCN.2015.7366356
Abstract: Anonymous communication systems have recently increased in popularity in wired networks, but there are no adequate equivalent systems for wireless networks under strong surveillance. In this work we evaluate the performance of delay-tolerant friend-to-friend networking, which can allow anonymous communication over a wireless medium under strong surveillance by relying on trust relationships between the network’s users. Since strong anonymity properties incur performance penalties, a good understanding of performance under various conditions is crucial for the successful deployment of such a system. We simulate a delay-tolerant friend-to-friend network in several scenarios using real-world mobility data, analyze the trade-offs of network-related parameters, and offer a preliminary throughput estimation.
Keywords: ad hoc networks; delay tolerant networks; delay-tolerant wireless friend-to-friend networks; trust relationships; undetectable communication; Jamming; Peer-to-peer computing; Security; anonymous communication; delay-tolerant; friend-to-friend networks; undetectability; wireless (ID#: 16-11153)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366356&isnumber=7366232
O. Javidbakht and P. Venkitasubramaniam, “Relay Selection in Wireless Networks for Optimal Delay Anonymity Tradeoff,” Signal Processing Advances in Wireless Communications (SPAWC), 2015 IEEE 16th International Workshop on, Stockholm, 2015, pp. 360-364. doi: 10.1109/SPAWC.2015.7227060
Abstract: Wireless networks are susceptible to eavesdropping by unauthorized intruders who aim to extract information about the networked exchanges. Even when packets are encrypted, unsophisticated energy detectors can be used to identify source-destination pairs from packet transmission timing on the wireless medium. Anonymous network protocols aim to prevent this information retrieval through the use of special intermediate relays that add artificial delays so as to confuse the eavesdropper. Previous studies have demonstrated that a tradeoff exists between the anonymity—secrecy of source-destination pairs from timing analysis—provided by such relays and the latency incurred. The focus of this work is the tradeoff between anonymity and delay when a network of such relays is employed, as in practical anonymous systems such as Tor. Specifically, we investigate the problem of selecting the best route in anonymous networks to optimally trade off delay for anonymity. Using Shannon entropy as the metric of anonymity, we derive sufficient conditions on network parameters to achieve maximum anonymity. The optimal route selection algorithm for obtaining a desired tradeoff is shown to be computationally impractical, so we propose a suboptimal route selection algorithm that effectively balances delay and anonymity, with a negligible gap to the optimal solution while requiring far fewer computational resources. An incremental optimization that allows real-time addition of new users to the anonymous system is also investigated, and its performance is compared with the centralized schemes.
Keywords: information retrieval; optimisation; relay networks (telecommunication); telecommunication network routing; telecommunication security; Shannon Entropy; energy detector; information retrieval; optimal delay anonymity tradeoff; optimization; packet transmission timing; relay selection; suboptimal route selection algorithm; wireless network; Bandwidth; Delays; Optimization; Relays; Security; Wireless communication (ID#: 16-11154)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7227060&isnumber=7226983
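The abstract above uses Shannon entropy as the anonymity metric: anonymity is maximal when the eavesdropper's posterior over candidate source-destination pairs is uniform. A minimal sketch of that computation follows; the distributions are invented for illustration, not data from the paper.

```python
import math

def shannon_entropy(probabilities):
    """Entropy (in bits) of the adversary's posterior distribution
    over candidate source-destination pairs; higher entropy means
    greater anonymity."""
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Maximum anonymity: the adversary sees all 4 pairs as equally likely.
uniform = [0.25, 0.25, 0.25, 0.25]
# Timing analysis has strongly implicated one pair: lower entropy.
skewed = [0.7, 0.1, 0.1, 0.1]

print(shannon_entropy(uniform))  # 2.0 bits, the maximum for 4 pairs
print(shannon_entropy(skewed))   # strictly less than 2.0 bits
```

Route selection that maximizes this entropy, subject to a delay budget, is the tradeoff the paper formalizes.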
K. K. Gagneja, “Secure Communication Scheme for Wireless Sensor Networks to Maintain Anonymity,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 1142-1147. doi: 10.1109/ICCNC.2015.7069511
Abstract: In wireless sensor networks it is increasingly important, for security reasons, for sensor nodes to maintain anonymity while communicating data. Anonymous communication among sensor nodes matters because nodes want to conceal their identities, whether acting as a base station or as a source node. Anonymous communication in wireless sensor networks includes numerous important aspects, for instance base station anonymity, communication association anonymity, and source node anonymity. From the literature, we can observe that existing anonymity schemes for wireless sensor networks either cannot realize complete anonymity or suffer from various overheads such as enormous memory usage, complex computation, and long communications. This paper presents an efficient secure anonymity communication protocol (SACP) for wireless sensor networks that realizes complete anonymity with minimal storage, computation, and communication overheads. The proposed protocol is compared with various existing anonymity protocols, and the performance analysis shows that it accomplishes all three anonymities: sender node anonymity, base station anonymity, and communication association anonymity, while using little memory and incurring low communication and computation costs.
Keywords: cryptographic protocols; telecommunication security; wireless sensor networks; SACP maintenance; base station; secure anonymity communication protocol; secure communication scheme; symmetric cryptography; wireless sensor network; Base stations; Conferences; Data communication; Peer-to-peer computing; Protocols; Synthetic aperture sonar; Wireless sensor networks; anonimity; identity; security; sensor nodes; wireless (ID#: 16-11155)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069511&isnumber=7069279
J. R. Ward and M. Younis, “Base Station Anonymity Distributed Self-Assessment in Wireless Sensor Networks,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 103-108. doi: 10.1109/ISI.2015.7165947
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, the sensors act as data sources and forward information generated by event triggers to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN with the least amount of effort. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to identify the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Previous work has proposed anonymity-boosting techniques to improve the BS’s anonymity posture, but all require some amount of overhead such as increased energy consumption, increased latency, or decreased throughput. If the BS understood its own anonymity posture, then it could evaluate whether the benefits of employing an anti-traffic analysis technique are worth the associated overhead. In this paper we propose two distributed approaches to allow a BS to assess its own anonymity and correspondingly employ anonymity-boosting techniques only when needed. Our approaches allow a WSN to increase its anonymity on demand, based on real-time measurements, and therefore conserve resources. The simulation results confirm the effectiveness of our approaches.
Keywords: security of data; wireless sensor networks; WSN; anonymity-boosting techniques; anti-traffic analysis technique; base station; base station anonymity distributed self-assessment; conventional security mechanisms; improved BS anonymity; Current measurement; Energy consumption; Entropy; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-11156)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165947&isnumber=7165923
S. Alsemairi and M. Younis, “Adaptive Packet-Combining to Counter Traffic Analysis in Wireless Sensor Networks,” Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, Dubrovnik, 2015, pp. 337-342. doi: 10.1109/IWCMC.2015.7289106
Abstract: Wireless Sensor Networks (WSNs) have become an attractive choice for many applications deployed in hostile setups. The operation model of a WSN makes it possible for an adversary to determine the location of the base station (BS) in the network by intercepting transmissions and employing traffic analysis techniques such as Evidence Theory. By locating the BS, the adversary can then target it with denial-of-service attacks. This paper promotes a novel strategy for countering such an attack by adaptively combining packet payloads. The idea is to trade packet delivery latency for BS location anonymity. Basically, a node on a data route delays the forwarding of a packet until one or more additional packets arrive, and the payloads are then combined into a single packet. Such an approach decreases the number of evidences an adversary can collect and makes the traffic analysis inconclusive in implicating the BS position. Given the data delivery delay that will be imposed, the proposed technique is to be applied adaptively, when the BS anonymity needs a boost. The simulation results confirm the effectiveness of the proposed technique.
Keywords: packet radio networks; telecommunication security; telecommunication traffic; wireless sensor networks; BS location anonymity; WSN; adaptive packet-combining; counter traffic analysis; data delivery delay; denial-of-service attacks; evidence theory; packet delivery latency; Cryptography; Delays; Payloads; Routing; Topology; Wireless sensor networks; Anonymity; Location Privacy; Security; Traffic Analysis; Wireless Sensor Network (ID#: 16-11157)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289106&isnumber=7288920
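The packet-combining strategy the abstract describes — a relay holding a packet until additional payloads arrive, then forwarding them as one transmission — can be sketched as a small buffering queue. The class name, threshold, and interface are illustrative assumptions, not the paper's implementation.

```python
class CombiningRelay:
    """Buffers incoming payloads and emits a single combined packet
    once `combine_count` payloads have accumulated, trading delivery
    latency for fewer observable transmissions (fewer evidences for
    a traffic-analysis adversary)."""

    def __init__(self, combine_count=2):
        self.combine_count = combine_count
        self.buffer = []

    def receive(self, payload):
        """Buffer a payload; return a combined packet when enough have
        accumulated, otherwise None (the payload waits)."""
        self.buffer.append(payload)
        if len(self.buffer) >= self.combine_count:
            combined, self.buffer = self.buffer, []
            return combined  # one transmission carrying all payloads
        return None

relay = CombiningRelay(combine_count=2)
assert relay.receive(b"reading-1") is None  # held back, adding latency
assert relay.receive(b"reading-2") == [b"reading-1", b"reading-2"]
```

Raising `combine_count` would further reduce the number of forwarded transmissions near the BS, at the cost of longer delivery delay, which is exactly the tradeoff the paper proposes to tune adaptively.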
J. R. Ward and M. Younis, “A Cross-Layer Defense Scheme for Countering Traffic Analysis Attacks in Wireless Sensor Networks,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 972-977. doi: 10.1109/MILCOM.2015.7357571
Abstract: In most Wireless Sensor Network (WSN) applications the sensors forward their readings to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary’s attack. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to locate the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Published anonymity-boosting techniques mainly focus on a single layer of the communication protocol stack and assume that changes in the protocol operation will not be detectable. In fact, existing single-layer techniques may not be able to protect the network if the adversary could guess what anonymity measure is being applied by identifying which layer is being exploited. In this paper we propose combining physical-layer and network-layer techniques to boost the network resilience to anonymity attacks. Our cross-layer approach avoids the shortcomings of the individual single-layer schemes and allows a WSN to effectively mask its behavior and simultaneously misdirect the adversary’s attention away from the BS’s location. We confirm the effectiveness of our cross-layer anti-traffic analysis measure using simulation.
Keywords: cryptographic protocols; telecommunication security; telecommunication traffic; wireless sensor networks; WSN; anonymity-boosting techniques; authentication; base station; central sink; communication protocol; cross-layer defense scheme; encryption; network-layer techniques; physical-layer techniques; single-layer techniques; traffic analysis attacks; traffic analysis techniques; Array signal processing; Computer security; Measurement; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-11158)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357571&isnumber=7357245
J. R. Ward and M. Younis, “A Cross-Layer Distributed Beamforming Approach to Increase Base Station Anonymity in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417430
Abstract: In most applications of wireless sensor networks (WSNs), nodes act as data sources and forward measurements to a central base station (BS) that may also perform network management tasks. The critical role of the BS makes it a target for an adversary’s attack. Even if a WSN employs conventional security primitives such as encryption and authentication, an adversary can apply traffic analysis techniques to find the BS. Therefore, the BS should be kept anonymous to protect its identity, role, and location. Previous work has demonstrated distributed beamforming to be an effective technique to boost BS anonymity in WSNs; however, implementing distributed beamforming requires significant coordination messaging that increases transmission activity and alerts the adversary to the possibility of deceptive activities. In this paper we present a novel, cross-layer design that integrates the control traffic of distributed beamforming with the MAC protocol in order to boost BS anonymity while keeping node transmissions at a normal rate. The advantages of our approach include minimizing the overhead of anonymity measures and lowering the transmission power throughout the network, which leads to increased spectrum efficiency and reduced energy consumption. The simulation results confirm the effectiveness of our cross-layer design.
Keywords: access protocols; array signal processing; wireless sensor networks; MAC protocol; WSN; base station anonymity; central base station; cross-layer distributed beamforming approach; Array signal processing; Media Access Protocol; Schedules; Security; Synchronization; Wireless sensor networks (ID#: 16-11159)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417430&isnumber=7416057
S. Baek, S. H. Seo, and S. Kim, “Preserving Biosensor Users’ Anonymity over Wireless Cellular Network,” Ubiquitous and Future Networks (ICUFN), 2015 Seventh International Conference on, Sapporo, 2015, pp. 470-475. doi: 10.1109/ICUFN.2015.7182588
Abstract: A wireless body sensor network plays a significant part in mobile E-healthcare monitoring services. Major concerns for a patient’s sensitive information are secure data transmission and preserving anonymity. So far, most researchers have focused only on security or privacy issues within the wireless body area network (WBAN) without considering all the communication vulnerabilities. However, since bio data sensed by biosensors travel over both the WBAN and the cellular network, a privacy-enhanced scheme that covers all the communication links is required. In this paper, we first point out the weaknesses of previous work in [9]. Then, we propose a novel privacy-enhanced E-healthcare monitoring scheme for wireless cellular networks. Our proposed scheme provides anonymous communication between a patient and a doctor over a wireless cellular network while satisfying the security requirements.
Keywords: biosensors; body area networks; body sensor networks; cellular radio; data privacy; health care; patient monitoring; telecommunication security; telemedicine; WBAN; biosensor users anonymity preservation; mobile e-healthcare monitoring service; privacy issues; privacy-enhanced e-healthcare monitoring scheme; secure data transmission; security issues; wireless body area network; wireless body sensor network; wireless cellular network; Bioinformatics; Cloning; Cloud computing; Medical services; Mobile communication; Smart phones; Wireless communication; Anonymity; E-healthcare; Privacy; Unlinkability; Wireless body area network; Wireless cellular network (ID#: 16-11160)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182588&isnumber=7182475
S. Alsemairi and M. Younis, “Clustering-Based Mitigation of Anonymity Attacks in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417501
Abstract: The use of wireless sensor networks (WSNs) can be advantageous in applications that operate in hostile environments, such as security surveillance and the military battlefield. The operation of a WSN typically involves collecting sensor measurements at an in-situ Base Station (BS) that further processes the data and either takes action or reports findings to a remote command center. The BS thus plays a vital role and is usually guarded by concealing its identity and location. However, the BS can be susceptible to traffic analysis attacks. Given the limited communication range of the individual sensors and the objective of conserving their energy supply, the sensor readings are forwarded to the BS over multi-hop paths. Such a routing topology allows an adversary to correlate intercepted transmissions, even without being able to decode them, and to apply attack models such as Evidence Theory (ET) in order to determine the position of the BS. This paper proposes a technique to counter such an attack by reshaping the routing topology. Basically, the nodes in a WSN are grouped into unevenly sized clusters, and each cluster has a designated aggregation node (cluster head). Inter-cluster-head routes are then formed so that the BS experiences low traffic volume and does not become distinguishable among the WSN nodes. The simulation results confirm the effectiveness of the proposed technique in boosting the anonymity of the BS.
Keywords: military communication; telecommunication network routing; telecommunication traffic; wireless sensor networks; WSN nodes; anonymity attacks; clustering-based mitigation; evidence theory; in-situ base-station; military battlefield; security surveillance; Measurement; Optimized production technology; Receivers; Routing; Security; Topology; Wireless sensor networks (ID#: 16-11161)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417501&isnumber=7416057
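Several of the abstracts above model the adversary with Evidence Theory, fusing observations of intercepted transmissions into a belief about the BS location. A minimal sketch of Dempster's combination rule over two candidate locations shows how evidence accumulates; the mass assignments are invented for illustration and do not come from any of the cited papers.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment
    using Dempster's rule. Masses are dicts mapping frozenset
    hypotheses to mass values summing to 1."""
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; sources are incompatible")
    norm = 1.0 - conflict
    return {h: v / norm for h, v in combined.items()}

# Hypothetical candidate BS locations A and B, with masses derived
# from two independently intercepted traffic flows.
A, B = frozenset("A"), frozenset("B")
evidence1 = {A: 0.6, B: 0.4}
evidence2 = {A: 0.7, B: 0.3}
fused = dempster_combine(evidence1, evidence2)
# Belief in node A as the BS grows as corroborating evidence accumulates.
```

Topology reshaping, packet combining, and fake sinks all aim to keep such fused beliefs close to uniform so that no single node is implicated as the BS.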
X. Wang, L. Dong, C. Xu, and P. Li, “Location Privacy Protecting Based on Anonymous Technology in Wireless Sensor Networks,” Parallel Architectures, Algorithms and Programming (PAAP), 2015 Seventh International Symposium on, Nanjing, 2015, pp. 229-235. doi: 10.1109/PAAP.2015.50
Abstract: A wireless sensor network is a type of information-sharing network in which an attacker can monitor the network traffic or trace the transmission of packets to infer the position of a target node; the target node mainly refers to the source node and the aggregation node. First, we discuss privacy protection methods based on location anonymity to address these location privacy problems. Then, we place at least n anonymous nodes near the target node and use the routing protocol to select one of these fake nodes to stand in for the real node during data communication. Finally, to improve the security of nodes and increase the difficulty of attacker tracking, we use the routing tree generated via the Collection Tree Protocol (CTP) to build the anonymous group, and we verify the approach by simulation. Experiments show that the proposed anonymity treatment significantly increases the difficulty faced by attackers.
Keywords: data privacy; routing protocols; telecommunication network topology; telecommunication security; telecommunication traffic; trees (mathematics); wireless sensor networks; CTP; aggregation node; anonymous technology; collection tree protocol; information sharing network; location privacy protection method; network traffic; packet transmission; routing protocol; routing tree selection; source node; target node; Base stations; Data privacy; Monitoring; Privacy; Routing; Security; Wireless sensor networks; Collection Tree Protocol; Location Privacy; Wireless Sensor Network (ID#: 16-11162)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387330&isnumber=7387279
A. F. Callanan and P. Thulasiraman, “Achieving Sink Node Anonymity Under Energy Constraints in Tactical Wireless Sensor Networks,” Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2015 IEEE International Multi-Disciplinary Conference on, Orlando, FL, 2015, pp. 186-192. doi: 10.1109/COGSIMA.2015.7108196
Abstract: A wireless sensor network (WSN) is a distributed network that facilitates wireless information gathering within a region of interest. The information collected by sensors is aggregated at a central node known as the sink node. Two challenges in the deployment of WSNs are the limited battery power of each sensor node and sink node anonymity. The role played by the sink node raises its profile as a high-value target for attack, so its anonymity is crucial to the security of a WSN. To improve network security, we must implement a protocol that conceals the sink node’s location while being cognizant of energy resource constraints. In this paper we develop a routing algorithm based on node clustering to improve sink node anonymity while simultaneously limiting node energy depletion. Via MATLAB simulations, we analyze the effectiveness of this algorithm in obfuscating the sink node’s location in the WSN while preserving node energy. We show that the anonymity of the sink node is independent of traffic volume and that the average energy consumed by a node remains consistent across topological variations.
Keywords: routing protocols; telecommunication power management; telecommunication security; wireless sensor networks; central node; distributed network; energy constraints; network security; node clustering; node energy depletion; region of interest; routing algorithm; sink node anonymity; tactical wireless sensor networks; wireless information gathering; Clustering algorithms; Nominations and elections; Routing; Security; Sensors; Topology; Wireless sensor networks (ID#: 16-11163)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7108196&isnumber=7107964
A. s. Abuzneid, T. Sobh, and M. Faezipour, “Temporal Privacy Scheme for End-to-End Location Privacy in Wireless Sensor Networks,” Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, Visakhapatnam, 2015, pp. 1-6. doi: 10.1109/EESCO.2015.7253969
Abstract: A wireless sensor network (WSN) is built of hosts, called sensors, that can sense phenomena such as motion, temperature, and humidity, and represent what they sense in data format. Providing an efficient end-to-end privacy solution is a challenging task due to the open nature of the WSN. The key properties needed for end-to-end location privacy are anonymity, observability, capture likelihood, and safety period; on top of these, temporal privacy is crucial to attain. We extend this work to provide a solution against global adversaries. We present a network model that is protected against passive/active and local/multi-local/global attacks. This work provides a solution for temporal privacy to attain end-to-end anonymity and location privacy.
Keywords: data privacy; telecommunication security; telecommunication traffic; wireless sensor networks; WSN; active attack; anonymity scheme; capture likelihood scheme; data format; end-to-end location privacy; global attack; local attack; multilocal attack; observability scheme; passive attack; safety period scheme; temporal privacy scheme; traffic rate privacy; Correlation; Delays; Monitoring; Privacy; Protocols; Routing; Wireless sensor networks; sink privacy; source location privacy; temporal privacy; traffic rate privacy (ID#: 16-11164)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253969&isnumber=7253613
J. R. Ward and M. Younis, “Distributed Beamforming Relay Selection to Increase Base Station Anonymity in Wireless Ad Hoc Networks,” Computer Communication and Networks (ICCCN), 2015 24th International Conference on, Las Vegas, NV, 2015, pp. 1-8. doi: 10.1109/ICCCN.2015.7288399
Abstract: Wireless ad hoc networks have become valuable assets to both the commercial and military communities, with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, nodes act as data sources and forward information to a central base station (BS) that may also perform network management tasks. The critical role of the BS makes it a target for an adversary’s attack. Even if an ad hoc network employs conventional security primitives such as encryption and authentication, an adversary can apply traffic analysis techniques to find the BS. Therefore, the BS should be kept anonymous to protect its identity, role, and location. Previous work has demonstrated distributed beamforming to be an effective technique to boost BS anonymity in wireless ad hoc networks; however, the increased anonymity and corresponding energy consumption depend on the quality and quantity of the selected helper relays. In this paper we present a novel, distributed approach for determining a set of relays per hop that boosts BS anonymity using Evidence Theory analysis while minimizing energy consumption. The identified relay set is further prioritized using local wireless channel statistics. The simulation results demonstrate the effectiveness of our approach.
Keywords: ad hoc networks; array signal processing; relay networks (telecommunication); telecommunication network management; telecommunication power management; telecommunication security; wireless channels; central base station; commercial community; distributed beamforming relay selection; energy consumption minimization; evidence theory analysis; hostile border; identity protection; industrial control; local wireless channel statistics; military community; traffic analysis technique; wireless ad hoc network security; Array signal processing; Mobile ad hoc networks; Protocols; Relays; Synchronization; Wireless communication (ID#: 16-11165)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288399&isnumber=7288342
N. Baroutis and M. Younis, “Using Fake Sinks and Deceptive Relays to Boost Base-Station Anonymity in Wireless Sensor Network,” Local Computer Networks (LCN), 2015 IEEE 40th Conference on, Clearwater Beach, FL, 2015, pp. 109-116. doi: 10.1109/LCN.2015.7366289
Abstract: In applications of wireless Sensor Networks (WSNs), the base-station (BS) acts as a sink for all data traffic. The continuous flow of packets toward the BS enables the adversary to analyze the traffic and uncover the BS position. In this paper we present a technique to counter such an attack by morphing the traffic pattern in the WSN. Our approach introduces multiple fake sinks and deceptive relays so that nodes other than the BS are implicated as the destination of all data traffic. Since the problem of optimal fake sink placement is NP-hard, we employ a heuristic to determine the most suitable fake sink count and placement for a network. Dynamic load-balancing trees are formed to identify relay nodes and adapt the topology to route packets to the fake (and real) sinks while extending the network lifetime. The simulation results confirm the effectiveness of the proposed technique.
Keywords: relay networks (telecommunication); telecommunication network routing; telecommunication network topology; telecommunication traffic; trees (mathematics); wireless sensor networks; NP-hard; WSN; base-station anonymity boost; data traffic pattern morphing; deceptive relay; dynamic load-balancing tree; optimal fake sink placement; packet routing; wireless sensor network; Frequency selective surfaces; Network topology; Relays; Routing; Topology; Traffic control; Wireless sensor networks; Traffic analysis; anonymity; location privacy; sensor networks (ID#: 16-11166)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366289&isnumber=7366232
J. Y. Koh, J. C. M. Teo, D. Leong, and W. C. Wong, “Reliable Privacy-Preserving Communications for Wireless Ad Hoc Networks,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 6271-6276. doi: 10.1109/ICC.2015.7249323
Abstract: We present a phantom-receiver-based routing scheme to enhance the anonymity of each source-destination pair (or contextual privacy) while using an adjustable amount of overhead. We also study how traditional network coding and opportunistic routing can leak contextual privacy. We then incorporate both network coding and opportunistic routing into our scheme for better network performance and show how we mitigate the resulting vulnerability. Contrary to prior works, we allow the destination to anonymously submit an acknowledgment to the source for enhanced reliability. Performance analysis and simulations are used to demonstrate the efficacy of the proposed scheme against commonly considered traffic analysis attacks.
Keywords: ad hoc networks; data privacy; network coding; telecommunication network reliability; telecommunication network routing; telecommunication traffic; contextual privacy; network performance; opportunistic routing; phantom-receiver-based routing scheme; reliable privacy-preserving communications; source-destination pair; traffic analysis attacks; wireless ad hoc networks; Ad hoc networks; Cryptography; Network coding; Phantoms; Privacy; Receivers; Routing; contextual privacy; global adversary; phantom receiver; traffic analysis (ID#: 16-11167)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249323&isnumber=7248285
M. Chaudhari and S. Dharawath, “Toward a Statistical Framework for Source Anonymity in Sensor Network Using Quantitative Measures,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-5. doi: 10.1109/ICIIECS.2015.7193169
Abstract: In some sensor network applications, the location and privacy of certain events must remain anonymous and undetected even when the network traffic is analyzed. In this paper a framework for modeling, investigating and evaluating such sensor networks is suggested and results are charted. The suggested two-fold structure first introduces the notion of “interval indistinguishability,” which gives a quantitative evaluation of anonymity in sensor networks, and second maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. The system is made energy efficient by enhancing the available techniques for choosing the cluster head. The energy efficiency of the sensor network is charted.
Keywords: statistical analysis; telecommunication security; telecommunication traffic; wireless sensor networks; binary hypothesis checking; network traffic; nuisance parameters; quantitative evaluation; quantitative measurement; sensor network; source anonymity; statistical framework; statistical problem; wireless sensor network; Conferences; Energy efficiency; Privacy; Protocols; Technological innovation; Wireless sensor networks; Binary Hypothesis; Interval Indistinguishability; Wireless Sensor Network; residual energy (ID#: 16-11168)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193169&isnumber=7192777
E. Chan-Tin, “AnonCall: Making Anonymous Cellular Phone Calls,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 626-631. doi: 10.1109/ARES.2015.13
Abstract: The threat of mass surveillance and the need for privacy have become mainstream recently. Most of the anonymity schemes have focused on Internet privacy. We propose an anonymity scheme for cellular phone calls. The cellular phones form an ad-hoc network relaying phone conversations through direct Wi-Fi connections. A proof-of-concept implementation on an Android smartphone is completed and shown to work with minimal delay in communications.
Keywords: Android (operating system); Internet; ad hoc networks; cellular radio; mobile handsets; smart phones; wireless LAN; Android smartphone; AnonCall; Internet privacy; Wi-Fi; ad-hoc network; anonymity schemes; anonymous cellular phone calls; mass surveillance; Cellular phones; Mobile communication; Mobile handsets; Relays; Wireless networks; Anonymity; Cellular; Privacy (ID#: 16-11169)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299973&isnumber=7299862
A. S. Abuzneid, T. Sobh, and M. Faezipour, “An Enhanced Communication Protocol for Anonymity and Location Privacy in WSN,” Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE, New Orleans, LA, 2015, pp. 91-96. doi: 10.1109/WCNCW.2015.7122535
Abstract: Wireless sensor networks (WSNs) consist of many sensors working as hosts. These sensors can sense a phenomenon and represent it in a form of data. There are many applications for WSNs such as object tracking and monitoring where the objects need protection. Providing an efficient location privacy solution is challenging due to the exposed nature of the WSN. The communication protocol needs to provide location privacy measured by anonymity, observability, capture-likelihood and safety period. We extend this work to allow for countermeasures against semi-global and global adversaries. We present a network model that is protected against sophisticated passive and active attacks using local, semi-global, and global adversaries.
Keywords: protocols; telecommunication security; wireless sensor networks; WSN; active attacks; anonymity; capture-likelihood; communication protocol enhancement; global adversaries; local adversaries; location privacy; object tracking; observability; passive attacks; safety period; semiglobal adversaries; Conferences; Energy efficiency; Internet of things; Nickel; Privacy; Silicon; Wireless sensor networks; contextual privacy; privacy; sink privacy; source location privacy (ID#: 16-11170)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122535&isnumber=7122513
D. Tang and J. Ren, “A Delay-Aware and Secure Data Forwarding Scheme for Urban Sensing Networks,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 3003-3007. doi: 10.1109/ICC.2015.7248784
Abstract: People-centric urban sensing is envisioned as a novel urban sensing paradigm. Communication delay and security are two important design issues in urban sensing networks. To address these two issues concurrently, we propose a novel DElay-Aware secuRe (DEAR) forwarding scheme by combining secret sharing and two-phase message forwarding. In the DEAR scheme, the collected data is first split into pieces. Each piece is relayed to the application data server through a randomly selected delivery node. The combination of the secret sharing scheme and two-phase message forwarding ensures confidentiality of the collected data and anonymity of the participating users. It also makes it infeasible for the application data server to estimate the source node identity. Moreover, DEAR provides redundancy in message forwarding to achieve a high message delivery ratio. This design makes the trade-off between security and communication delay adjustable based on the selection of the (k, n) scheme.
Keywords: data communication; electronic messaging; redundancy; security of data; wireless sensor networks; DEAR data forwarding scheme; data confidentiality; data server; delay aware and secure data forwarding scheme; secret sharing scheme; two-phase message forwarding redundancy; urban sensing network; urban sensing paradigm; Cryptography; Delays; Privacy; Sensors; Servers; Wireless communication; Wireless sensor networks (ID#: 16-11171)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248784&isnumber=7248285
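The (k, n) threshold primitive that DEAR builds on can be illustrated with a standard Shamir-style secret sharing sketch. This is not the authors' implementation; the field modulus, secret, and parameters below are illustrative choices only.

```python
# Illustrative (k, n) threshold secret sharing (Shamir-style): split a secret
# into n shares so that any k of them reconstruct it, but fewer reveal nothing.
import random

PRIME = 2**61 - 1  # field modulus; any prime larger than the secret works

def split(secret, k, n):
    """Split an integer secret into n shares; any k shares reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```

In the DEAR setting, each share would travel through a different randomly selected delivery node, so no single relay (or the data server) sees enough pieces to recover the data or link it to its source.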
I. Safaka, L. Czap, K. Argyraki, and C. Fragouli, “Towards Unconditional Tor-Like Anonymity,” Network Coding (NetCod), 2015 International Symposium on, Sydney, NSW, 2015, pp. 66-70. doi: 10.1109/NETCOD.2015.7176791
Abstract: We design and evaluate a traffic anonymization protocol for wireless networks, aiming to protect against computationally powerful adversaries. Our protocol builds on recent key-generation techniques that leverage intrinsic properties of the wireless channel together with standard coding techniques. We show how to exploit the security properties of such keys to design a Tor-like anonymity network, without making any assumptions about the computational capabilities of an adversary. Our analysis and evaluation on simulated ad-hoc wireless networks show that our protocol achieves a level of anonymity comparable to that of the Tor network.
Keywords: ad hoc networks; protocols; Tor-like anonymity; Tor-like anonymity network; ad-hoc wireless networks; intrinsic properties; key generation techniques; standard coding techniques; traffic anonymization protocol; wireless networks; Ad hoc networks; Encryption; Protocols; Relays; Uncertainty; Wireless communication (ID#: 16-11172)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176791&isnumber=7176630
S. Vohra and R. Srivastava, “A Survey on Techniques for Securing 6LoWPAN,” Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, Gwalior, 2015, pp. 643-647. doi: 10.1109/CSNT.2015.163
Abstract: The integration of low power wireless personal area networks (LoWPANs) with the Internet allows a vast number of smart objects to harvest data and information through the Internet. Such devices will also be open to many security threats from the Internet as well as the local network itself. To provide security from both, along with cryptographic techniques, a mechanism is also required that provides anonymity and privacy to the communicating parties in the network, in addition to providing confidentiality and integrity. This paper provides a survey of techniques used for securing 6LoWPAN from different attacks and aims to assist researchers and application developers by providing a baseline reference to further carry out their research in this field.
Keywords: Internet; cryptography; personal area networks; telecommunication security; 6LoWPAN; baseline reference; cryptography techniques; local network; low power wireless personal area networks; security threats; smart objects; Computer crime; IEEE 802.15 Standard; Protocols; Routing; Sensors; IDS; IEEE 802.15.4; IPsec; IPv6; Internet of Thing; MT6D (ID#: 16-11173)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279997&isnumber=7279856
Ming-Huang Guo, Horng-Twu Liaw, Meng-Yu Chiu, and Li-Ping Tsai, “Authenticating with Privacy Protection in Opportunistic Networks,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 375-380. doi: (not provided)
Abstract: In this study, we propose an authentication mechanism with privacy protection for opportunistic networks. It is applied to short-term and limited-time wireless network environments, and a super node is set to manage node registration. The proposal implements encryption and security technologies to guard against security threats and attacks. In the analysis, the proposed mechanism completes authentication with less data, and provides anonymity and user privacy in the network.
Keywords: radio networks; telecommunication security; authenticating protection; authentication mechanism; encryption; opportunistic networks; privacy protection; security attacks; security technologies; security threats; wireless network environment; Authentication; Computers; Cryptography; Electronic mail; Authentication Mechanisms; Opportunistic Network; Privacy Protection (ID#: 16-11174)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332598&isnumber=7332527
Remya S and Lakshmi K S, “SHARP: Secured Hierarchical Anonymous Routing Protocol for MANETs,” Computer Communication and Informatics (ICCCI), 2015 International Conference on, Coimbatore, 2015, pp. 313-318. doi: 10.1109/ICCCI.2015.7218121
Abstract: A mobile ad-hoc network (MANET) is one of the developing fields for research and development of wireless networks. MANETs are self-organizing, infrastructureless, independent, dynamic-topology-based, open and decentralized networks. This makes them an ideal choice for uses such as communication and data sharing. Due to the open and decentralized nature of the network, nodes can join or leave the network as they wish. There is no centralized authority to maintain the membership of nodes in the network. In MANETs, security is the major concern in applications such as communication and data sharing. There are many chances of different types of attacks due to the self-organizing property of MANETs. Malicious attackers may try to attack the data packets by tracing the route, and may try to find the source and destination through different types of attacks. MANETs are vulnerable to malicious attackers that aim to damage and analyze data and to perform traffic analysis by communication eavesdropping or attacking routing protocols. Anonymous routing protocols are used by MANETs to hide the identity of nodes as well as routes from outside observers. In MANETs, anonymity means identity and location anonymity of data sources and destinations as well as route anonymity. However, existing anonymous routing protocols have significantly high cost, which worsens the resource constraint problem in MANETs. This paper proposes the Secured Hierarchical Anonymous Routing Protocol (SHARP) based on cluster routing. SHARP offers anonymity to source, destination, and routes. Theoretically, SHARP achieves better anonymity protection compared to other anonymous routing protocols.
Keywords: mobile ad hoc networks; pattern clustering; routing protocols; telecommunication security; telecommunication traffic; MANET; SHARP; cluster routing; communication eavesdropping; data anonymity; data packet; data sharing; decentralized network; malicious attacker; mobile ad-hoc network; resource constraint problem; secured hierarchical anonymous routing protocol; self-organizing property; traffic analysis; wireless network; Ad hoc networks; Cryptography; Mobile computing; Receivers; Routing; Routing protocols; Anonymous routing; Cryptographic techniques; RSA; cluster-based routing; random forwarder (ID#: 16-11175)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218121&isnumber=7218046
A. Alkhelaiwi and D. Grigoras, “The Origin and Trustworthiness of Data in Smart City Applications,” 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC), Limassol, 2015, pp. 376-382. doi: 10.1109/UCC.2015.60
Abstract: Mobile devices and their sensors facilitate the development of a large range of environment-sensing applications and systems. Crowd sensing is used to feed smart city applications with anonymous but still relevant data. The quality and success of smart city applications depend on several aspects of user involvement, such as data trust and information about data origin. However, with the anonymity and openness of crowd sensing, smart city applications are exposed to untrustworthy and malicious data that can lead to poor decisions. In this paper, we propose a cloud architecture for smart city applications that includes, as a core service, a reputation system for evaluating the trustworthiness of crowd sensing data. This service will run locally, as close to the crowd as possible, for example, on wireless local area network (WLAN) access points (AP). Additionally, data stored in the cloud is traceable by its origin information.
Keywords: cloud computing; mobile computing; smart cities; trusted computing; WLAN AP; cloud architecture; core service; crowd sensing data trustworthiness; environment-sensing applications; environment-sensing systems; malicious data; mobile devices; reputation system; sensors; smart city applications; wireless local area network access points; Cloud computing; Computer architecture; Intelligent sensors; Mobile handsets; Smart cities; Wireless LAN; cloud; crowd sensing; data origin; trust (ID#: 16-11176)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7431435&isnumber=7431374
M. Guo, N. Pissinou, and S. S. Iyengar, “Pseudonym-Based Anonymity Zone Generation for Mobile Service with Strong Adversary Model,” Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, Las Vegas, NV, 2015, pp. 335-340. doi: 10.1109/CCNC.2015.7157998
Abstract: The popularity of location-aware mobile devices and the advances of wireless networking have seriously pushed location-based services into the IT market. However, moving users need to report their coordinates to an application service provider to utilize services of interest, which may compromise user privacy. In this paper, we propose an online personalized scheme for generating anonymity zones to protect users with mobile devices while on the move. We also introduce a strong adversary model, which can conduct inference attacks in the system. Our design combines a geometric transformation algorithm with a dynamic pseudonyms-changing mechanism and user-controlled personalized dummy generation to achieve strong trajectory privacy preservation. Our proposal does not involve any trusted third party and will not affect the existing LBS system architecture. Simulations are performed to show the effectiveness and efficiency of our approach.
Keywords: authorisation; data privacy; mobile computing; IT market; LBS system architecture; anonymity zone generation; application service provider; dynamic pseudonyms-changing mechanism; geometric transformation algorithm; inference attacks; location-aware mobile devices; location-based services; mobile devices; mobile service; online personalized scheme; pseudonym-based anonymity zone generation; strong-adversary model; strong-trajectory privacy preservation; user data protection; user privacy; user-controlled personalized dummy generation; wireless networking; Computational modeling; Privacy; Quality of service; Anonymity Zone; Design; Geometric; Location-based Services; Pseudonyms; Trajectory Privacy Protection (ID#: 16-11177)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157998&isnumber=7157933
J. Liu and Y. Hu, “A New Off-Line Electronic Cash Scheme for Bank Delegation,” Information Science and Technology (ICIST), 2015 5th International Conference on, Changsha, 2015, pp. 186-191. doi: 10.1109/ICIST.2015.7288965
Abstract: Due to the high-speed, low-cost ubiquity of Internet and wireless network access, electronic commerce has attracted extensive attention from both academia and industry in the past decade. Electronic cash (e-cash) is a popular billing mechanism for electronic transactions since it can fully protect the anonymity and identity privacy of customers in various electronic transactions. To support customers withdrawing and storing money at all levels of the bank in the real world, in this paper, we propose a proxy blind signature scheme and an e-cash scheme based on the new proxy blind signature scheme. The proxy blind signature scheme is proved secure in the Random Oracle Model under the chosen-target computational Diffie-Hellman assumptions, and the e-cash scheme can provide unforgeability of e-cash, anonymity of honest customers and efficient traceability of double spending.
Keywords: bank data processing; cryptography; digital signatures; electronic money; bank delegation; computational Diffie-Hellman assumptions; double spending traceability; e-cash scheme; electronic commerce; honest customer anonymity; off-line electronic cash scheme; proxy blind signature scheme; random oracle model; Business; Forgery; Glands (ID#: 16-11178)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288965&isnumber=7288906
E. Papapetrou, V. F. Bourgos, and A. G. Voyiatzis, “Privacy-Preserving Routing in Delay Tolerant Networks Based on Bloom Filters,” World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2015 IEEE 16th International Symposium on a, Boston, MA, 2015, pp. 1-9. doi: 10.1109/WoWMoM.2015.7158148
Abstract: Privacy preservation in opportunistic networks, such as disruption and delay tolerant networks, constitutes a very challenging area of research. The wireless channel is vulnerable to malicious nodes that can eavesdrop on data exchanges. Moreover, all nodes in an opportunistic network can act as routers and thus gain access to sensitive information while forwarding data. Node anonymity and data protection can be achieved using encryption. However, cryptography-based mechanisms are complex to handle and computationally expensive for the participating (mobile) nodes. We propose SimBet-BF, a privacy-preserving routing algorithm for opportunistic networks. The proposed algorithm builds atop the SimBet algorithm and uses Bloom filters to represent routing as well as other sensitive information included in data packets. SimBet-BF provides anonymous communication and avoids expensive cryptographic operations, while the functionality of the SimBet algorithm is not significantly affected. In fact, we show that the required security level can be achieved with a negligible routing performance trade-off.
Keywords: delay tolerant networks; delays; radio networks; telecommunication network routing; telecommunication security; Bloom filters; SimBet algorithm; cryptography based mechanisms; delay tolerant networks; eavesdrop data exchanges; expensive cryptographic operations; malicious nodes; mobile nodes; opportunistic networks; privacy preserving routing algorithm; wireless channel; Cryptography; Measurement; Peer-to-peer computing; Privacy; Protocols; Routing (ID#: 16-11179)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158148&isnumber=7158105
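The Bloom-filter representation that SimBet-BF relies on can be sketched as follows. A Bloom filter answers membership queries without storing the items themselves, which is why it can stand in for node identities in routing state; the filter size and hash count here are illustrative choices, not the paper's parameters.

```python
# Minimal Bloom filter: set k hash-derived bit positions per item; membership
# tests have no false negatives but may return false positives.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0                      # bit array packed into an int

    def _positions(self, item):
        for i in range(self.k):            # k independent positions via salted SHA-256
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("node-42")
assert "node-42" in bf                     # inserted items are always found
# absent items are *probably* reported absent; the false-positive rate
# grows with the number of insertions relative to m_bits
```

The privacy property exploited in such schemes is that the filter reveals only hashed bit positions, so an eavesdropper cannot enumerate the node identifiers it encodes, yet forwarding nodes can still test membership.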
A. K. Tyagi and N. Sreenath, “Location Privacy Preserving Techniques for Location Based Services over Road Networks,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 1319-1326. doi: 10.1109/ICCSP.2015.7322723
Abstract: With the rapid development of wireless and mobile technologies, the privacy of personal location information in location-based services (LBSs) for vehicular ad-hoc network (VANET) users is becoming an increasingly important issue. While LBSs provide enhanced functionalities, they open up new vulnerabilities that can be exploited to cause security and privacy breaches. During communication in LBSs, individuals (vehicle users) face privacy risks (for example location privacy, identity privacy, data privacy etc.) when providing personal location data to potentially untrusted LBSs. However, as vehicle users with mobile (or wireless) devices are highly autonomous and heterogeneous, it is challenging to design generic location privacy protection techniques with the desired level of protection. Location privacy is an important issue in vehicular networks since knowledge of a vehicle’s location can result in leakage of sensitive information. This paper focuses on and discusses both potential location privacy threats and preserving mechanisms in LBSs over road networks. The research proposed in this paper carries significant intellectual merit and potential broader impacts: (a) it investigates the impact of inferential attacks (for example inference attack, position co-relation attack, transition attack and timing attack etc.) on LBSs for VANET users, and proves the vulnerability of using long-term pseudonyms (or other approaches like silent period, random encryption period etc.) for camouflaging users’ real identities; (b) an effective and extensible location privacy architecture based on one approach, such as the mix zone model, combined with other approaches to protect location privacy is discussed; (c) the paper addresses the location privacy preservation problem in detail from a novel angle and provides a solid foundation for future research on protecting users’ location information.
Keywords: data privacy; mobile computing; risk management; road traffic; security of data; telecommunication security; vehicular ad hoc networks; VANET; extensible location privacy architecture; identity privacy; inference attack; intellectual merits; location privacy preserving techniques; location privacy threats; location-based services; long-term pseudonyms; mix zone model; mobile technologies; personal location information; position correlation attack; privacy breach; privacy risks; road networks; security breach; timing attack; transition attack; vehicle ad-hoc network; wireless technologies; Communication system security; Mobile communication; Mobile computing; Navigation; Privacy; Vehicles; Wireless communication; Location privacy; Location-Based Service; Mix zones; Mobile networks; Path confusion; Pseudonyms; k-anonymity (ID#: 16-11180)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322723&isnumber=7322423
N. W. Lo, M. C. Chiang, and C. Y. Hsu, “Hash-Based Anonymous Secure Routing Protocol in Mobile Ad Hoc Networks,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 55-62. doi: 10.1109/AsiaJCIS.2015.27
Abstract: A mobile ad hoc network (MANET) is composed of multiple wireless mobile devices in which an infrastructureless network with dynamic topology is built based on wireless communication technologies. Novel applications such as location-based services and personal communication Apps used by mobile users with handheld wireless devices utilize MANET environments. In consequence, communication anonymity and message security have become critical issues for MANET environments. In this study, a novel secure routing protocol with communication anonymity, named the Hash-based Anonymous Secure Routing (HASR) protocol, is proposed to support identity anonymity, location anonymity and route anonymity, and to defend against major security threats such as replay attack, spoofing, route maintenance attack, and denial of service (DoS) attack. Security analyses show that HASR can achieve both communication anonymity and message security with efficient performance in MANET environments.
Keywords: cryptography; mobile ad hoc networks; mobile computing; mobility management (mobile radio); routing protocols; telecommunication network topology; telecommunication security; DoS attack; HASR protocol; Hash-based anonymous secure routing protocol; MANET; denial of service attack; dynamic network topology; handheld wireless devices; location-based services; message security; mobile users; personal communication Apps; route maintenance attack; wireless communication technologies; wireless mobile devices; Cryptography; Mobile ad hoc networks; Nickel; Routing; Routing protocols; communication anonymity; message security; mobile ad hoc network; routing protocol (ID#: 16-11181)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153936&isnumber=7153836
K. Mrabet, F. E. Bouanani, and H. Ben-Azza, “A Secure Multi-Hops Routing for VANETs,” Wireless Networks and Mobile Communications (WINCOM), 2015 International Conference on, Marrakech, 2015, pp. 1-5. doi: 10.1109/WINCOM.2015.7381299
Abstract: Vehicular ad-hoc networks (VANETs) are a promising communication technology. They offer many applications which will improve traffic management and safety. Nevertheless, those applications have stringent security requirements, as they affect road traffic safety. Security requirements like authentication, privacy and integrity are crucial to VANETs, as they avoid attacks against vehicle-to-vehicle and vehicle-to-roadside communication. In this paper, we investigate the authentication and privacy issues in VANETs. We explore the Attribute Based Signature (ABS) primitive and its variants. We then select, from the existing ABS literature, an efficient scheme (the best known) that achieves both traceability and user privacy (anonymity). Finally, we propose a protocol for VANETs that uses traceable ABS in the general context of multi-hop routing.
Keywords: cryptographic protocols; data privacy; intelligent transportation systems; radio networks; telecommunication security; vehicular ad hoc networks; ABS; VANET; attribute based signature; road traffic safety; secure multihop routing; traffic management; vehicle-to-roadside communication; vehicle-to-vehicle communication; vehicular ad-hoc networks; Authentication; Cryptography; Privacy; Safety; Vehicles; Vehicular ad hoc networks; ABS/TABS schemes; Authentication; Security; Traceability; VANETs (ID#: 16-11182)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381299&isnumber=7381297
T. Ishitaki, T. Oda, and L. Barolli, “Application of Neural Networks and Friedman Test for User Identification in Tor Networks,” 2015 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA), Krakow, 2015, pp. 448-454. doi: 10.1109/BWCCA.2015.88
Abstract: Due to the amount of anonymity afforded to users of the Tor infrastructure, Tor has become a useful tool for malicious users. With Tor, users are able to compromise the non-repudiation principle of computer security. Also, hackers may potentially launch attacks such as DDoS or identity theft from behind Tor. For this reason, new systems and models are needed to detect or identify badly behaving users in Tor networks. In this paper, we present the application of Neural Networks (NNs) and the Friedman test for user identification in Tor networks. We used a back-propagation NN and constructed a Tor server, a Deep Web browser (Tor client) and a Surface Web browser. The client sends the browsing data to the Tor server using the Tor network. We used the Wireshark Network Analyzer to capture the data and then used the back-propagation NN to make the approximation. We present many simulation results for different numbers of hidden units considering the Tor client and the Surface Web client. The simulation results show that our simulation system has a good approximation and can be used for user identification in Tor networks.
Keywords: backpropagation; client-server systems; computer network security; neural nets; DDoS; Friedman test; The Onion Router; Tor client; Tor infrastructure; Tor networks; Tor server; Wireshark network analyzer; backpropagation NN; computer security; data browsing; deep Web browser; hackers; identity theft behind Tor; malicious users; neural networks; surface Web browser; surface Web client; user identification; Deep Web; Friedman Test; Hidden Unit; Intrusion Detection; Neural Networks; Tor Networks; User Identification (ID#: 16-11183)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424866&isnumber=7424228
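The back-propagation approach the abstract describes can be sketched in miniature. This is not the authors' system: the two toy "traffic fingerprint" features, network size, learning rate, and labels below are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """One hidden layer, trained by plain back-propagation."""
    def __init__(self, n_in, n_hid, seed=7):
        rnd = random.Random(seed)
        # One extra weight per unit for the bias input.
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [rnd.uniform(-1, 1) for _ in range(n_hid + 1)]

    def forward(self, x):
        self.xb = list(x) + [1.0]                      # input plus bias term
        self.h = [sigmoid(sum(w * v for w, v in zip(row, self.xb))) for row in self.w1]
        self.hb = self.h + [1.0]                       # hidden plus bias term
        return sigmoid(sum(w * v for w, v in zip(self.w2, self.hb)))

    def train(self, data, epochs=5000, lr=2.0):
        for _ in range(epochs):
            for x, t in data:
                y = self.forward(x)
                d_out = (y - t) * y * (1.0 - y)        # output-layer delta
                for j, hj in enumerate(self.h):        # back-propagate to hidden layer
                    d_hid = d_out * self.w2[j] * hj * (1.0 - hj)
                    for i, v in enumerate(self.xb):
                        self.w1[j][i] -= lr * d_hid * v
                for j, v in enumerate(self.hb):
                    self.w2[j] -= lr * d_out * v

# Toy "traffic fingerprints": label 1 = Tor client, 0 = Surface Web client.
data = [([0.2, 0.1], 0), ([0.1, 0.3], 0), ([0.9, 0.8], 1), ([0.8, 0.7], 1)]
net = TinyNet(n_in=2, n_hid=4)
net.train(data)
```

A real classifier would extract many features from Wireshark captures; the training loop itself is the standard gradient-descent back-propagation rule.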
V. Sharma and C. C. Shen, “Evaluation of an Entropy-Based K-Anonymity Model for Location Based Services,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 374-378. doi: 10.1109/ICCNC.2015.7069372
Abstract: As the market for cellular telephones and other mobile devices keeps growing, demand for new services arises to attract end users. Location Based Services (LBS) are becoming important to the success and attractiveness of next generation wireless systems. To access location-based services, mobile users have to disclose their location information to service providers and third party applications. This raises privacy concerns, which have hampered the widespread use of LBS. Location privacy mechanisms include anonymization, obfuscation, policy based schemes, k-anonymity and adding fake events; however, most existing solutions adopt the k-anonymity principle. We propose an entropy based location privacy mechanism to protect user information against attackers. We examine the effectiveness of the technique in continuous LBS scenarios, i.e., where users are moving and recurrently requesting Location Based Services, and we also evaluate the overall performance of the system along with its drawbacks.
Keywords: data protection; mobile handsets; mobility management (mobile radio); next generation networks; LBS; cellular telephone; entropy-based k-anonymity model evaluation; location based service; location privacy mechanism; mobile device; mobile user; next generation wireless system; policy based scheme; user information protection; Computational modeling; Conferences; Entropy; Measurement; Mobile communication; privacy; Query processing; Location Based Services (LBS); entropy; k-anonymity; privacy
(ID#: 16-11184)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069372&isnumber=7069279
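The entropy view of k-anonymity that the abstract builds on can be illustrated directly: Shannon entropy over the attacker's probability distribution on the k candidate users measures how anonymous the requester really is. This is a sketch of the general idea, not the paper's specific mechanism.

```python
import math

def anonymity_entropy(probs):
    """Shannon entropy (bits) of the attacker's probability distribution
    over candidate users; 2**H is the effective anonymity-set size."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# k-anonymity ideal: k = 4 users, each equally likely to be the requester.
uniform = [0.25, 0.25, 0.25, 0.25]
# Continuous LBS queries can skew the distribution (movement patterns leak).
skewed = [0.70, 0.10, 0.10, 0.10]

h_uniform = anonymity_entropy(uniform)  # log2(4) = 2.0 bits
h_skewed = anonymity_entropy(skewed)    # lower: anonymity has degraded
```

A uniform distribution over k users gives log2(k) bits; any skew lowers the effective anonymity-set size 2**H even though the nominal k is unchanged.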
H. Hasrouny, C. Bassil, A. E. Samhat, and A. Laouiti, “Group-Based Authentication in V2V Communications,” Digital Information and Communication Technology and its Applications (DICTAP), 2015 Fifth International Conference on, Beirut, 2015, pp. 173-177. doi: 10.1109/DICTAP.2015.7113193
Abstract: In this paper, we investigate a security architecture for V2V communication that ensures integrity, confidentiality, anonymity, authenticity and non-repudiation. Based on the IEEE 1609.2 standard, we propose group-based V2V authentication and communication for safety message dissemination as a lightweight solution, decentralized via group leaders (GLs), that is efficient, economical and applicable in real mode. We simulate the existing security solutions using the “Estinet” simulator and show that our group-based authentication proposal performs better than other schemes.
Keywords: IEEE standards; data integrity; message authentication; public key cryptography; vehicular ad hoc networks; Estinet simulator; IEEE 1609.2 standard integrity; V2V communication security architecture; VANET anonymity; group-based V2V authentication; message dissemination safety; message nonrepudiation; public key infrastructure; vehicle to vehicle communication confidentiality; Authentication; Decision support systems; Proposals; Safety; Standards; Vehicular and wireless technologies; IEEE 1609.2; PKI; Security; V2V communication; VANET; authentication (ID#: 16-11185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113193&isnumber=7113160
S. Doswell, D. Kendall, N. Aslam, and G. Sexton, “A Longitudinal Approach to Measuring the Impact of Mobility on Low-Latency Anonymity Networks,” Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, Dubrovnik, 2015, pp. 108-113. doi: 10.1109/IWCMC.2015.7289066
Abstract: The increasing mobility of Internet users is becoming an emerging issue for low-latency anonymity networks such as Tor. The increase in network churn, generated by a growing mobile client base recycling connections, could impact maintaining the critical balance between anonymity and performance. New combinatorial approaches for measuring both anonymity and performance need to be developed in order to identify critical changes to the network dynamics, and trigger intervention if and when required. We present q-factor, a novel longitudinal approach to measuring anonymity and performance within highly dynamic environments. By modelling q-factor, we show that the impact of mobility, over time, on anonymity is significant. However, by using q-factor, we are able to anticipate and significantly reduce the number of these critical events occurring. In order to make more effective strategic design and/or real-time network decisions in the future, low-latency anonymity networks will be required to adopt an even more proactive approach to network management. The potential impact from increasing mobile usage needs to be considered, as what may initially be perceived as a good solution, may in fact degrade, or in the worst case could destroy the anonymity of users over time.
Keywords: Internet; mobility management (mobile radio); Internet user mobility; longitudinal approach; low latency anonymity networks; mobile client base recycling connections; mobility impact; network dynamics; network management; real-time network decisions; trigger intervention; Bandwidth; Measurement; Mobile communication; Mobile computing; Q-factor; Recycling; Anonymity; Privacy-Enhancing Technology; Security Monitoring and Management (ID#: 16-11186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289066&isnumber=7288920
A. Algarni and L. Burd, “CommEasy: An Innovative Interactive Communication System for Promoting Communication and Participation,” Frontiers in Education Conference (FIE), 2015 IEEE, El Paso, TX, 2015, pp. 1-7. doi: 10.1109/FIE.2015.7344129
Abstract: The advancements made in handheld devices and the widespread use of these devices among users all over the world has opened up new avenues for the use of these devices in education. Many applications have been developed to work on these devices to support the teaching and learning process in all its dimensions. CommEasy is an innovative, interactive communication system for smart handheld devices based on Internet and WiFi technology. This system has been developed mainly to enhance communication and participation in distance-learning classrooms that use video-conferencing technology. It allows students to pose questions for the instructors using their own Apple smart handheld devices, guarantees them complete anonymity and allows the instructors to respond to their students. It also enables instructors to evaluate the learning of their students by posing questions with multiple answers to which students can respond through their devices. This paper concentrates on the role of CommEasy in enhancing teacher-student communication and interaction. The hypothesis to be tested is that CommEasy will increase the level of student participation in the distance-learning classroom. According to the results of the experiment conducted in King Saud University, the null hypothesis is rejected, and the experimental hypothesis mentioned above is accepted.
Keywords: Internet; computer aided instruction; distance learning; interactive systems; teaching; teleconferencing; video communication; wireless LAN; Apple smart handheld devices; CommEasy; King Saud University; Wi-Fi technology; distance learning classrooms; education; innovative interactive communication system; teacher-student communication; teaching; video conferencing technology; Education; Handheld computers; IEEE 802.11 Standard; Streaming media; PRS; communication; participations
(ID#: 16-11187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344129&isnumber=7344011
![]() |
Attribution 2015 |
Attribution of the source of an attack or the author of malware is a continuing problem in computer forensics. For the Science of Security community, it is an important issue related to human behavior, metrics, and composability. The research cited here was presented in 2015.
Novino Nirmal. A, Kyung-Ah Sohn, and T. S. Chung, “A Graph Model Based Author Attribution Technique for Single-Class E-Mail Classification,” Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, Las Vegas, NV, 2015, pp. 191-196. doi: 10.1109/ICIS.2015.7166592
Abstract: Electronic mails have increasingly replaced all written modes of communication for important correspondence, including personal and business transactions. An e-mail is given the same significance as a signed document. Hence, email impersonation through compromised accounts has become a major threat. In this paper, we have proposed an email style acquisition and classification model for authorship attribution that serves as an effective tool to prevent and detect email impersonation. The proposed model gains knowledge of the author’s email style by being trained only with sample email texts of the author and then identifies whether a given email text is a legitimate email of the author or not. Extracting the significant features that represent an author’s style from the available concise emails is a big challenge in email authorship attribution. We propose to use a graph-based model to precisely extract the unique feature set of the author. We use a one-class SVM classifier to deal with the single-class sample data, which consists of only true positive samples. Two classification models have been designed and compared. The first is a probability model based on the probability of occurrence of a feature in the specific email. The second technique is based on the inclusive compound probability of a feature appearing in a sentence of an email. Both models have been evaluated against the public Enron dataset.
Keywords: electronic mail; graph theory; support vector machines; author attribution technique; business transactions; classification model; electronic mails; email style acquisition; graph model; inclusive compound probability; one-class SVM classifier; personal transactions; public Enron dataset; single-class e-mail classification; single-class sample data; Compounds; Context; Electronic mail; Sensitivity; Support vector machines; Testing; Training; Author Attribution; Graph Model; Information Retrieval; One Class SVM; Stylometry; Text Classification (ID#: 16-10894)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166592&isnumber=7166553
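The single-class idea, learning only from an author's own emails and flagging anything that deviates, can be sketched without the paper's graph model or one-class SVM. The character-trigram profile, Euclidean distance, and 1.1 slack factor below are stand-in assumptions, not the authors' method.

```python
import math
from collections import Counter

def ngram_profile(text, n=3):
    """Normalized character n-gram frequency profile of a text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def distance(p, q):
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys))

class SingleClassAuthor:
    """Train on one author's emails only; flag texts far from their profile."""
    def fit(self, samples):
        self.profiles = [ngram_profile(s) for s in samples]
        keys = set().union(*self.profiles)
        self.centroid = {k: sum(p.get(k, 0.0) for p in self.profiles) / len(self.profiles)
                         for k in keys}
        # Threshold: worst distance of any training sample to the centroid.
        self.threshold = max(distance(p, self.centroid) for p in self.profiles)
        return self

    def is_legitimate(self, text):
        return distance(ngram_profile(text), self.centroid) <= self.threshold * 1.1

samples = [
    "please find the attached report for the quarterly review",
    "please see the attached file for the updated report",
    "find attached the report for your review and comments",
]
model = SingleClassAuthor().fit(samples)
```

The key property of the single-class setting is preserved: only true-positive samples are seen at training time, and the decision boundary is derived from them alone.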
A. F. Ahmed, R. Mohamed, B. Mostafa, and A. S. Mohammed, “Authorship Attribution in Arabic Poetry,” Intelligent Systems: Theories and Applications (SITA), 2015 10th International Conference on, Rabat, 2015, pp. 1-6. doi: 10.1109/SITA.2015.7358411
Abstract: In this paper, we present Arabic poetry as an authorship attribution task. Several features, such as characters, sentence length, word length, rhyme, and first word in a sentence, are used as input data for Markov chain methods. The data is filtered by removing the punctuation and alphanumeric marks present in the original text. The experimental data set was divided into two groups: a training dataset with known authors and a test dataset with unknown authors. In the experiment, a set of thirty-three poets from different eras was used. The experiment shows interesting results, with a classification precision of 96.96%.
Keywords: Markov processes; humanities; natural language processing; pattern classification; Arabic poetry; Markov chain classifier; alphanumeric marks; authorship attribution task; classification precision; poets; punctuation marks; rhyme; word length; Context; Feature extraction; Sea measurements; Strips; Training; Training data; Arabic Poetry; Authorship attribution; Markov Chain; Text Classification (ID#: 16-10895)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7358411&isnumber=7358370
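A character-level Markov chain attributor in the spirit of the abstract can be sketched as follows. The character-level transitions, add-one smoothing, and nominal alphabet size of 30 are illustrative choices, not the paper's exact feature set.

```python
import math
from collections import defaultdict

class MarkovAttributor:
    """One character-transition model per author; attribute a text to the
    author whose chain assigns it the highest log-likelihood."""
    def __init__(self):
        self.models = {}

    def train(self, author, text):
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
        self.models[author] = counts

    def score(self, author, text):
        counts = self.models[author]
        logp = 0.0
        for a, b in zip(text, text[1:]):
            total = sum(counts[a].values())
            # Add-one smoothing over a nominal alphabet of 30 symbols.
            logp += math.log((counts[a][b] + 1) / (total + 30))
        return logp

    def attribute(self, text):
        return max(self.models, key=lambda au: self.score(au, text))

attr = MarkovAttributor()
attr.train("author_A", "abab" * 25)
attr.train("author_B", "cdcd" * 25)
```

Unseen transitions are not assigned zero probability, so a single out-of-style character cannot veto an otherwise good match.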
E. Castillo, D. Vilariño, O. Cervantes, and D. Pinto, “Author Attribution Using a Graph Based Representation,” Electronics, Communications and Computers (CONIELECOMP), 2015 International Conference on, Cholula, 2015, pp. 135-142. doi: 10.1109/CONIELECOMP.2015.7086940
Abstract: Authorship attribution is the task of determining the real author of a given anonymous document. Even though different approaches exist in the literature, this problem has rarely been dealt with using document representations that employ graph structures. Most research works in the literature handle this problem by employing simple sequences of n words (n-grams), such as bigrams and trigrams. In this paper, an exploration of the use of graphs for representing document sentences is presented. These structures are used for carrying out experiments on the problem of authorship attribution. The experiments presented here attain approximately 79% accuracy, showing that the graph-based representation could be a way of encapsulating various levels of natural language description in a simple structure.
Keywords: graph theory; natural language processing; text analysis; anonymous document; author attribution; document sentence representation; graph based representation; graph structures; natural language descriptions; Feature extraction; Kernel; Semantics; Support vector machines; Syntactics; Topology; Writing (ID#: 16-10896)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086940&isnumber=7086811
M. Khonji, Y. Iraqi, and A. Jones, “An Evaluation of Authorship Attribution Using Random Forests,” Information and Communication Technology Research (ICTRC), 2015 International Conference on, Abu Dhabi, 2015, pp. 68-71. doi: 10.1109/ICTRC.2015.7156423
Abstract: Electronic text (e-text) stylometry aims at identifying the writing style of authors of electronic texts, such as electronic documents, blog posts, tweets, etc. Identifying such styles is quite attractive for identifying authors of disputed e-text, identifying their profile attributes (e.g. gender, age group, etc.), or even enhancing services such as search engines and recommender systems. Despite the success of Random Forests, their performance has not been evaluated on authorship attribution problems. In this paper, we present an evaluation of Random Forests in the problem domain of authorship attribution. Additionally, we have taken advantage of Random Forests’ robustness against noisy features by extracting a diverse set of features from the evaluated e-texts. Interestingly, the resultant model achieved the highest classification accuracy in all problems except one, where it misclassified only a single instance.
Keywords: feature extraction; text analysis; author attribution problems; authorship attribution; e-text stylometry; electronic text; feature extraction; random forests; Accuracy; Authentication; Feature extraction; Noise measurement; Radio frequency; Testing; Training
(ID#: 16-10897)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7156423&isnumber=7156393
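A Random Forest is a bootstrap-aggregated ensemble of randomized decision trees voting by majority. The toy sketch below uses single-split stumps instead of full trees and one numeric "style feature" per row, so it conveys only the bagging-and-voting mechanics, not a production forest such as the one evaluated in the paper.

```python
import random

def stump_fit(X, y):
    """Best single-feature threshold split by training accuracy."""
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            for pol in (1, -1):
                pred = [1 if pol * (row[f] - t) > 0 else 0 for row in X]
                acc = sum(p == yy for p, yy in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, pol)
    _, f, t, pol = best
    return lambda row: 1 if pol * (row[f] - t) > 0 else 0

def forest_fit(X, y, n_trees=25, seed=0):
    rnd = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        # Bootstrap: sample training rows with replacement for each tree.
        idx = [rnd.randrange(len(X)) for _ in range(len(X))]
        stumps.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    # Majority vote across the ensemble.
    return lambda row: int(sum(s(row) for s in stumps) > n_trees / 2)

X = [[0.1], [0.2], [0.3], [2.1], [2.2], [2.3]]
y = [0, 0, 0, 1, 1, 1]
predict = forest_fit(X, y)
```

The robustness to noisy features that the abstract mentions comes from the same mechanism scaled up: each tree sees a perturbed sample, and no single feature or tree dominates the vote.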
E. Nunes, N. Kulkarni, P. Shakarian, A. Ruef, and J. Little, “Cyber-Deception and Attribution in Capture-the-Flag Exercises,” 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Paris, 2015, pp. 962-965. doi: 10.1145/2808797.2809362
Abstract: Attributing the culprit of a cyber-attack is widely considered one of the major technical and policy challenges of cyber-security. The lack of ground truth for an individual responsible for a given attack has limited previous studies. Here, we overcome this limitation by leveraging DEFCON capture-the-flag (CTF) exercise data where the actual ground-truth is known. In this work, we use various classification techniques to identify the culprit in a cyberattack and find that deceptive activities account for the majority of misclassified samples. We also explore several heuristics to alleviate some of the misclassification caused by deception.
Keywords: pattern classification; security of data; DEFCON CTF exercise data; DEFCON capture-the-flag exercise data; capture-the-flag exercises; classification techniques; culprit attribution; cyber-attack; cyber-deception; cyber-security; Computer crime; Decision trees; Logistics; Payloads; Social network services; Support vector machines; Training (ID#: 16-10898)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7403662&isnumber=7403513
R. D. Guttman, J. A. Hammerstein, J. A. Mattson, and A. L. Schlackman, “Automated Failure Detection and Attribution in Virtual Environments,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-5. doi: 10.1109/THS.2015.7225309
Abstract: Creating reproducible experiments in a realistic cyber environment is a non-trivial challenge [1] [2]. Utilizing the STEPfwd platform, we have developed a system for easily delivering high-fidelity cyber environments to research participants anywhere in the world. In this paper we outline the operation method of the STEPfwd platform. Special focus will be given to realism enhancing capabilities offered in the environment, such as simulated user behavior and traffic generation. We then discuss how these realism enhancing capabilities are leveraged to perform automated failure detection and attribution within the environment.
Keywords: digital simulation; fault tolerant computing; virtual reality; STEPfwd platform; automated failure detection; high-fidelity cyber environments; realistic cyber environment; traffic generation; user behavior simulation; virtual environments; Electronic mail; Monitoring; Servers; Videos; Virtual machine monitors; Virtual machining (ID#: 16-10899)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225309&isnumber=7190491
Barathi Ganesh H B, Reshma U, and Anand Kumar M, “Author Identification Based on Word Distribution in Word Space,” Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, Kochi, 2015, pp. 1519-1523. doi: 10.1109/ICACCI.2015.7275828
Abstract: Author attribution has grown into an increasingly challenging area over the past decade. It has become an essential task in many sectors, such as forensic analysis, law and journalism, as it helps to identify the author of a document. Here, unigram/bigram features along with latent semantic features from word space were taken, and the similarity of a particular document was tested using Random forest tree, Logistic Regression and Support Vector Machine in order to create a global model. The dataset from the PAN Author Identification shared task 2014 was taken for processing. It has been observed that the proposed model shows state-of-the-art accuracy of 80%, which is significantly higher than the Author Identification PAN results of the year 2014.
Keywords: digital forensics; law; regression analysis; support vector machines; trees (mathematics); author attribution; author identification; forensic analysis; journalism; latent semantic features; logistic regression; random forest tree; support vector machine; unigram/bigram features; word distribution; word space; Accuracy; Computational modeling; Feature extraction; Logistics; Semantics; Support vector machines; Vegetation; Author attribution; Logistic Regression; PAN Author Identification 2014; Random forest tree; Support Vector Machine (ID#: 16-10900)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275828&isnumber=7275573
J. Rivera, “Achieving Cyberdeterrence and the Ability of Small States to Hold Large States at Risk,” Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, Tallinn, 2015, pp. 7-24. doi: 10.1109/CYCON.2015.7158465
Abstract: Achieving cyberdeterrence is a seemingly elusive goal in the international cyberdefense community. The consensus among experts is that cyberdeterrence is difficult at best and perhaps impossible, due to difficulties in holding aggressors at risk, the technical challenges of attribution, and legal restrictions such as the UN Charter’s prohibition against the use of force. Consequently, cyberspace defenders have prioritized increasing the size and strength of the metaphorical “walls” in cyberspace over facilitating deterrent measures.
Keywords: security of data; UN Charter prohibition; cyberdeterrence; cyberspace defenders; Cyberspace; Force; Internet; Lenses; National security; Power measurement; attribution; deterrence; use of force (ID#: 16-10901)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158465&isnumber=7158456
C. Wei, T. Wu, and H. Fu, “Plain-to-Plain Scan Registration Based on Geometric Distributions of Points,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1194-1199. doi: 10.1109/ICInfA.2015.7279468
Abstract: Scan registration plays a critical role in odometry, mapping and localization for Autonomous Ground Vehicles. In this paper, we propose to adopt a probabilistic framework to model the locally planar patch distributions of candidate points from two or more consecutive scans instead of the original point-to-point mode. This can be regarded as a plain-to-plain measurement metric, which ensures a very high confidence in the normal orientation of aligned patches. We take into account the geometric attribution of the scanning beam to pick out feature points, which reduces the number of selected points to a lower level. The optimization of the transform is achieved by combining high-frequency but coarse scan-to-scan motion estimation with low-frequency but fine scan-to-map batch adjustment. We validate the effectiveness of our method by qualitative tests on our collected point clouds and quantitative comparisons on the public KITTI odometry datasets.
Keywords: SLAM (robots); automatic guided vehicles; distance measurement; motion estimation; optimisation; probability; transforms; autonomous ground vehicle; coarse scan-to-scan motion estimation; geometric attribution; geometric distributions; localization; mapping; odometry; plain-to-plain measurement metric; plain-to-plain scan registration; planar patch distributions; probabilistic framework; qualitative tests; scan-to-map batch adjustment; transform optimization; Feature extraction; Laser radar; Probabilistic logic; Three-dimensional displays; Trajectory; Transforms; Vehicles; batch adjustment (ID#: 16-10902)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279468&isnumber=7279248
M. Spitters, F. Klaver, G. Koot, and M. v. Staalduinen, “Authorship Analysis on Dark Marketplace Forums,” Intelligence and Security Informatics Conference (EISIC), 2015 European, Manchester, 2015, pp. 1-8. doi: 10.1109/EISIC.2015.47
Abstract: Anonymity networks like Tor harbor many underground markets and discussion forums dedicated to the trade of illegal goods and services. As they are gaining in popularity, the analysis of their content and users is becoming increasingly urgent for many different parties, ranging from law enforcement and security agencies to financial institutions. A major issue in cyber forensics is that anonymization techniques like Tor’s onion routing have made it very difficult to trace the identities of suspects. In this paper we propose classification set-ups for two tasks related to user identification, namely alias classification and authorship attribution. We apply our techniques to data from a Tor discussion forum mainly dedicated to drug trafficking, and show that for both tasks we achieve high accuracy using a combination of character-level n-grams, stylometric features and timestamp features of the user posts.
Keywords: law; marketing; security of data; stock markets; Tor harbor; Tor onion routing; anonymity networks; authorship analysis; dark marketplace forums; financial institutions; illegal goods; illegal services; law enforcement; security agencies; underground markets; Discussion forums; Distance measurement; Drugs; Message systems; Roads; Security; Writing; alias detection; author attribution; dark web; machine learning; stylometric analysis; text mining (ID#: 16-10903)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7379716&isnumber=7379706
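The character-level n-gram component of the alias task can be sketched as a pooled-profile similarity. The cosine measure, trigram order, and 0.6 threshold are illustrative assumptions; the paper combines n-grams with stylometric and timestamp features inside a trained classifier.

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Raw character n-gram counts of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def same_author(posts_a, posts_b, threshold=0.6):
    """Compare two forum aliases by the cosine similarity of their pooled
    character-trigram profiles (threshold is illustrative)."""
    return cosine(char_ngrams(" ".join(posts_a)),
                  char_ngrams(" ".join(posts_b))) >= threshold
```

Pooling all of an alias's posts before profiling smooths out per-post noise, which matters on forums where individual messages are short.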
Zhi-Ying Lv, Xi-Nong Liang, Xue-Zhang Liang, and Li-Wei Zheng, “A Fuzzy Multiple Attribute Decision Making Method Based on Possibility Degree,” Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, Zhangjiajie, 2015, pp. 450-454. doi: 10.1109/FSKD.2015.7381984
Abstract: A fuzzy multiple attribute decision making method is investigated, in which the weights are given by interval numbers and the attribute values are given in the form of triangular fuzzy numbers or linguistic terms. A possibility degree formula for the comparison between two trapezoidal fuzzy numbers is proposed. According to the ordered weighted average (OWA) operator, an approach is presented to aggregate the possibility matrices based on attributes, and then the most desirable alternative is selected. This fuzzy multiple attribute decision making method is applied to financial investment evaluation; the set of attributes of the decision making program is built by analyzing financial and accounting reports in the same industry. Finally, a numerical example is provided to demonstrate the practicality and feasibility of the proposed method.
Keywords: accounting; decision theory; financial management; fuzzy set theory; investment; possibility theory; OWA operator; accounting reports; financial investment evaluation; fuzzy multiple attribute decision making method; linguistic terms; ordered weighted average operator; possibility degree formula; trapezoidal fuzzy numbers; triangular fuzzy numbers; Decision making; Indexes; Industries; Information technology; Investment; Open wireless architecture; Pragmatics; OWA aggregation Operators; investment options; multiple attribution decision making; possibility degree; trapezoidal fuzzy number (ID#: 16-10904)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381984&isnumber=7381900
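Two building blocks the abstract names can be written down concretely. The OWA operator is standard (sort descending, then take a weighted average); the possibility-degree formula below is a common interval-number version, shown only as a simpler analogue of the paper's trapezoidal-fuzzy-number comparison.

```python
def owa(values, weights):
    """Ordered weighted average: sort values descending, dot with weights.
    Weights [1,0,...] give max, [...,0,1] give min, uniform gives the mean."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(values) == len(weights)
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def possibility_degree(a, b):
    """P(A >= B) for interval numbers A = [a1, a2], B = [b1, b2]; a common
    formulation that the trapezoidal version generalizes."""
    la, lb = a[1] - a[0], b[1] - b[0]
    if la + lb == 0:
        return 1.0 if a[0] >= b[0] else 0.0
    return min(max((a[1] - b[0]) / (la + lb), 0.0), 1.0)
```

In the method described, such pairwise possibility degrees fill a comparison matrix per attribute, and OWA aggregation across attributes ranks the alternatives.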
O. Granichin, N. Kizhaeva, D. Shalymov, and Z. Volkovich, “Writing Style Determination Using the KNN Text Model,” Intelligent Control (ISIC), 2015 IEEE International Symposium on, Sydney, NSW, 2015, pp. 900-905. doi: 10.1109/ISIC.2015.7307296
Abstract: The aim of the paper is writing style investigation. The method used is based on a re-sampling approach. We present the text as a series of characters generated by distinct probability sources. A re-sampling procedure is applied in order to simulate samples from the texts. To check whether samples are generated from the same population, we use a KNN-based two-sample test. The proposed method shows a high ability to distinguish a variety of different texts.
Keywords: probability; sampling methods; text analysis; KNN text model; KNN-based two-sample test; distinct probability sources; resampling approach; writing style determination; writing style investigation; Gaussian distribution; Histograms; Lead; Probability distribution; Sociology; Testing; Writing; Writing style; authorship attribution; re-sampling; twosample test (ID#: 16-10905)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7307296&isnumber=7307273
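A KNN-based two-sample test asks, for each point in the pooled samples, how many of its k nearest neighbours come from the same sample: values near 1 suggest two distinct populations, while values near the null expectation suggest one. A one-dimensional sketch follows; the statistic's exact form in the paper may differ.

```python
def knn_two_sample_stat(xs, ys, k=3):
    """Fraction of pooled points whose k nearest neighbours come from
    the same sample as the point itself (1-D, brute force)."""
    pooled = [(v, 0) for v in xs] + [(v, 1) for v in ys]
    same = 0
    for i, (v, lab) in enumerate(pooled):
        # Sort all other points by distance; keep the k nearest.
        neigh = sorted((abs(v - w), l)
                       for j, (w, l) in enumerate(pooled) if j != i)[:k]
        same += sum(1 for _, l in neigh if l == lab)
    return same / (k * len(pooled))
```

For text, each "point" would be a feature vector of a re-sampled fragment rather than a scalar, but the same-neighbour fraction is computed identically.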
Kyounghun Kim, Yunseok Noh, and S. B. Park, “Detecting Multiple Userids on Korean Social Media for Mining TV Audience Response,” TENCON 2015 - 2015 IEEE Region 10 Conference, Macao, 2015, pp. 1-4. doi: 10.1109/TENCON.2015.7373121
Abstract: Possession of multiple userids by a single user happens when two or more userids actually belong to the same user. In the analysis of audience response to a TV program, it is important to detect these multi-id users because they often use the multiple ids to manipulate audience response or to take illegal profits. Detecting multiple userids of a single user is similar in nature to authorship attribution in terms of identifying authorship for given arbitrary texts. The conventional supervised techniques for authorship attribution, however, are difficult to apply directly to the problem of multiple userid detection. This is because we do not know the real authors, and multiple userids may belong to the same author. In addition, since we cannot have all authors in advance, userids cannot be treated as classes. This paper proposes a method of learning the element-wise differences between multiple userids. Each userid is represented as a feature vector from its postings on web social media. The similarity vector between two userid vectors can then be obtained by taking their element-wise difference. With the similarity vectors, we train the similarity patterns for detecting whether multiple userids belong to the same user or not. In order to solve the problem successfully, we present six features which are effective for Korean social media. We conducted comprehensive experiments on a Korean social media dataset. The experimental results show that the proposed similarity learning method with all presented features is successful for detecting multiple userids on Korean social media.
Keywords: data mining; learning (artificial intelligence); social networking (online); television broadcasting; Korean social media; TV audience response mining; Web social media; authorship attribution; element-wise difference; illegal profits; multiple userids detection; similarity learning method; supervised technique; Feature extraction; Learning systems; Media; Probabilistic logic; TV; Training; Writing (ID#: 16-10906)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373121&isnumber=7372693
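The element-wise difference construction is easy to make concrete: represent each userid as a feature vector, take absolute differences per element, and feed the result to a classifier. The linear scorer, weights, and bias below are placeholders for the trained model described in the abstract.

```python
def similarity_vector(u, v):
    """Element-wise absolute difference between two userid feature vectors."""
    return [abs(a - b) for a, b in zip(u, v)]

def same_user(u, v, weights, bias):
    """Toy linear decision on the similarity vector: small weighted
    differences (below the bias) are taken to mean one underlying user."""
    score = bias - sum(w * d for w, d in zip(weights, similarity_vector(u, v)))
    return score > 0
```

The point of the construction is that the classifier learns over difference patterns, not over userids as classes, so it generalizes to authors never seen at training time.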
I. Chepurna and M. Makrehchi, “Exploiting Class Bias for Discovery of Topical Experts in Social Media,” 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, 2015, pp. 64-71. doi: 10.1109/ICDMW.2015.198
Abstract: Discovering the contexts of a user’s expertise can be a challenging task, especially if no explicit attribution is provided. With more professionals adopting social networks as a means of communicating with their colleagues and broadcasting updates on the area of their competence, it is crucial to detect such individuals automatically. This would not only allow for better follower recommendation, but would also help to mine valuable insights and emerging signals in different communities. We posit that topical groups have their own unique semantic signatures. Hence, we can treat identification of an expert’s topical attribution as a binary classification task, exploiting the class bias to generate training samples without any manual labor. In this work, we present profile- and behavior-based models to explore experts’ topicality. While the former focuses on the static profile of user activity, the latter takes into account the consistency and dynamics of a topic in a user’s feed. We also propose a naive baseline tailored to the domain used in evaluation. All models are assessed on a case study of the Twitter investment community.
Keywords: social networking (online); Twitter investment community; behavior-based models; binary classification task; class bias; experts topicality; follower recommendation; profile-based models; semantic signatures; social media; social networks; topic consistency; topic dynamics; topical attribution; topical experts discovery; user activity; valuable insights; Context; Media; Semantics; Stock markets; Training; Twitter (ID#: 16-10907)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7395654&isnumber=7395635
C. Napoli, E. Tramontana, G. L. Sciuto, M. Wozniak, R. Damaševičius, and G. Borowik, “Authorship Semantical Identification Using Holomorphic Chebyshev Projectors,” Computer Aided System Engineering (APCASE), 2015 Asia-Pacific Conference on, Quito, 2015, pp. 232-237. doi: 10.1109/APCASE.2015.48
Abstract: Text attribution and classification, for both information retrieval and analysis, have become one of the main issues in the matter of security, trust and copyright preservation. This paper proposes an innovative approach for text classification using Chebyshev polynomials and holomorphic transforms of the coefficients space. The main advantage of this choice lies in the generality and robustness of the proposed semantical identifier, which can be applied to various contexts and lexical domains without any modification.
Keywords: copyright; information retrieval; pattern classification; security of data; text analysis; trusted computing; Chebyshev polynomial; authorship semantical identification; coefficients space; copyright preservation; holomorphic Chebyshev projector; holomorphic transform; information analysis; information retrieval; security; semantical identifier; text attribution; text classification; trust; Chebyshev approximation; Databases; Feature extraction; Modeling; Polynomials; Radiation detectors; Transforms; authorship identification; data mining; natural language; neural networks; text mining (ID#: 16-10908)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287025&isnumber=7286975
B. Caillat, B. Gilbert, R. Kemmerer, C. Kruegel, and G. Vigna, “Prison: Tracking Process Interactions to Contain Malware,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1282-1291. doi: 10.1109/HPCC-CSS-ICESS.2015.297
Abstract: Modern operating systems provide a number of different mechanisms that allow processes to interact. These interactions can generally be divided into two classes: inter-process communication techniques, which a process supports to provide services to its clients, and injection methods, which allow a process to inject code or data directly into another process’s address space. Operating systems support these mechanisms to enable better performance and to provide simple and elegant software development APIs that promote cooperation between processes. Unfortunately, process interaction channels introduce problems at the end-host related to malware containment and the attribution of malicious actions. In particular, host-based security systems rely on process isolation to detect and contain malware. However, interaction mechanisms allow malware to manipulate a trusted process to carry out malicious actions on its behalf. In this case, existing security products will typically either ignore the actions or mistakenly attribute them to the trusted process. For example, a host-based security tool might be configured to deny untrusted processes access to the network, but malware could circumvent this policy by abusing a (trusted) web browser to gain access to the Internet. In short, an effective host-based security solution must monitor and take into account interactions between processes. In this paper, we present Prison, a system that tracks process interactions and prevents malware from leveraging benign programs to fulfill its malicious intent. To this end, an operating system kernel extension monitors the various system services that enable processes to interact, and the system analyzes the calls to determine whether or not the interaction should be allowed. Prison can be deployed as an online system for tracking and containing malicious process interactions to effectively mitigate the threat of malware. The system can also be used as a dynamic analysis tool to aid an analyst in understanding a malware sample’s effect on its environment.
Keywords: Internet; application program interfaces; invasive software; online front-ends; operating system kernels; software engineering; system monitoring; Prison; Web browser; code injection; dynamic analysis tool; host-based security solution; host-based security systems; injection method; interprocess communication technique; malicious action attribution; malware containment; operating system kernel extension; process address space; process interaction tracking; process isolation; software development API; trusted process; Browsers; Kernel; Malware; Monitoring; inter-process communication; prison; windows (ID#: 16-10909)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336344&isnumber=7336120
L.-Y. Chen, P.-M. Lee, and T.-C. Hsiao, “A Sensor Tagging Approach for Reusing Building Blocks of Knowledge in Learning Classifier Systems,” Evolutionary Computation (CEC), 2015 IEEE Congress on, Sendai, 2015, pp. 2953-2960. doi: 10.1109/CEC.2015.7257256
Abstract: During the last decade, the extraction and reuse of building blocks of knowledge in the learning process of the Extended Classifier System (XCS) in the Multiplexer (MUX) problem domain has been demonstrated to be feasible by using Code Fragments (CF) (i.e., a tree-based structure ordinarily used in the field of Genetic Programming (GP)) as the representation of classifier conditions (the resulting system was called XCSCFC). However, the use of the tree-based structure may lead to the bloating problem and an increase in time complexity when the tree grows deep. Therefore, we propose a novel representation of classifier conditions for the XCS, named Sensory Tag (ST). The XCS with the ST as the input representation is called XCSSTC. Experiments with the proposed method were conducted in the MUX problem domain. The results indicate that the XCSSTC is capable of reusing building blocks of knowledge in MUX problems. The current study also discusses two different aspects of reusing building blocks of knowledge; specifically, we propose the “attribution selection” part and the “logical relation between the attributes” part.
Keywords: feature selection; learning (artificial intelligence); pattern classification; trees (mathematics); CF; XCS; attribution selection; code fragment; extended classifier system; knowledge building block; learning process; sensor tagging approach; tree-based structure; Accuracy; Encoding; Impedance matching; Indexes; Multiplexing; Sociology; Statistics; Building Blocks; Extended Classifier System (XCS); Hash table; Pattern Recognition; Scalability; Sensory Tag (ID#: 16-10910)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257256&isnumber=7256859
S. Y. Chang, Y. C. Hu, and Z. Liu, “Securing Wireless Medium Access Control Against Insider Denial-of-Service Attackers,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 370-378. doi: 10.1109/CNS.2015.7346848
Abstract: In a wireless network, users share a limited resource: bandwidth. To improve spectral efficiency, the network dynamically allocates channel resources and, to avoid collisions, has its users cooperate with each other using a medium access control (MAC) protocol. In a MAC protocol, the users exchange control messages to establish more efficient data communication, but such MAC assumes user compliance and can be detrimental when a user misbehaves. An attacker who has compromised the network can launch a two-pronged denial-of-service (DoS) attack that is more devastating than an outsider attack: first, it can send excessive reservation requests to waste bandwidth, and second, it can focus its power on jamming those channels that it has not reserved. Furthermore, the attacker can falsify information to skew the network control decisions in its favor. To defend against such insider threats, we propose a resource-based channel access scheme that holds the attacker accountable for its channel reservation. Building on the randomization technology of spread spectrum to thwart outsider jamming, our solution comprises a bandwidth allocation component to nullify excessive reservations, bandwidth coordination to resolve over-reserved and under-reserved spectrum, and power attribution to determine each node’s contribution to the received power. We analyze our scheme theoretically and validate it with a WARP-based testbed implementation and MATLAB simulations. Our results demonstrate superior performance over typical solutions that bypass MAC control when facing an insider adversary, and our scheme effectively nullifies insider attacker threats while retaining the MAC benefits for collaborative users.
Keywords: access control; access protocols; radio access networks; telecommunication security; DoS attack; MAC protocol; MATLAB; WARP; channel reservation; data communication; denial-of-service attackers; medium access control protocol; over-reserved spectrum; power attribution; resource-based channel access; spectral efficiency; under-reserved spectrum; wireless medium access control; wireless network; Bandwidth; Communication system security; Data communication; Jamming; Media Access Protocol; Wireless communication (ID#: 16-10911)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346848&isnumber=7346791
C. H. Lin and J. L. Shih, “A Preliminary Study of Using Laban Movement Analysis to Observe the Change of Teenagers’ Bodily Expressions of Emotions in Game-Based Learning,” Advanced Applied Informatics (IIAI-AAI), 2015 IIAI 4th International Congress on, Okayama, 2015, pp. 329-334. doi: 10.1109/IIAI-AAI.2015.260
Abstract: This study aims to develop a recognition system for bodily expressions of emotions (BEE) that can be applied to digital kinetic games. There were three stages to the study. First, 76 emotion and action terms frequently used by teenagers were selected; they were quantized into four quadrants and gathered into 16 clusters. The second stage was to construct a database of emotional motions: the researchers collected data on body motions expressed in accordance with the 16 emotion terms and classified the attributions of the body motions using the particular factors of Effort and Space Harmony based on Laban Movement Analysis (LMA). The last stage was to construct the BEE algorithms based on the attribution database built in the earlier stage. Finally, the researchers compared the results of the digital recognition system with the database built in the second stage to secure the accuracy of the recognition system. Once built, the body recognition system would be an important research tool and mechanism for recognizing the emotional changes of a player during kinetic game play, and for comparing learning conditions with learning effectiveness in further investigations.
Keywords: computer aided instruction; computer games; database management systems; emotion recognition; BEE algorithms; Laban movement analysis; attribution database; bodily expressions of emotions recognition system; body motions; digital kinetic games; digital recognition system; emotional motion database; learning conditions; learning effectiveness; space harmony; teenager bodily expression change; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Databases; Emotion recognition; Games; Kinetic theory; bodily expressions of emotions; emotional motions; kinetic game (ID#: 16-10912)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373925&isnumber=7373854
H. Ghaemmaghami, D. Dean, and S. Sridharan, “A Cluster-Voting Approach for Speaker Diarization and Linking of Australian Broadcast News Recordings,” Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, South Brisbane, QLD, 2015, pp. 4829-4833. doi: 10.1109/ICASSP.2015.7178888
Abstract: We present a clustering-only approach to the problem of speaker diarization to eliminate the need for the commonly employed and computationally expensive Viterbi segmentation and realignment stage. We use multiple linear segmentations of a recording and carry out complete-linkage clustering within each segmentation scenario to obtain a set of clustering decisions for each case. We then collect all clustering decisions, across all cases, to compute a pairwise vote between the segments and conduct complete-linkage clustering to cluster them at a resolution equal to the minimum segment length used in the linear segmentations. We use our proposed cluster-voting approach to carry out speaker diarization and linking across the SAIVT-BNEWS corpus of Australian broadcast news data. We compare our technique to an equivalent baseline system with Viterbi realignment and show that our approach can outperform the baseline technique with respect to the diarization error rate (DER) and attribution error rate (AER).
Keywords: pattern clustering; speaker recognition; speech synthesis; AER; Australian broadcast news recordings; DER; SAIVT-BNEWS; Viterbi segmentation; attribution error rate; cluster-voting approach; complete-linkage clustering; diarization error rate; minimum segment length; multiple linear segmentations; segmentation scenario; speaker diarization; Adaptation models; Hidden Markov models; Joining processes; Measurement; Reliability; Speech; Viterbi algorithm; Viterbi realignment; cluster-voting (ID#: 16-10913)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178888&isnumber=7177909
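The pairwise-vote step described in the abstract above can be illustrated with a toy example. The segment labels, vote threshold, and distances below are invented for illustration; in the paper, each per-segmentation clustering comes from complete-linkage over speaker features within one linear segmentation scenario:

```python
import numpy as np

# Toy stand-in: three coarse "linear segmentations" of one recording, each
# expressed as a speaker label per minimum-length segment (labels invented).
clusterings = [
    [0, 0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 1],
]
n = len(clusterings[0])

# Pairwise vote: fraction of segmentation scenarios that put i and j together.
votes = np.zeros((n, n))
for labels in clusterings:
    for i in range(n):
        for j in range(n):
            votes[i, j] += labels[i] == labels[j]
votes /= len(clusterings)

def complete_linkage(dist, threshold):
    """Agglomerate while the *maximum* inter-cluster distance stays under
    the threshold (complete linkage)."""
    clusters = [[i] for i in range(len(dist))]
    while True:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = max(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < threshold and (best is None or d < best[0]):
                    best = (d, a, b)
        if best is None:
            return clusters
        _, a, b = best
        clusters[a] += clusters.pop(b)

# Cluster at segment resolution using distance = 1 - vote.
final = complete_linkage(1.0 - votes, threshold=0.5)
print(sorted(sorted(c) for c in final))  # → [[0, 1, 4], [2, 3, 5]]
```

Here segments 4 and 5 get different labels in different segmentation scenarios, and the vote matrix resolves the disagreement by majority across scenarios, which is the role Viterbi realignment plays in the baseline the paper compares against.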
F. Palmetshofer, D. Schmidt, D. Heinke, N. Gehrke, and U. Steinhoff, “The Volume Fraction of Iron Oxide in a Certain Particle Size Range Determines the Harmonic Spectrum of Magnetic Tracers,” Magnetic Particle Imaging (IWMPI), 2015 5th International Workshop on, Istanbul, 2015, pp. 1-1. doi: 10.1109/IWMPI.2015.7107057
Abstract: Magnetic Particle Spectroscopy (MPS) is a well established method to characterize the signal strength of MPI tracers. To better understand the signal generation it is of considerable interest to identify the optimum particle size or size range causing a maximum signal. Yet, these tracers often exhibit a broad distribution of core diameters impeding an attribution of the signal strength to a specific particle size or size range. In our work, we present a combined evaluation of dynamic and static magnetization measurements to better understand the relation between particle size distribution and dynamic signal generation.
Keywords: biomagnetism; biomedical imaging; iron compounds; magnetic particles; magnetisation; particle size; Fe2O3; core diameters; dynamic magnetization measurements; dynamic signal generation; harmonic spectrum; iron oxide; magnetic particle spectroscopy; magnetic tracers; optimum particle size; particle size distribution; signal strength; static magnetization measurements; volume fraction; Atmospheric measurements; Correlation; Harmonic analysis; Iron; Magnetic field measurement; Particle measurements; Size measurement (ID#: 16-10914)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107057&isnumber=7106981
H. Cai and T. Wolf, “Source Authentication and Path Validation with Orthogonal Network Capabilities,” Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on, Hong Kong, 2015, pp. 111-112. doi: 10.1109/INFCOMW.2015.7179368
Abstract: In-network source authentication and path validation are fundamental primitives for constructing security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, most existing approaches are based on cryptographic techniques. The high computational cost of cryptographic operations makes these techniques fall short in the data plane of the network, where potentially every packet needs to be checked at gigabit-per-second link rates in the future Internet. In this paper, we propose a new protocol that uses a set of orthogonal sequences as credentials, enabling low-overhead verification in routers. Our evaluation of a prototype experiment demonstrates the fast verification speed and low storage consumption of our protocol, while providing reasonable security properties.
Keywords: Internet; authorisation; computer network security; cryptographic protocols; Gigabit per second link rates; cryptographic operations; in-network source authentication; orthogonal network capabilities; path validation; Authentication; Conferences; Cryptography; Optimized production technology; Routing protocols (ID#: 16-10915)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179368&isnumber=7179273
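The orthogonality property that replaces cryptographic checks in the abstract above can be sketched with Walsh-Hadamard sequences. This is only a toy illustration under assumed details (Hadamard rows as credentials, a single dot-product check); the paper's actual protocol and credential assignment are more involved:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two.
    The rows form a set of mutually orthogonal +/-1 sequences."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)            # 8 orthogonal length-8 credential sequences
source_id = 3
marking = H[source_id]     # credential carried in the packet header

def verify(marking, claimed_id):
    """Router-side check: one dot product instead of a cryptographic
    operation. Only the matching sequence correlates to full energy."""
    return int(marking @ H[claimed_id]) == H.shape[1]

print(verify(marking, 3))  # True  — correct source
print(verify(marking, 5))  # False — orthogonal rows, dot product is 0
```

The verification cost is a single length-n inner product per packet, which is what makes this style of check plausible at line rate where per-packet signatures are not.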
A. Thekkilakattil and G. Dodig-Crnkovic, “Ethics Aspects of Embedded and Cyber-Physical Systems,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 39-44. doi: 10.1109/COMPSAC.2015.41
Abstract: The growing complexity of software employed in the cyber-physical domain calls for a thorough study of both its functional and extra-functional properties. Ethical aspects are among the important extra-functional properties, covering the whole life cycle from design, development, and deployment/production to the use of cyber-physical systems. One of the ethical challenges involved is identifying the responsibilities of each stakeholder associated with the development and use of a cyber-physical system. This challenge is made even more pressing by the introduction of autonomous, increasingly intelligent systems that can perform functions without human intervention, given the lack of experience, best practices, and policies for such technology. In this article, we provide a framework for responsibility attribution based on the amount of autonomy and automation involved in AI-based cyber-physical systems. Our approach enables traceability of anomalous behaviors back to the responsible agents, be they human or software, allowing us to identify and separate the “responsibility” of the decision-making software from human responsibility. This provides us with a framework to accommodate the ethical “responsibility” of software in AI-based cyber-physical systems that will be deployed in the future, underscoring the role of ethics as an important extra-functional property. Finally, this systematic approach makes apparent the need for rigorous communication protocols between the different actors associated with the development and operation of cyber-physical systems, which further identifies the ethical challenges involved in the form of group responsibilities.
Keywords: artificial intelligence; computational complexity; decision making; embedded systems; ethical aspects; AI based cyber-physical systems; cyber-physical domain; decision-making software; ethical responsibility; ethics aspects; extra-functional properties; group responsibilities; human responsibility; intelligent systems; software complexity; Artificial intelligence; Cyber-physical systems; Ethical aspects; Ethics; Safety; Software; Stakeholders; Extra-functional Properties; Software-responsibility (ID#: 16-10916)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273594&isnumber=7273573
D. S. Romanovskiy, A. V. Solomonov, S. A. Tarasov, I. A. Lamkin, G. B. Galiev, and S. S. Pushkarev, “Low Temperature Photoluminescence and Photoreflectance of Metamorphic HEMT Structures with High Mole Fraction of In,” Young Researchers in Electrical and Electronic Engineering Conference (EIConRusNW), 2015 IEEE NW Russia, St. Petersburg, 2015, pp. 36-39. doi: 10.1109/EIConRusNW.2015.7102227
Abstract: Low-temperature photoluminescence and photoreflectance have been studied in several metamorphic HEMT (MHEMT) heterostructures with In0.7Ga0.3As active regions and different buffer layer designs. It was found that structures with a step-graded metamorphic buffer have better quality. It was also shown that mismatched superlattices in the metamorphic buffer can influence the half-width of the photoluminescence spectra. The possible attribution of the photoluminescence spectral lines and their thermal behaviour are critically discussed. The photoreflectance spectrum shows many features in the energy region where no PL features are observed.
Keywords: III-V semiconductors; gallium arsenide; high electron mobility transistors; indium compounds; photoluminescence; photoreflectance; superlattices; In high mole fraction; In0.7Ga0.3As; MHEMT; low temperature photoluminescence; metamorphic HEMT structures; mismatched superlattices; photoluminescence spectra; photoreflectance spectrum; step-graded metamorphic buffer; Biology; Indium gallium arsenide; mHEMTs; HEMTs; low temperature; metamorphic buffer (ID#: 16-10917)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102227&isnumber=7102217
Y. Wang, S. Qiu, C. Gao, and J. Wei, “Cluster Analysis on the Service Oriented Alliance Enterprise Manufacturing Resource,” Advanced Mechatronic Systems (ICAMechS), 2015 International Conference on, Beijing, 2015, pp. 1-4. doi: 10.1109/ICAMechS.2015.7287118
Abstract: Alliance manufacturing enterprises face resource problems such as heterogeneous types, non-uniform standards, and high repeatability. Under a cloud manufacturing service platform, the use of cluster analysis to classify the intelligent resources of alliance enterprises was studied. The same or similar resources are grouped into one resource cluster, which resolves the unique attribution of resources. According to cloud user needs, resources are searched and matched within a cluster rather than across the entire resource set. Using an evaluation method based on dynamic manufacturing capacity, optimal resources are provided, which improves resource utilization. Finally, the approach was verified on an instance.
Keywords: cloud computing; customer services; manufacturing systems; production engineering computing; statistical analysis; cloud manufacturing service platform; cloud user needs; cluster analysis method; intelligent resources; manufacturing dynamic capacity; optimal resource; resource use ratio improvement; service-oriented alliance enterprise manufacturing resource; Dynamic scheduling; Indexes; Manufacturing; Measurement; Resource management; Standards; Symmetric matrices; Cloud manufacturing; cluster analysis; manufacturing capacity; resource cluster (ID#: 16-10918)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287118&isnumber=7287059
P. Bakalov, E. Hoel, and W. L. Heng, “Time Dependent Transportation Network Models,” Data Engineering (ICDE), 2015 IEEE 31st International Conference on, Seoul, 2015, pp. 1364-1375. doi: 10.1109/ICDE.2015.7113383
Abstract: Network data models are frequently used as a mechanism to solve a wide range of problems typical of GIS applications, and of transportation planning in particular. They do this by modelling the two most important aspects of such systems: connectivity and attribution. For a long time, the attributes associated with a transportation network model, such as travel time, have been considered static. With the advancement of technology, data vendors now have the capability to capture more accurate information about street speeds at different times of the day and provide this data to customers. The network attributes are no longer static but are associated with a particular time instance (i.e., time-dependent). In this paper we describe our time-dependent network model, tailored to the needs of transportation network modelling. Our solution is based on existing database functionality (tables, joins, sorting algorithms) provided by a standard relational DBMS; it has been implemented and tested, and is currently shipped as part of the ESRI ArcGIS 10.1 platform and all subsequent releases.
Keywords: geographic information systems; relational databases; GIS applications; database functionality; network data model; relational DBMS; time dependent transportation network models; transportation planning; Algorithm design and analysis; Analytical models; Data models; Geometry; Junctions; Roads (ID#: 16-10919)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113383&isnumber=7113253
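A time-dependent attribute of the kind this abstract describes — a street's travel time varying with the time of day — can be sketched as a step function over the day. The breakpoints and values below are invented for illustration and do not reflect ESRI's actual storage schema:

```python
import bisect

# Hypothetical time-dependent edge attribute: travel time on one street,
# stored as breakpoints over the day (minutes since midnight) and the cost
# in effect from each breakpoint onward. All numbers are invented.
breakpoints = [0, 7 * 60, 9 * 60, 16 * 60, 19 * 60]
travel_min = [4.0, 9.5, 5.0, 11.0, 4.5]

def edge_cost(depart_minute):
    """Return the travel time (minutes) in effect at the departure instant."""
    minute = depart_minute % (24 * 60)               # wrap past midnight
    i = bisect.bisect_right(breakpoints, minute) - 1  # last breakpoint <= t
    return travel_min[i]

print(edge_cost(8 * 60))    # morning rush hour → 9.5
print(edge_cost(13 * 60))   # midday → 5.0
```

A time-dependent shortest-path search would evaluate `edge_cost` at the arrival time of each relaxed edge rather than reading a single static weight, which is the modelling change the paper's network model supports.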
X. Pan, C. j. He, and T. Wen, “A SOS Reliability Evaluate Approach Based on GERT,” Reliability and Maintainability Symposium (RAMS), 2015 Annual, Palm Harbor, FL, 2015, pp. 1-7. doi: 10.1109/RAMS.2015.7105085
Abstract: Since an SOS (system-of-systems) has properties of independence, connectivity, attribution, diversity, emergence, etc., traditional reliability modeling and assessment approaches are not fully suitable for an SOS. Typically, an SOS is task-oriented, and a specified task of an SOS generates a particular logic process. Therefore, the reliability of an SOS can be evaluated by assessing the SOS task process. First, this paper uses the ABM (Activity Based Methodology) method, which is based on DODAF (Department of Defense Architecture Framework), to decompose an SOS task into activity models. Second, a GERT (Graphical Evaluation and Review Technique) hierarchical stochastic network is used to describe the task process of the SOS, which is dynamic and stochastic. Third, a reliability assessment method for an SOS based on DES (Discrete Event Simulation) of the SOS task process is presented. Finally, a case study is given to illustrate the method.
Keywords: discrete event simulation; reliability; stochastic processes; ABM; DES; DODAF; Department of Defense architecture framework; GERT; SOS reliability evaluate approach; SOS tasks process; activity based methodology; graphical evaluation and review technique; hierarchical stochastic network; reliability assessment; system-of-systems; Aircraft; Atmospheric modeling; Reliability theory; Stochastic processes; system of systems (ID#: 16-10920)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7105085&isnumber=7105053
D. Regan and S. K. Srivatsa, “Adaptive Artificial Bee Colony Based Parameter Selection for Subpixel Mapping Multiagent System in Remote-Sensing Imagery,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-8. doi: 10.1109/ICIIECS.2015.7193224
Abstract: Remote sensing has become an important source of land use/cover information at a range of spatial and temporal scales. The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundance of different classes in a pixel to solve the mixed pixel problem, the subpixel spatial attribution of the pixel remains unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser, spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixel or linear subpixel, leading to incomplete and inaccurate results. To improve subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely feature detection agents, subpixel mapping agents, and decision agents, are designed to solve the subpixel mapping problem. This confirms that MASSM is appropriate for the subpixel mapping of remote-sensing images. A major remaining problem, however, is that parameter selection relies on assumptions; to overcome this, the proposed work focuses on adaptive parameter selection based on optimization methods, which automatically selects parameter values during classification and improves classification results for remote-sensing imagery. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed artificial-bee-colony-based optimization subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
Keywords: backpropagation; image classification; remote sensing; adaptive artificial bee colony based parameter selection; adaptive subpixel mapping; back-propagation neural network; boundary-mixed pixel; decision agents; feature detection agents; linear subpixel; soft classification; spatial attraction model; spectral unmixing techniques; subpixel mapping agents; subpixel mapping multiagent system; Classification algorithms; Image reconstruction; Indexes; Lead; Remote sensing; Spatial resolution; Artificial Bee Colony; Enhancement; Hyperspectral Image Sub-Pixel Mapping; Remote Sensing; Resolution; Subpixel Mapping; Super-Resolution Mapping (ID#: 16-10921)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193224&isnumber=7192777
B. Amjad, M. Hirano, and N. Minato, “Loosely Coupled SME Network in Manufacturing Industry: A Challenge in Niigata Sky Project,” Management of Engineering and Technology (PICMET), 2015 Portland International Conference on, Portland, OR, 2015, pp. 295-301. doi: 10.1109/PICMET.2015.7273172
Abstract: As the emerging nations add influence to the state of the global economy, new demand in air logistics is expected to catapult the increase in aircraft production, introducing a new horizon for potential suppliers to enter the supply chain regime. This scenario may capture the attention not only of industry institutions, but also of public entities interested in charting the economic viability of their underlying jurisdiction. With this objective, the aviation cluster incubation project “Niigata Sky Project (NSP)”, implemented by the City of Niigata, Japan, implies a new perspective on implementing a sustainable industry cluster. Present commercial aircraft production is configured on the fundamental requirement of securing a high standard of quality assurance. This prevalence has embedded a structure that requires a highly competitive supplier selection policy, a public case of which can be observed in Germany. Niigata, in this regard, demonstrates a lenient, non-competitive attribution, where suppliers’ competency is developed through their collaborative participation in the project. This case can be characterized as a “collaboratively institutionalized” cluster formation, versus the German case of a “competitively institutionalized” one. Defining the character of these distinctive practices and finding their applicable phases in the cluster’s lifecycle will provide a new view for cluster managers and SMEs in developing their organizational strengths for regional development.
Keywords: aerospace industry; quality assurance; small-to-medium enterprises; supply chain management; sustainable development; Japan; Niigata Sky Project; air logistics; aircraft production; competitively institutionalized cluster formation; economic viability; loosely coupled SME network; manufacturing industry; small-to-medium sized enterprise; supplier selection policy; supply chain regime; sustainable industry cluster; Aircraft; Cities and towns; Companies; Industries; Joints; Manufacturing; Production (ID#: 16-10922)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273172&isnumber=7272950
W. Tengteng and T. Lijun, “An Analysis to the Concentric Relaxation Vulnerability Area of Voltage Sag in Power System,” Power and Energy Engineering Conference (APPEEC), 2015 IEEE PES Asia-Pacific, Brisbane, QLD, 2015, pp. 1-5. doi: 10.1109/APPEEC.2015.7380901
Abstract: This paper presents a new definition of the vulnerability area of voltage sags: the concentric relaxation vulnerability area (CRVA), the area of voltage sags caused by a fault, with the fault point as the center. Analyzing the CRVA greatly reduces the workload of traditional voltage sag vulnerability area analysis and provides a reference for attributing liability between the power supply department and power users. Based on these conclusions, we use MATLAB’s GUI facilities to develop general-purpose software for analyzing the CRVA caused by voltage sags. We also take into account the influence of the transformer connection mode on the CRVA when the fault occurs on the low- or high-voltage side of the transformer. The algorithm enables us to derive the CRVA of the IEEE 30-bus system and to build a corresponding model in PSCAD, demonstrating by simulation the accuracy of the CRVA analysis software.
Keywords: IEEE standards; graphical user interfaces; power engineering computing; power supply quality; power system faults; power system reliability; power transformer protection; IEEE-30 bus system; MATLAB GUI; PSCAD; graphical user interface; power supply department; power system fault; transformer connection mode; transformer high-voltage; transformer low-voltage fault; voltage sag concentric relaxation vulnerability area analysis; Circuit faults; Graphical user interfaces; MATLAB; Power quality; Voltage fluctuations; concentric relaxation; short circuit fault; voltage sag; vulnerability area (ID#: 16-10923)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380901&isnumber=7380859
M. Teramoto and J. Nakamura, “Application of Applied KOTO-FRAME to the Five-Story Pagoda Aseismatic Mechanism,” 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, 2015, pp. 742-748. doi: 10.1109/ICDMW.2015.173
Abstract: We have created and are proposing KOTO-FRAME, previously called the dynamic quality function deployment (DQFD) technique, which evolved from quality function deployment (QFD). The method was applied to aseismatic mechanisms: the tacit knowledge embodied in the architecture of a five-story pagoda was given a logical structure by experimenting with a model based on the steric balancing-toy principle. Consequently, without complex calculations, we were able to define the corresponding data structure in the attribution table of experiments or evaluations, which is worth applying not only to past data but also to future data obtained via experiment or evaluation, utilizing the idea of a “market of data.”
Keywords: buildings (structures); history; knowledge management; quality function deployment; structural engineering; DQFD technique; KOTO-FRAME; dynamic quality function deployment; five-story pagoda aseismatic mechanism; steric balancing toy principle; tacit knowledge; Conferences; Data structures; Decision support systems; Floors; Quality function deployment; Stability analysis; Structural beams; MoDAT; QFD; construction; data; tacit knowledge (ID#: 16-10924)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7395742&isnumber=7395635
P. Heyer, J. J. Rivas, L. E. Sucar, and F. Orihuela-Espina, “Improving Classification of Posture Based Attributed Attention Assessed by Ranked Crowd-Raters,” Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2015 9th International Conference on, Istanbul, 2015, pp. 277-279. doi: 10.4108/icst.pervasivehealth.2015.259171
Abstract: Attribution of attention from observable body posture is plausible and provides additional information for affective computing applications. We previously reported a promising F-measure of 69.72 ± 10.50 (μ ± σ) when using posture as a proxy for attributed attentional state, with implications for affective computing applications. Here, we aim to improve that classification rate by reweighting raters’ votes, giving higher confidence to raters who are representative of the rater population. An increase to an F-measure of 75.35 ± 11.66 was achieved. The improvement in the classifier’s predictive power is welcome, and its impact is still being assessed.
Keywords: cognition; human computer interaction; learning (artificial intelligence); F-measure; posture classification; posture-based attributed attention; ranked crowd-rater; semisupervised learning; Affective computing; Head; Human computer interaction; Sensitivity; Sensors; Sociology; Statistics; adaptation; attention; neurorehabilitation; posture; semi-supervised learning (ID#: 16-10925)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7349418&isnumber=7349344
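The abstract above does not specify the reweighting rule, so the following is only a plausible sketch: each crowd-rater is weighted by how often it agrees with the per-item majority, and items are then relabeled by weighted vote. The agreement-with-majority heuristic is an assumption, not the authors' method.

```python
from collections import Counter

def rater_weights(ratings):
    """Weight each rater by how often they agree with the per-item majority,
    so raters representative of the population get more confidence.

    ratings: dict rater -> list of labels, one per rated item."""
    n_items = len(next(iter(ratings.values())))
    majority = []
    for i in range(n_items):
        votes = Counter(labels[i] for labels in ratings.values())
        majority.append(votes.most_common(1)[0][0])
    return {
        rater: sum(lab == majority[i] for i, lab in enumerate(labels)) / n_items
        for rater, labels in ratings.items()
    }

def weighted_label(ratings, weights, item):
    """Aggregate one item's votes, scaling each rater's vote by its weight."""
    scores = Counter()
    for rater, labels in ratings.items():
        scores[labels[item]] += weights[rater]
    return scores.most_common(1)[0][0]
```

The reweighted labels would then serve as ground truth for training the posture classifier whose F-measure the paper reports.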
J. Li and D. Liu, “DPSO-Based Clustering Routing Algorithm for Energy Harvesting Wireless Sensor Networks,” Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, Nanjing, 2015, pp. 1-5. doi: 10.1109/WCSP.2015.7341030
Abstract: Energy harvesting wireless sensor networks (EH-WSNs) have been widely used in various areas in recent years. Unlike battery-powered wireless sensor networks, EH-WSNs are powered by energy harvested from the ambient environment. This change calls for new routing protocol designs for EH-WSNs. This paper proposes a novel centralized clustering routing algorithm based on discrete particle swarm optimization (DPSO). The base station (BS) gathers status information from all nodes, then runs a modified DPSO algorithm to find the optimal topology for the wireless sensor network. Cluster-head election and member-node attribution are treated as a single overall problem and optimized simultaneously. Simulation results show that DPSO-based clustering routing has a stronger ability to balance energy consumption among sensor nodes in EH-WSNs and increases network throughput by 15% compared with sLEACH.
Keywords: energy harvesting; particle swarm optimisation; pattern clustering; routing protocols; telecommunication network topology; wireless sensor networks; BS; DPSO-based clustering routing algorithm; EH-WSN; base station; centralized clustering routing protocol algorithm; discrete particle swarm optimization; energy consumption; energy harvesting wireless sensor network; optimal topology; sLEACH; Algorithm design and analysis; Clustering algorithms; Energy harvesting; Optimization; Routing; Routing protocols; Wireless sensor networks (ID#: 16-10926)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7341030&isnumber=7340966
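The abstract gives the idea but not the algorithm's details; the core of a discrete (binary) PSO for this problem can be sketched as follows, with each particle a bit vector marking which nodes are elected as cluster heads and member nodes attributed to their nearest head. The fitness function and its per-head penalty are illustrative assumptions, not the paper's.

```python
import math
import random

random.seed(1)  # deterministic for the sketch

def fitness(heads, nodes):
    """Lower is better: every node joins its nearest cluster head (the
    member-node attribution), plus a mild penalty per head so the swarm
    does not simply elect every node."""
    hs = [nodes[i] for i, bit in enumerate(heads) if bit]
    if not hs:
        return float("inf")  # a topology with no head is invalid
    return sum(min(math.dist(n, h) for h in hs) for n in nodes) + 5.0 * len(hs)

def dpso(nodes, n_particles=10, iters=40):
    """Binary PSO: sigmoid of the velocity gives each bit's flip probability."""
    n = len(nodes)
    swarm = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=lambda p: fitness(p, nodes))[:]
    for _ in range(iters):
        for k, p in enumerate(swarm):
            for j in range(n):
                r1, r2 = random.random(), random.random()
                vel[k][j] += 2 * r1 * (pbest[k][j] - p[j]) + 2 * r2 * (gbest[j] - p[j])
                # sigmoid maps velocity to a probability of the bit being 1
                p[j] = 1 if random.random() < 1 / (1 + math.exp(-vel[k][j])) else 0
            if fitness(p, nodes) < fitness(pbest[k], nodes):
                pbest[k] = p[:]
            if fitness(p, nodes) < fitness(gbest, nodes):
                gbest = p[:]
    return gbest
```

In the paper's setting the BS would run such a search over reported node status and broadcast the resulting topology; an energy-balance term would replace the purely geometric fitness used here.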
A. R. Estrada and L. T. Schlemer, “Taxonomy of Faculty Assumptions About Students,” Frontiers in Education Conference (FIE), 2015 IEEE, El Paso, TX, 2015, pp. 1-5. doi: 10.1109/FIE.2015.7344036
Abstract: If you are part of a faculty meeting, a committee, or a learning community of instructors, you will sooner or later hear the same conversation: the one that begins with complaints about students. Attributions about lack of engagement, a focus on grades, or the entitlement of this generation are common, and though typically unexamined, such complaints are not completely ungrounded. This narrative creates a community around a shared “problem.” Such camaraderie is natural, but what are its consequences? Beyond whether such statements are “true,” we believe these assumptions about students affect student learning. There is a phenomenon in education known as the “self-fulfilling prophecy,” where what we believe about students becomes manifest in part because instructors behave in ways that bring about what they initially expect. As a first step in exploring these assumptions, 150 participants in a Teaching Professor Conference in May 2014 generated a list of assumptions they held about students. These assumptions were categorized into four dimensions (Motivation, Behavior, Preparation, and Systems), each forming a continuum. This paper describes the taxonomy and references theories that support the organization, gives examples of the assumptions, and discusses next steps to validate the ideas.
Keywords: teaching; faculty assumptions taxonomy; faculty meeting; instructor learning community; self-fulfilling prophecy; student learning; teaching professor conference; Context; Engineering education; Organizations; Psychology; Space exploration; Taxonomy; Faculty assumptions; psychology (ID#: 16-10927)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344036&isnumber=7344011
J. Albadarneh et al., “Using Big Data Analytics for Authorship Authentication of Arabic Tweets,” 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC), Limassol, Cyprus, 2015, pp. 448-452. doi: 10.1109/UCC.2015.80
Abstract: Authorship authentication of a text is concerned with correctly attributing the text to its author based on its contents. It is a very important problem with deep roots in history, as many classical texts have doubtful attributions. The information age and the ubiquitous use of the Internet are further complicating this problem and adding more dimensions to it. We are interested in the modern version of this problem, where the text whose authorship needs authentication is found in online social networks. Specifically, we are interested in the authorship authentication of tweets. This is not the only challenging aspect we consider here. Another is the language of the tweets: most current works and existing tools support English, whereas we chose to focus on the very important, yet largely understudied, Arabic language. Finally, we add another challenging aspect by addressing the problem at a very large scale. We present our effort to employ big data analytics to address the authorship authentication problem for Arabic tweets. We start by crawling a dataset of more than 53K tweets distributed across 20 authors. We then use preprocessing steps to clean the data and prepare it for analysis. The next step is to compute the feature vector of each tweet. We use the Bag-of-Words (BOW) approach and compute the weights using Term Frequency-Inverse Document Frequency (TF-IDF). Then, we feed the dataset to a Naive Bayes classifier implemented on Hadoop, a parallel and distributed computing framework. To the best of our knowledge, none of the previous works on authorship authentication of Arabic text addressed the unique challenges associated with (1) tweets and (2) large-scale datasets, which makes our work unique on many levels. The results show that the testing accuracy is not very high (61.6%), which is expected in the very challenging setting that we consider.
Keywords: Algorithm design and analysis; Authentication; Big data; Clustering algorithms; Electronic mail; Machine learning algorithms; Support vector machines (ID#: 16-10928)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7431455&isnumber=7431374
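The BOW/TF-IDF plus Naive Bayes pipeline the abstract describes can be sketched on a single machine (the paper runs it on Hadoop at 53K-tweet scale; the toy corpus, tokenization, and Laplace smoothing details below are illustrative assumptions):

```python
import math
from collections import Counter, defaultdict

def tfidf_vectors(docs):
    """Bag-of-Words features weighted by TF-IDF, as in the paper's feature step."""
    df = Counter()
    for d in docs:
        df.update(set(d.split()))
    n = len(docs)
    idf = {w: math.log(n / c) for w, c in df.items()}
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        total = sum(tf.values())
        vecs.append({w: (c / total) * idf[w] for w, c in tf.items()})
    return vecs, idf

def train_nb(vecs, labels):
    """Accumulate per-author word weights and priors for multinomial Naive Bayes."""
    prior = Counter(labels)
    wsum = defaultdict(Counter)
    vocab = set()
    for v, y in zip(vecs, labels):
        for w, x in v.items():
            wsum[y][w] += x
            vocab.add(w)
    return prior, wsum, vocab

def predict(vec, prior, wsum, vocab):
    """Pick the author maximizing log prior plus Laplace-smoothed,
    TF-IDF-weighted log likelihoods."""
    best, best_lp = None, -math.inf
    total = sum(prior.values())
    for y in prior:
        denom = sum(wsum[y].values()) + len(vocab)
        lp = math.log(prior[y] / total)
        for w, x in vec.items():
            lp += x * math.log((wsum[y][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

On Hadoop, the TF-IDF and per-author accumulation steps become map-reduce jobs over tweet shards, but the per-tweet computation is the same.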
![]() |
Authentication & Authorization with Privacy, 2015 (Part 1) |
Authorization and authentication are cornerstones of computer security. As systems become larger, faster, and more complex, authorization and authentication methods and protocols are proving to have limits and challenges. The research cited here explores new methods and techniques for improving security in cloud environments. This work was presented in 2015.
R. Zhang, L. Zhu, C. Xu, and Y. Yi, “An Efficient and Secure RFID Batch Authentication Protocol with Group Tags Ownership Transfer,” 2015 IEEE Conference on Collaboration and Internet Computing (CIC), Hangzhou, 2015, pp. 168-175. doi: 10.1109/CIC.2015.15
Abstract: Tag authentication is an essential issue in RFID systems, which are widely applied in many areas. Compared with per-tag authentication, batch-mode authentication performs better for complex applications. However, many existing batch authentication protocols suffer from security and privacy threats, low efficiency, or high communication and computation costs. To solve these problems, we propose a new RFID batch authentication protocol. In this protocol, tags are grouped, and each tag in a group shares the same group key. The connection between the group key and the tag’s own key is fully utilized to construct our batch authentication protocol. Based on the proposed batch authentication protocol, we also propose a group tags ownership transfer protocol that supports tag authorisation recovery. Compared with previous schemes, ours achieves stronger security and higher efficiency. On the security and privacy side, our scheme meets most requirements, such as tag information privacy, forward/backward security, and resistance against replay, tracking, and DoS attacks. We then carry out a theoretical analysis and a simulation experiment, both of which indicate that our scheme is more efficient than other authentication schemes. In particular, the simulation results show that the run time of the whole authentication process in our scheme is decreased by at least 20% compared with existing schemes.
Keywords: cryptographic protocols; data privacy; radiofrequency identification; telecommunication security; Dos attacks; RFID system; backward security; forward security; group tags ownership transfer protocol; privacy threats; secure RFID batch authentication protocol; security threats; tag authentication; tag authorisation recovery; tag information privacy; Authentication; Computer crime; Privacy; Protocols; Radiofrequency identification; Servers; Batch authentication; Ownership transfer; Privacy; RFID; Security (ID#: 16-9714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7423079&isnumber=7423045
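The abstract does not give the protocol's construction, but the "connection between the group key and the tag's own key" can be illustrated with a simple HMAC-based sketch: each tag key is derived from the group key and the tag id, so a verifier holding only the group key can check a whole batch of challenge responses. The KDF and MAC choices here are assumptions, not the paper's scheme.

```python
import hmac
import hashlib
import os

def derive_tag_key(group_key: bytes, tag_id: bytes) -> bytes:
    """Bind each tag's own key to its group key."""
    return hmac.new(group_key, tag_id, hashlib.sha256).digest()

def tag_response(tag_key: bytes, challenge: bytes) -> bytes:
    """A tag proves knowledge of its key by MACing the reader's challenge."""
    return hmac.new(tag_key, challenge, hashlib.sha256).digest()

def batch_verify(group_key: bytes, challenge: bytes, responses: dict) -> set:
    """Verify a whole batch in one pass; returns the set of authentic tag ids."""
    ok = set()
    for tag_id, resp in responses.items():
        expected = tag_response(derive_tag_key(group_key, tag_id), challenge)
        if hmac.compare_digest(expected, resp):
            ok.add(tag_id)
    return ok
```

In this simplified view, ownership transfer amounts to giving the new owner a fresh group key and re-deriving the tag keys; the paper's transfer protocol additionally covers tag authorisation recovery.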
M. F. F. Khan and K. Sakamura, “Fine-Grained Access Control to Medical Records in Digital Healthcare Enterprises,” Networks, Computers and Communications (ISNCC), 2015 International Symposium on, Hammamet, 2015, pp. 1-6. doi: 10.1109/ISNCC.2015.7238590
Abstract: Adopting IT as an integral part of business and operation is certainly making the healthcare industry more efficient and cost-effective. With the widespread digitalization of personal health information, coupled with the big data revolution and advanced analytics, security and privacy related to medical data—especially ensuring authorized access thereto—face a huge challenge. In this paper, we argue that a fine-grained approach is needed for developing access control mechanisms contingent upon various environmental and application-dependent contexts, along with provision for secure delegation of access-control rights. In particular, we propose a context-sensitive approach to access control, building on the conventional discretionary access control (DAC) and role-based access control (RBAC) models. Taking a holistic view of access control, we effectively address the precursory authentication part as well. The eTRON architecture—which advocates the use of tamper-resistant chips equipped with functions for mutual authentication and encrypted communication—is used for authentication and for implementing the DAC-based delegation of access-control rights. For authorization and access decisions, we use the RBAC model and implement context verification on top of it. Our approach closely follows the regulatory and technical standards of the healthcare domain. Evaluation of the proposed system in terms of various security and performance criteria showed promising results.
Keywords: authorisation; cryptography; health care; medical computing; message authentication; DAC-based delegation; RBAC models; access decision; advanced analytics; application-dependent contexts; authorization; big data revolution; context verification; context-sensitive approach; digital healthcare enterprises; discretionary access control models; eTRON architecture; encrypted communication; environmental contexts; fine-grained access control; healthcare industry; medical records; mutual authentication; personal health information; precursory authentication; regulatory standards; role-based access control models; technical standards; Authentication; Authorization; Context; Cryptography; Medical services; DAC; RBAC; access control; authentication; context-awareness; eTRON; healthcare enterprise; security (ID#: 16-9715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7238590&isnumber=7238567
A. Upadhyaya and M. Bansal, “Deployment of Secure Sharing: Authenticity and Authorization Using Cryptography in Cloud Environment,” Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, Ghaziabad, 2015, pp. 852-855. doi: 10.1109/ICACEA.2015.7164823
Abstract: Cloud computing is a cost-effective, scalable, and flexible model for providing network services to a range of users, including individuals and businesses, over the Internet. It has revolutionized the traditional methods of storing and sharing resources. It provides a variety of benefits to its users, such as effective and efficient use of dynamically allocated shared resources, economies of scale, and availability of resources. On the other hand, cloud computing presents security risks, because essential services are often controlled and handled by a third party, which makes it difficult to maintain data security and privacy and to support data and service availability. Since the cloud is a collection of machines, called servers, and all users’ data is stored on these machines, security issues of confidentiality, integrity, and availability emerge. Authentication and authorization for data access on the cloud are more than a necessity. Our work attempts to overcome these security challenges. The proposed methodology gives the owner more control over data stored on the cloud by restricting access to a specific user, for a specific file, with limited privileges and for a limited time period, on the basis of a secret key, using both symmetric and asymmetric mechanisms. The integrity and confidentiality of data are doubly ensured by encrypting not only the secret key but also the access permissions and limited file information.
Keywords: authorisation; cloud computing; commerce; cryptography; economies of scale; information retrieval; Internet; authenticity; authorization; availability of resources; business; cloud environment; data access; dynamically allocated shared resources; economics of scale; network services; secure sharing; Authorization; Cloud computing; Computational modeling; Computers; Cryptography; Servers; Asymmetric Cryptography; Cloud Computing; Economics of Scale; Scalability; Symmetric Cryptography (ID#: 16-9716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164823&isnumber=7164643
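The access-restriction logic the abstract describes (specific user, specific file, limited privileges, limited time, tied to a per-grant secret key) can be sketched as a small capability check; the class and field names are hypothetical, and the paper's symmetric/asymmetric key-wrapping step is only indicated by a comment, not implemented.

```python
import time
import secrets

class Grant:
    """An owner-issued capability: one user, one file, limited privileges,
    a limited validity window, and a fresh per-grant symmetric key."""
    def __init__(self, user, file_id, privileges, ttl_seconds):
        self.user = user
        self.file_id = file_id
        self.privileges = frozenset(privileges)
        self.expires = time.time() + ttl_seconds
        # In the paper's scheme this key would itself be encrypted
        # (asymmetrically) for the recipient before sharing.
        self.key = secrets.token_bytes(32)

def check(grant, user, file_id, op, now=None):
    """Allow the operation only for the named user, the named file,
    a granted privilege, and before expiry."""
    now = time.time() if now is None else now
    return (user == grant.user and file_id == grant.file_id
            and op in grant.privileges and now < grant.expires)
```

A grant failing any one of the four conditions is refused, which is the "restricting the access to specific user for specific file with limited privileges and for limited time period" of the abstract.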
X. Zhu, Y. Xu, J. Guo, X. Wu, H. Zhu, and W. Miao, “Formal Verification of PKMv3 Protocol Using DT-Spin,” Theoretical Aspects of Software Engineering (TASE), 2015 International Symposium on, Nanjing, 2015, pp. 71-78. doi: 10.1109/TASE.2015.20
Abstract: WiMax (Worldwide Interoperability for Microwave Access, IEEE 802.16) is a standards-based wireless technology that uses the Privacy Key Management (PKM) protocol to provide authentication and key management. Three versions of the PKM protocol have been released; the third version (PKMv3) strengthens security by enhancing message management. In this paper, a formal analysis of the PKMv3 protocol is presented. Both the subscriber station (SS) and the base station (BS) are modeled as processes in our framework. Discrete time describes the lifetimes of the Authorization Key (AK) and the Transmission Encryption Key (TEK), which are produced by the BS. The PKMv3 model is constructed in the discrete-time PROMELA (DT-PROMELA) language, and the tool DT-Spin implements the PKMv3 model with lifetimes. Finally, we simulate communications between the SS and BS and verify several properties, i.e., liveness, succession, and message consistency, which are extracted from PKMv3 and specified using Linear Temporal Logic (LTL) formulae and assertions. Our model provides a basis for further verification of the PKMv3 protocol with time characteristics.
Keywords: WiMax; authorisation; computer network security; cryptographic protocols; formal verification; message authentication; private key cryptography; temporal logic; AK; BS; DT-PROMELA language; DT-Spin; DT-spin; IEEE 802.16; LTL; PKM protocol; PKMv3 model; PKMv3 protocol; SS; TEK; WiMax; Worldwide Interoperability for Microwave Access; authentication; authorization key; base station; discrete-time PROMELA language; formal verification; linear temporal logic; message management; privacy key management protocol; security; standard-based wireless technology; subscriber station; third version; transmission encryption key; Authentication; Authorization; Encryption; IEEE 802.16 Standard; Protocols; Discrete-time PROMELA; modeling; verification (ID#: 16-9717)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7307736&isnumber=7307716
X. Chen, G. Sime, C. Lutteroth, and G. Weber, “OAuthHub — A Service for Consolidating Authentication Services,” Enterprise Distributed Object Computing Conference (EDOC), 2015 IEEE 19th International, Adelaide, SA, 2015, pp. 201-210. doi: 10.1109/EDOC.2015.36
Abstract: OAuth has become a widespread authorization protocol to allow inter-enterprise sharing of user preferences and data: a Consumer that wants access to a user’s protected resources held by a Service Provider can use OAuth to ask for the user’s authorization for access to these resources. However, it can be tedious for a Consumer to use OAuth as a way to organize user identities, since doing so requires supporting all Service Providers that the Consumer would recognize as users’ “identity providers”. Each Service Provider added requires extra work, at the very least, registration at that Service Provider. Different Service Providers may differ slightly in the API they offer, their authentication/authorization process or even their supported version of OAuth. The use of different OAuth Service Providers also creates privacy, security and integration problems. Therefore OAuth is an ideal candidate for Software as a Service, while posing interesting challenges at the same time. We use conceptual modelling to derive new high-level models and provide an analysis of the solution space. We address the aforementioned problems by introducing a trusted intermediary—OAuth Hub—into this relationship and contrast it with a variant, OAuth Proxy. Instead of having to support and control different OAuth providers, Consumers can use OAuth Hub as a single trusted intermediary to take care of managing and controlling how authentication is done and what data is shared. OAuth Hub eases development and integration issues by providing a consolidated API for a range of services. We describe how a trusted intermediary such as OAuth Hub can fit into the overall OAuth architecture and discuss how it can satisfy demands on security, reliability and usability.
Keywords: cloud computing; cryptographic protocols; API; OAuth service providers; OAuthHub; authentication services; authorization protocol; software as a service; Analytical models; Authentication; Authorization; Privacy; Protocols; Servers (ID#: 16-9718)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321173&isnumber=7321136
W. Ma, K. Sartipi, and M. Sharghigoorabi, “Security Middleware Infrastructure for Medical Imaging System Integration,” 2015 17th International Conference on Advanced Communication Technology (ICACT), Seoul, 2015, pp. 353-357. doi: 10.1109/ICACT.2015.7224818
Abstract: With the increasing demand for electronic medical record sharing, it is a challenge for medical imaging service providers to protect patient privacy and secure their IT infrastructure in an integrated environment. In this paper, we present a novel security middleware infrastructure for seamlessly and securely linking legacy medical imaging systems, diagnostic imaging web applications, and mobile applications. Software agents, such as user agents and security agents, have been integrated into medical imaging domains and can be trained to perform tasks. The proposed security middleware utilizes both online security technologies, such as authentication, authorization and accounting, and post-hoc security procedures to discover system security vulnerabilities. By integrating with the proposed security middleware, both legacy system users and Internet users can be uniformly identified and authenticated; access to patient diagnostic images can be controlled based on the patient’s consent directives and other access control policies defined at a central point; relevant user access activities can be audited in a central repository; and user access behaviour patterns are mined to refine existing security policies. A case study based on the proposed infrastructure is presented.
Keywords: authorisation; data privacy; medical image processing; middleware; software agents; IT infrastructure security; accounting technology; authentication technology; authorization technology; diagnostic imaging Web applications; electronic medical records; information technology; legacy medical imaging systems; medical imaging service providers; medical imaging system integration; mobile applications; patient privacy; security agent; security middleware infrastructure; software agent; system security vulnerability; user agent; Authentication; Authorization; Biomedical imaging; Middleware; Picture archiving and communication systems; Access Control; Agent; Behaviour Pattern; Medical Imaging; Security (ID#: 16-9719)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7224818&isnumber=7224736
S. Unger and D. Timmermann, “DPWSec: Devices Profile for Web Services Security,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/ISSNIP.2015.7106961
Abstract: As cyber-physical systems (CPS) build a foundation for visions such as the Internet of Things (IoT) or Ambient Assisted Living (AAL), their communication security is crucial so they cannot be abused for invading our privacy and endangering our safety. In the past years many communication technologies have been introduced for critically resource-constrained devices such as simple sensors and actuators as found in CPS. However, many do not consider security at all or in a way that is not suitable for CPS. Also, the proposed solutions are not interoperable although this is considered a key factor for market acceptance. Instead of proposing yet another security scheme, we looked for an existing, time-proven solution that is widely accepted in a closely related domain as an interoperable security framework for resource-constrained devices. The candidate of our choice is the Web Services Security specification suite. We analysed its core concepts and isolated the parts suitable and necessary for embedded systems. In this paper we describe the methodology we developed and applied to derive the Devices Profile for Web Services Security (DPWSec). We discuss our findings by presenting the resulting architecture for message level security, authentication and authorization and the profile we developed as a subset of the original specifications. We demonstrate the feasibility of our results by discussing the proof-of-concept implementation of the developed profile and the security architecture.
Keywords: Internet; Internet of Things; Web services; ambient intelligence; assisted living; security of data; AAL; CPS; DPWSec; IoT; ambient assisted living; communication security; cyber-physical system; devices profile for Web services security; interoperable security framework; message level security; resource-constrained devices; Authentication; Authorization; Cryptography; Interoperability; Web services; Applied Cryptography; Cyber-Physical Systems (CPS); DPWS; Intelligent Environments; Internet of Things (IoT); Usability (ID#: 16-9720)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106961&isnumber=7106892
V. Delgado-Gomes, J. F. Martins, C. Lima, and P. N. Borza, “Smart Grid Security Issues,” 2015 9th International Conference on Compatibility and Power Electronics (CPE), Costa da Caparica, 2015, pp. 534-538. doi: 10.1109/CPE.2015.7231132
Abstract: The smart grid concept is being fostered due to the required evolution of the power network to incorporate distributed energy sources (DES), renewable energy sources (RES), and electric vehicles (EVs). The inclusion of these components in the smart grid requires an information and communication technology (ICT) layer in order to exchange information and to control and monitor the electrical components of the smart grid. The two-way communication flow brings cyber security issues to the smart grid. Different cyber security countermeasures need to be applied to the heterogeneous smart grid according to computational resource availability, time communication constraints, and the sensitivity of the data. This paper presents the main security issues and challenges of a cyber secure smart grid, whose main objectives are confidentiality, integrity, authorization, and authentication of the exchanged data.
Keywords: authorisation; data integrity; distributed power generation; power engineering computing; power system security; renewable energy sources; smart power grids; DES; ICT; RES; computational resources availability; cyber secure smart grid; cyber security; data authentication; data authorization; data confidentiality; data integrity; distributed energy sources; electric vehicles; information and communication technology; power network evolution; renewable energy sources; smart grid security; time communication constraints; two-way communication flow; Computer security; Monitoring; NIST; Privacy; Smart grids; Smart grid; challenges; cyber security; information and communication technology (ICT) (ID#: 16-9721)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231132&isnumber=7231036
V. Oleshchuk, “Constraints Validation in Privacy-Preserving Attribute-Based Access Control,” Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2015 IEEE 8th International Conference on, Warsaw, 2015, pp. 429-431. doi: 10.1109/IDAACS.2015.7340772
Abstract: Attribute-Based Access Control (ABAC) has been found to be extremely useful and flexible and has drawn much research in recent years. It has been observed that, in the context of new emerging applications, attributes play an increasingly important role both in defining and in enforcing more elaborate and flexible security policies. Recently, NIST proposed a more formal definition of ABAC. In this paper we discuss a general privacy-preserving ABAC model (which combines both authentication and authorization) and propose an approach to handling constraints in such a privacy-preserving setting.
Keywords: authorisation; constraint handling; data privacy; message authentication; NIST; authentication; authorization; constraints handling; constraints validation; general privacy-preserving ABAC model; privacy-preserving attribute-based access control; security policies; Authentication; Authorization; Context; Privacy; ABAC; access control; attributes; constraints; credentials; privacy; pseudonyms; security (ID#: 16-9722)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7340772&isnumber=7340677
Patil Madhubala R., “Survey on Security Concerns in Cloud Computing,” Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, Noida, 2015, pp. 1458-1462. doi: 10.1109/ICGCIoT.2015.7380697
Abstract: The cloud consists of a vast number of servers and contains tremendous amounts of information. Cloud computing faces various problems, such as storage and bandwidth, environment problems like availability, heterogeneity, and scalability, and security problems like reliability and privacy. Though many efforts have been made to solve these problems, some security problems remain [1]. Ensuring the security of this data is an important issue in cloud storage. Cloud computing security can be defined as the broad set of technologies, policies, and controls deployed to protect applications, data, and the corresponding infrastructure of cloud computing. Due to tremendous progress in technology, providing security for customers' data becomes more and more important. This paper explains the need for a third-party auditor in cloud security and gives a brief overview of the security threats in cloud computing. It analyzes the various security objectives, such as confidentiality, integrity, authentication, auditing, accountability, availability, and authorization, and also studies various data security concerns, such as reconnaissance techniques, denial of service, account cracking, hostile and self-replicating code, system or network penetration, buffer overflow, and SQL injection attacks.
Keywords: cloud computing; security of data; storage allocation; cloud computing security; cloud storage; Cloud computing; Computer crime; Data privacy; Reconnaissance; Servers; Data security concerns; Security objectives; Third party audit; cloud computing (ID#: 16-9723)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380697&isnumber=7380415
S. Fugkeaw and H. Sato, “Privacy-Preserving Access Control Model for Big Data Cloud,” 2015 International Computer Science and Engineering Conference (ICSEC), Chiang Mai, 2015, pp. 1-6. doi: 10.1109/ICSEC.2015.7401416
Abstract: Due to the proliferation of advanced analytic applications built on a massive scale of data from several data sources, big data technology has emerged to shift the paradigm of data management. Big data management is usually taken into data outsourcing environment such as cloud computing. According to the outsourcing environment, security and privacy management becomes one of the critical issues for business decision. Typically, cryptographic-based access control is employed to support privacy-preserving authentication and authorization for data outsourcing scenario. In this paper, we propose a novel access control model combining Role-based Access Control (RBAC) model, symmetric encryption, and ciphertext attribute-based encryption (CP-ABE) to support fine-grained access control for big data outsourced in cloud storage systems. We also demonstrate the efficiency and performance of our proposed scheme through the implementation.
Keywords: Big Data; authorisation; cloud computing; cryptography; data privacy; message authentication; outsourcing; CP-ABE; RBAC model; advanced analytic applications; authorization; big data cloud; ciphertext attribute-based encryption; cloud storage systems; cryptographic-based access control; data management; data outsourcing environment; fine-grained access control; privacy-preserving authentication; role-based access control model; symmetric encryption; Access control; Big data; Cloud computing; Data models; Encryption; Access Control; Cloud Computing; Encryption; RBAC (ID#: 16-9724)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7401416&isnumber=7401392
H. Graupner, K. Torkura, P. Berger, C. Meinel, and M. Schnjakin, “Secure Access Control for Multi-Cloud Resources,” Local Computer Networks Conference Workshops (LCN Workshops), 2015 IEEE 40th, Clearwater Beach, FL, 2015, pp. 722-729. doi: 10.1109/LCNW.2015.7365920
Abstract: Privacy, security, and trust concerns continuously hinder the growth of cloud computing despite its attractive features. To mitigate these concerns, an emerging approach targets the use of multi-cloud architectures to achieve portability and reduce cost. Multi-cloud architectures, however, suffer several challenges, including inadequate cross-provider APIs, insufficient support from cloud service providers, and especially non-unified access control mechanisms. Consequently, the available multi-cloud proposals are unhandy or insecure. This paper makes two contributions. First, we survey existing cloud storage provider interfaces. Then, we propose a novel technique that addresses the challenges of connecting modern authentication standards and multiple cloud authorization methods.
Keywords: authorisation; cloud computing; data privacy; storage management; trusted computing; cloud computing; cloud storage provider interfaces; inadequate cross-provider APIs; modern authentication standards; multicloud resources; multiple cloud authorization methods; nonunified access control mechanisms; privacy; secure access control; security; trust concerns; Access control; Authentication; Cloud computing; Containers; Google; Standards; Cloud storage; access control management; data security; multi-cloud systems (ID#: 16-9725)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7365920&isnumber=7365758
W. Zhijun and W. Caiyun, “Security-as-a-Service in Big Data of Civil Aviation,” Computer and Communications (ICCC), 2015 IEEE International Conference on, Chengdu, 2015, pp. 240-244. doi: 10.1109/CompComm.2015.7387574
Abstract: In recent years, the civil aviation industry has achieved rapid development. It produces a large amount of data in the process, including confidential data related to the civil aviation system, critical information about the industry's development, and large amounts of personal privacy data. Civil aviation network security issues in the big data environment have become increasingly prominent. This paper proposes a data protection and privacy-preserving services architecture based on civil aviation security data, with authentication through OpenSSL identity and attribute-based authorization. The resulting policy achieves access control over big data and ensures the security of civil aviation big data.
Keywords: Big Data; aerospace industry; authorisation; data privacy; OpenSSL identity; access control; attribute-based authorization; civil aviation big data security; civil aviation industry development; civil aviation network security; confidential data; data protection; development process; personal privacy data; privacy preserving services architecture; security-as-a-service; Authentication; Authorization; Big data; Ciphers; Protocols; Servers; Civil Aviation; OpenSSL; attribute; big data (ID#: 16-9726)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387574&isnumber=7387523
Y.-k. Lee, J.-d. Lim, Y.-s. Jeon, and J.-n. Kim, “Technology Trends of Access Control in IoT and Requirements Analysis,” Information and Communication Technology Convergence (ICTC), 2015 International Conference on, Jeju, 2015, pp. 1031-1033. doi: 10.1109/ICTC.2015.7354730
Abstract: Since IoT devices can cause problems such as invasion of privacy and threats to our safety, security is the most important element in IoT. IoT is an environment in which various devices communicate with one another without user intervention, or with minimal user intervention. Therefore, authentication and access control technologies between IoT devices are important elements of IoT security. In this paper, we survey access control techniques in the IoT environment and their requirements.
Keywords: Internet of Things; authorisation; data privacy; formal specification; IoT devices; IoT security; access control; authentication; minimal user intervention; privacy invasion; requirements analysis; safety threat; security threat; Access control; Consumer electronics; Context; Internet; Market research; Servers; IoT security; IoT(Internet of Things); access control in IoT (ID#: 16-9727)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354730&isnumber=7354472
C. Jiang, Y. Pang, and A. Wu, “A Novel Robust Image-Hashing Method for Content Authentication,” Security and Privacy in Social Networks and Big Data (SocialSec), 2015 International Symposium on, Hangzhou, 2015, pp. 22-27. doi: 10.1109/SocialSec2015.15
Abstract: Image hash functions find extensive application in content authentication, database search, and digital forensic. This paper develops a novel robust image-hashing method based on genetic algorithm (GA) and Back Propagation (BP) Neural Network for content authentication. Lifting wavelet transform is used to extract image low frequency coefficients to create the image feature matrix. A GA-BP network model is constructed to generate image-hashing code. Experimental results demonstrate that the proposed hashing method is robust against random attack, JPEG compression, additive Gaussian noise, and so on. Receiver operating characteristics (ROC) analysis over a large image database reveals that the proposed method significantly outperforms other approaches for robust image hashing.
Keywords: Gaussian noise; authorisation; backpropagation; cryptography; data compression; genetic algorithms; image coding; neural nets; sensitivity analysis; wavelet transforms; GA-BP network model; JPEG compression; ROC; additive Gaussian noise; back propagation neural network; content authentication; database search; digital forensic; genetic algorithm; image database; image feature matrix; image hash functions; image low frequency coefficients extract; image-hashing code; lifting wavelet transform; receiver operating characteristics analysis; robust image-hashing method; Authentication; Feature extraction; Genetic algorithms; Robustness; Training; Wavelet transforms; BP network; discrimination; genetic algorithm; image hash (ID#: 16-9728)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371895&isnumber=7371823
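The GA-BP pipeline in this paper is fairly involved, but the underlying robust-hashing idea — map an image to a short code that stays stable under benign distortions such as compression or mild noise — can be illustrated with a much simpler baseline: an average hash over a small grayscale grid. The sketch below is a generic baseline, not the authors' method; the 8×8 grid is a common convention, and the input is assumed to be already downsampled to that size.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grid of grayscale values (0-255)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per cell: 1 if the cell is brighter than the image mean.
    bits = "".join("1" if p > mean else "0" for p in flat)
    return int(bits, 2)

def hamming_distance(h1, h2):
    """Number of differing bits; small distances indicate perceptually similar images."""
    return bin(h1 ^ h2).count("1")

# A synthetic 8x8 "image" and a uniformly brightened copy hash to the same code,
# because the per-pixel comparison against the mean cancels the global shift.
img = [[(r * 32 + c * 4) % 256 for c in range(8)] for r in range(8)]
noisy = [[min(255, p + 3) for p in row] for row in img]
print(hamming_distance(average_hash(img), average_hash(noisy)))  # → 0
```

Authentication then amounts to thresholding the Hamming distance between a stored hash and the hash of the received image; the wavelet features and GA-BP network in the paper replace this crude mean comparison with learned, more discriminative features.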
V. Beltran and E. Bertin, “Identity Management for Web Business Communications,” Intelligence in Next Generation Networks (ICIN), 2015 18th International Conference on, Paris, 2015, pp. 103-107. doi: 10.1109/ICIN.2015.7073814
Abstract: WebRTC brings a wide range of possibilities to corporate communications. Nevertheless, the Web nature of this disruptive technology makes it necessary to deeply study its integration into the protected, closed corporate networks. In particular, Identity Management (IdM) in WebRTC communications should comply with each enterprise’s security and privacy policies. We discuss the key differences between the WebRTC identity model and typical enterprise IdM.
Keywords: Internet; business communication; data privacy; security of data; Web real-time communication; WebRTC communications; WebRTC identity model; corporate communications; enterprise IdM; enterprise privacy policies; enterprise security policies; identity management; protected closed corporate networks; Authentication; Authorization; Business communication; Next generation networking; Protocols; WebRTC; Communications; Enterprise; Identity; Service webification; WebRTC (ID#: 16-9729)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073814&isnumber=7073795
A. Soceanu, M. Vasylenko, A. Egner, and T. Muntean, “Managing the Privacy and Security of eHealth Data,” 2015 20th International Conference on Control Systems and Computer Science, Bucharest, 2015, pp. 439-446. doi: 10.1109/CSCS.2015.76
Abstract: The large scale adoption of mobile medicine, supported by an increasing number of medical devices and remote access to health services, correlated with the continuous involvement of the patients in their own healthcare, led to the emergence of tremendous amounts of clinical data. They need to be securely transferred, archived and accessed. This paper refers to a new approach for protecting the privacy and security of clinical data through the use of a state of the art encryption scheme and attribute-based access control authorization framework. As personal medical records are often used by different entities (e.g. Doctors, pharmacists, nurses, etc.), there is a need for different degrees of authorization access for specific parts of the personal dossier. Appropriate cryptographic tools are presented for allowing partial visibility and valid protection on authorized parts for hierarchical privacy protection of eHealth data. The encryption process relies on ARCANA, a security platform developed at ERISCS research laboratory from University Aix-Marseille. It provides the appropriate cryptographic tools for secure hierarchical access to healthcare data. This ensures that the access of various entities to the healthcare data is accurately and hierarchically controlled. The access control framework used in this research is based on XACML, a standard access control decision model specified by OASIS. The applicability and feasibility of XACML-based policies to regulate the access to patient data are demonstrated through SAFAX. SAFAX is a new public authorization framework developed by the Eindhoven University of Technology tested among others on eHealth case studies, in cooperation with Munich University of Applied Sciences. 
It is envisioned that the use of data encryption and public authorization solutions to regulate access control over patients' clinical data will have a big impact on patients' trust in electronic healthcare systems and will speed up their large-scale adoption.
Keywords: authorisation; cryptography; data privacy; health care; ARCANA; ERISCS research laboratory; Eindhoven University of Technology; OASIS; University Aix-Marseille; XACML-based policies; attribute-based access control authorization framework; clinical data privacy; clinical data security; cryptographic tools; ehealth data; encryption scheme; health services; healthcare; medical devices; mobile medicine; public authorization solutions; remote access; Authentication; Authorization; Data privacy; Medical services; Standards; ABAC; Patient Consent; Privacy; Security; XACML; eHealth; incremental cryptography (ID#: 16-9730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168466&isnumber=7168393
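For readers unfamiliar with XACML, the kind of attribute-based policy that frameworks like SAFAX evaluate can be sketched as a minimal XACML 3.0 policy. This fragment is hand-written for illustration and is not taken from the paper; the `urn:example:role` attribute identifier and the doctor/read scenario are invented for the sketch.

```xml
<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
        PolicyId="ehealth-doctor-read"
        Version="1.0"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit">
  <Target/>
  <Rule RuleId="permit-doctor-read" Effect="Permit">
    <Target>
      <AnyOf>
        <AllOf>
          <!-- Permit when the requesting subject carries the role attribute "doctor". -->
          <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">doctor</AttributeValue>
            <AttributeDesignator
                Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                AttributeId="urn:example:role"
                DataType="http://www.w3.org/2001/XMLSchema#string"
                MustBePresent="false"/>
          </Match>
        </AllOf>
      </AnyOf>
    </Target>
  </Rule>
</Policy>
```

A policy decision point evaluates requests against such rules and returns Permit or Deny; the deny-unless-permit combining algorithm makes denial the default, which fits the hierarchical, least-privilege access to clinical data described above.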
W. Zegers, S. Y. Chang, Y. Park, and J. Gao, “A Lightweight Encryption and Secure Protocol for Smartphone Cloud,” Service-Oriented System Engineering (SOSE), 2015 IEEE Symposium on, San Francisco Bay, CA, 2015, pp. 259-266. doi: 10.1109/SOSE.2015.47
Abstract: User data on mobile devices are always transferred into the Cloud for flexible and location-independent access to services and resources. The issues of data security and data privacy have often been deferred to contractual partners and trusted third parties. In fact, to protect data, data encryption and user authentication are fundamental requirements between mobile devices and the Cloud before a data transfer. However, due to the limited resources of smartphones and users' unawareness of security, data encryption has been the last priority on mobile devices, and authentication between two entities always depends on a trusted third party. In this paper, we propose a lightweight encryption algorithm and a security handshaking protocol for use specifically between mobile devices and the Cloud, with the intent of securing data on the user side before it is migrated to cloud storage. The proposed cryptographic scheme and security protocol make use of unique device-specific identifiers and user-supplied credentials, aiming at a user-oriented approach for the smartphone Cloud. Through experiments, we demonstrate that the proposed cryptographic scheme requires less power consumption on mobile devices.
Keywords: authorisation; cloud computing; cryptographic protocols; data privacy; smart phones; cloud storages; contractual partners; cryptographic scheme; data encryption; data security; data transfer; lightweight encryption algorithm; location-independent access; mobile devices; privacy data; project data; secure protocol; security handshaking protocol; security protocol; smart phone cloud; trusted third party; user authentication; user data; Authentication; Encryption; Mobile communication; Protocols; Smart phones; Android; Cloud; Cryptography; Mobile devices and smartphones; Security (ID#: 16-9731)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133539&isnumber=7133490
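The abstract says keys are derived from device-specific identifiers and user-supplied credentials but doesn't spell out the construction. A standard way to realize that pattern is a password-based KDF over both inputs; the sketch below uses PBKDF2-HMAC-SHA256 from the Python standard library, and the IMEI-style identifiers, salt handling, and iteration count are illustrative assumptions rather than the authors' parameters.

```python
import hashlib
import os

def derive_device_key(password: str, device_id: str, salt: bytes,
                      iterations: int = 200_000) -> bytes:
    """Derive a 256-bit symmetric key bound to both a user credential and a device."""
    material = (password + "|" + device_id).encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", material, salt, iterations, dklen=32)

salt = os.urandom(16)  # stored alongside the ciphertext, not secret
k_phone = derive_device_key("correct horse battery", "IMEI-356938035643809", salt)
k_other = derive_device_key("correct horse battery", "IMEI-004999010640000", salt)
assert k_phone != k_other  # same credential, different device => different key
```

Encrypting on the device under such a key before upload matches the paper's stated goal of securing data on the user side prior to migration to cloud storage, without relying on a trusted third party for the key.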
C. Jin, C. Xu, L. Jiang, and F. Li, “ID-Based Deniable Threshold Ring Authentication,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1779-1784. doi: 10.1109/HPCC-CSS-ICESS.2015.149
Abstract: Deniable threshold ring authentication allows at least t members of a group of participants to authenticate a message m without revealing which t members generated the authenticator, and the verifier cannot convince any third party that the message m is authenticated. It can be applied in anonymous and privacy-sensitive communication scenarios. In this paper, we present a non-interactive identity-based deniable threshold ring authentication scheme. Our scheme is provably secure in the random oracle model under the bilinear Diffie-Hellman assumption. To the best of our knowledge, it is the first identity-based deniable threshold ring authentication scheme. It is also very efficient, since it requires only one pairing operation in the authentication phase and one pairing operation in the verification phase, regardless of the size of the ring.
Keywords: authorisation; cryptography; message authentication; ID-based deniable threshold ring authentication; anonymous communication scenario; authentication phase; bilinear Diffie-Hellman assumption; group members; noninteractive identity-based deniable threshold ring authentication scheme; pairing operation; privacy communication scenario; random oracle model; verification phase; Authentication; Computer science; Games; Polynomials; Public key; Receivers; Anonymous; Deniable threshold ring authentication; Identity-based cryptography; Privacy; Random oracle model (ID#: 16-9732)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336429&isnumber=7336120
A. S. Raja and S. Abd Razak, “Analysis of Security and Privacy in Public Cloud Environment,” Cloud Computing (ICCC), 2015 International Conference on, Riyadh, 2015, pp. 1-6. doi: 10.1109/CLOUDCOMP.2015.7149630
Abstract: Computing as a utility is a long-held dream that has come true in the form of the evolutionary paradigm known as cloud computing. It provides gigantic storage with ubiquitous platform access and minimal hardware requirements at the user end. Its ultimate features and multidisciplinary utilization have made its future incontestable and equally attractive in academia and industry. With the immense growth in the area, security concerns are rising proportionally. Cloud users can only relish the maximum advantage of cloud computing if the security and privacy concerns inherent in storing sensitive and personally identifiable information (PII) in the cloud are categorically addressed. To provide flexible user authentication and preserve user privacy, digital identity management services are vital. Anonymous authentication, revocation, unlinkability, and delegation of authentication for multiple cloud services are obligatory user privacy parameters that need to be addressed through identity management services in the cloud. In this paper, we analyze the existing work and emphasize the need for a user-privacy-preserving identity management system for the public cloud environment.
Keywords: authorisation; cloud computing; data privacy; PII; anonymous authentication; authentication delegation; cloud user; minimal hardware requirement; obligatory user privacy parameters; privacy analysis; public cloud environment; revocation; security analysis; sensitive-personal identifiable information storage; ubiquitous platform access; unlinkability; user privacy digital identity management service preservation; Authentication; Cloud computing; Computational modeling; Organizations; Privacy; Smart cards (ID#: 16-9733)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149630&isnumber=7149613
M. Ahmadi, M. Chizari, M. Eslami, M. J. Golkar, and M. Vali, “Access Control and User Authentication Concerns in Cloud Computing Environments,” Telematics and Future Generation Networks (TAFGEN), 2015 1st International Conference on, Kuala Lumpur, 2015, pp. 39-43. doi: 10.1109/TAFGEN.2015.7289572
Abstract: Cloud computing is a relatively new service that has grown rapidly in the IT industry in recent years. Despite the several advantages of this technology, there are issues such as security and privacy that affect the reliability of cloud computing models. Access control and user authentication are the most important security issues in cloud computing. This survey therefore provides an overview of these security concerns and specific details about the issues identified in access control and user authentication research. The first part explains the benefits and disadvantages of cloud computing. The second part reviews several access control and user authentication algorithms, identifying the benefits and weaknesses of each. The main aim of this survey is to consider the limitations and problems of previous research in order to identify the most challenging issues in access control and user authentication algorithms.
Keywords: authorisation; cloud computing; data privacy; IT industry; access control; cloud computing environment; cloud computing model; privacy; security concerns; security issues; user authentication algorithm; Access control; Authentication; Cloud computing; Computational modeling; Encryption; Servers; Access Control; Cloud Computing; Privacy; Security; User Authentication (ID#: 16-9734)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289572&isnumber=7289553
S. A. El-Booz, G. Attiya, and N. El-Fishawy, “A Secure Cloud Storage System Combining Time-based One Time Password and Automatic Blocker Protocol,” 2015 11th International Computer Engineering Conference (ICENCO), Cairo, 2015, pp. 188-194. doi: 10.1109/ICENCO.2015.7416346
Abstract: Cloud storage in cloud data centers can be useful for enterprises and individuals to store and access their data remotely anywhere, anytime, without any additional burden. By outsourcing data, users can be relieved of the burden of local data storage and maintenance. However, the major problem with cloud data storage is security. As data is stored in geographically distributed data centers, how will users get confirmation that their data is stored correctly? Moreover, cloud users must be able to use cloud storage just like local storage, without worrying about the need to verify data integrity and data consistency. Some research has been conducted with the aid of a Third Party Auditor (TPA) to verify the data stored in the cloud and ensure that it has not been tampered with. However, the TPA is leased by the provider, and after a time the cloud service provider may contract with the TPA to conceal the loss of data from the user to prevent defamation. This paper presents a novel secure cloud storage system that protects organizations' data from both the cloud provider and the third party auditor, as well as from users who take advantage of old accounts to access data stored on the cloud. The proposed system enhances the authentication level of security by using two authentication techniques: a Time-based One Time Password (TOTP) for cloud user verification and an Automatic Blocker Protocol (ABP) to fully protect the system from an unauthorized third party auditor. The experimental results demonstrate the effectiveness and efficiency of the proposed system when auditing shared data integrity.
Keywords: authorisation; cloud computing; storage management; ABP; TPA; authentication techniques; automatic blocker protocol; cloud data centers; cloud storage system; cloud users verification; data maintenance; third party auditor; time-based one time password; Authentication; Contracts; Cryptography; Automatic Blocker Protocol (ABP); Cloud Computing; One Time Password (OTP); Privacy Preserving; Third Party Auditor (TPA); public auditability (ID#: 16-9735)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7416346&isnumber=7416313
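The TOTP mechanism this paper builds on is standardized in RFC 6238 and fits in a few lines of Python; the sketch below is the generic HMAC-SHA-1 variant, not the paper's system, and the secret shown is the ASCII test secret from the RFC rather than anything deployment-worthy.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA-1 variant)."""
    counter = int(for_time) // step                    # time steps since the Unix epoch
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s, 8 digits.
print(totp(b"12345678901234567890", 59, digits=8))     # → 94287082
# A live code for the current moment:
print(totp(b"12345678901234567890", int(time.time())))
```

Because the code changes every time step and is derived from a shared secret, a stolen password alone is not enough to log in; the ABP side of the paper's design is protocol-specific and is not sketched here.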
A. A. Malik, H. Anwar, and M. A. Shibli, “Federated Identity Management (FIM): Challenges and Opportunities,” 2015 Conference on Information Assurance and Cyber Security (CIACS), Rawalpindi, 2015, pp. 75-82. doi: 10.1109/CIACS.2015.7395570
Abstract: Federated Identity Management (FIM) is a method that facilitates management of identity processes and policies among the collaborating entities. It also enables secure resource sharing among these entities, but it hasn’t been as widely adopted as expected. So, in this paper we have identified factors that are pivotal for a holistic FIM framework or model. These factors include trust management and trust establishment techniques, preservation of user privacy, consistent access rights across Circles of Trust (CoTs), continuous monitoring of collaborating entities and adaptation to unanticipated events. On the basis of these factors, we have presented an extensive comparative analysis on existing FIM frameworks and models that identify current challenges and areas of improvement in this field. We’ve also analyzed these frameworks and models against a set of attacks to gauge their strengths and weaknesses.
Keywords: authorisation; data privacy; trusted computing; CoT; FIM framework; access right; circles of trust; entity collaboration; federated identity management; trust establishment; trust management; user privacy preservation; Adaptation models; Authentication; Metadata; Organizations; Privacy; Runtime; Adaptation to Unanticipated Events; Centralized/ Distributed trust management; Circle of Trust (CoT); Consistent Access Rights across CoTs; Continuous Trust Monitoring; Federated Identity Management (FIM); Static/ Dynamic trust establishment; User Privacy (ID#: 16-9736)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7395570&isnumber=7395552
P. Dzurenda, J. Hajny, V. Zeman, and K. Vrba, “Modern Physical Access Control Systems and Privacy Protection,” Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, Prague, 2015, pp. 1-5. doi: 10.1109/TSP.2015.7296213
Abstract: The paper deals with current state of card based PAC (Physical Access Control) systems, especially their level of security and provided mechanisms for protecting users’ privacy. We propose to use ABCs (Attribute-Based Credentials) to create Privacy-PAC system that provides greater protection of user privacy compared to classic systems. We define basic requirements for Privacy-PAC and provide a comparison of the current ABC systems by their usability in Privacy-PAC. Moreover, we show performance benchmarks of cryptographic primitives used in ABCs which were implemented on Multos and Java Card platforms.
Keywords: Java; authorisation; cryptography; data privacy; user interfaces; ABC; Java Card platforms; Multos platforms; Privacy-PAC system; attribute-based credentials; cryptographic primitives; modern physical access control systems; privacy protection; users privacy; Access control; Authentication; Ciphers; Privacy; Protocols; Privacy; anonymity; cryptography; physical access; security (ID#: 16-9737)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296213&isnumber=7296206
Authentication & Authorization with Privacy, 2015 (Part 2)
Authorization and authentication are cornerstones of computer security. As systems become larger, faster, and more complex, authorization and authentication methods and protocols are proving to have limits and challenges. The research cited here explores new methods and techniques for improving security in cloud environments. This work was presented in 2015.
K. Yang, D. Forte and M. M. Tehranipoor, “Protecting Endpoint Devices in IoT Supply Chain,” Computer-Aided Design (ICCAD), 2015 IEEE/ACM International Conference on, Austin, TX, 2015, pp. 351-356. doi: 10.1109/ICCAD.2015.7372591
Abstract: The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. Existing hardware, software, and network protection methods, however, are designed against only a fraction of real security issues and lack the capability to trace the provenance and history of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in the IoT supply chain. We take advantage of the connection between the RFID tag and the control chip in an IoT device to enable data transfer from tag memory to a centralized database for authentication once deployed. Finally, we evaluate the security of our proposed scheme against various attacks.
Keywords: Internet of Things; authorisation; data privacy; production engineering computing; radiofrequency identification; supply chain management; Internet infrastructure; Internet of things; IoT device authentication; IoT supply chain; RFID tag; RFID-enabled solution; centralized database; communication efficiency; control chip; data transfer; endpoint device protection; privacy challenges; secure access control; security challenges; security issues; uniquely identifiable embedded computing devices; Authentication; Hardware; Internet; Privacy; Radiofrequency identification; Supply chains; Authentication; Endpoint Device; Internet of Things (IoT); Supply Chain Security; Traceability (ID#: 16-9738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372591&isnumber=7372533
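The abstract does not disclose the authentication protocol, but tag-to-database authentication over per-tag secrets provisioned at manufacture is commonly realized as an HMAC challenge-response. The sketch below is a generic illustration of that pattern, not the authors' scheme; the tag ID, key store, and message layout are invented for the example.

```python
import hashlib
import hmac
import os

# Centralized database of per-tag secrets, provisioned during manufacture (illustrative).
TAG_KEYS = {"tag-001": b"per-tag-secret-provisioned-at-manufacture"}

def issue_challenge() -> bytes:
    """Verifier picks a fresh random nonce so responses cannot be replayed."""
    return os.urandom(16)

def tag_response(tag_id: str, key: bytes, challenge: bytes) -> bytes:
    """Tag proves knowledge of its key by MACing its identity and the challenge."""
    return hmac.new(key, tag_id.encode() + challenge, hashlib.sha256).digest()

def verify(tag_id: str, challenge: bytes, response: bytes) -> bool:
    """Database recomputes the expected MAC and compares in constant time."""
    key = TAG_KEYS.get(tag_id)
    if key is None:
        return False
    expected = hmac.new(key, tag_id.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

c = issue_challenge()
r = tag_response("tag-001", TAG_KEYS["tag-001"], c)
print(verify("tag-001", c, r))  # → True
```

The fresh nonce defeats replay of captured tag responses, and an unknown or cloned tag ID without the provisioned secret fails verification, which is the property the supply-chain traceability argument above relies on.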
M. Guerar, M. Migliardi, A. Merlo, M. Benmohammed and B. Messabih, “A Completely Automatic Public Physical Test to Tell Computers and Humans Apart: A Way to Enhance Authentication Schemes in Mobile Devices,” High Performance Computing & Simulation (HPCS), 2015 International Conference on, Amsterdam, 2015, pp. 203-210. doi: 10.1109/HPCSim.2015.7237041
Abstract: Nowadays, data security is one of the most - if not the most important aspects in mobile applications, web and information systems in general. On one hand, this is a result of the vital role of mobile and web applications in our daily life. On the other hand, though, the huge, yet accelerating evolution of computers and software has led to more and more sophisticated forms of threats and attacks which jeopardize user's credentials and privacy. Today's computers are capable of automatically performing authentication attempts replaying recorded data. This fact has brought the challenge of access control to a whole new level, and has urged the researchers to develop new mechanisms in order to prevent software from performing automatic authentication attempts. In this research perspective, the Completely Automatic Public Turing test to tell Computers and Humans Apart (CAPTCHA) has been proposed and widely adopted. However, this mechanism consists of a cognitive intelligence test to reinforce traditional authentication against computerized attempts, thus it puts additional strain on the legitimate user too and, quite often, significantly slows the authentication process. In this paper, we introduce a Completely Automatic Public Physical test to tell Computers and Humans Apart (CAPPCHA) as a way to enhance PIN authentication scheme for mobile devices. This test does not introduce any additional cognitive strain on the user as it leverages only his physical nature. We prove that the scheme is even more secure than CAPTCHA and our experiments show that it is fast and easy for users.
Keywords: Turing machines; authorisation; cognition; data privacy; mobile computing; CAPPCHA; CAPTCHA; Completely Automatic Public Physical test to tell Computers and Humans Apart; PIN authentication scheme; Web systems; authentication schemes; cognitive intelligence test; completely automatic public Turing test to tell computers and humans apart; completely automatic public physical test; information systems; mobile applications; mobile devices; user credentials; user privacy; Authentication; CAPTCHAs; Computers; Sensors; Smart phones (ID#: 16-9739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237041&isnumber=7237005
D. ElMenshawy, “Touchscreen Patterns Based Authentication Approach for Smart Phones,” Science and Information Conference (SAI), 2015, London, 2015, pp. 1311-1315. doi: 10.1109/SAI.2015.7237312
Abstract: Recently, smart phones have been used not only for communication, but also for storing confidential and business information. As a result, the theft or hacking of a mobile phone can lead to disastrous implications, such as intrusion of privacy and monetary loss. In this paper, the application of different biometric features for authentication in smart phones is presented. Also, the differences between various touchscreen patterns in terms of data capturing and template creation are shown. In the experiments, device orientation and speed are used to demonstrate the effectiveness and efficiency of using biometrics for authentication in smart phones. Two applications were implemented to collect the different biometric features. After that, the k-means clustering technique was applied to the collected data and the accuracy was measured. The main conclusion is that biometrics related to touch behavior are feasible for authenticating users.
Keywords: authorisation; biometrics (access control); computer crime; data privacy; message authentication; mobile computing; pattern clustering; smart phones; touch sensitive screens; biometric features; business information storing; confidential information storing; data capturing; device orientation; device speed; hacking; k means clustering technique; mobile phone; monetary loss; privacy intrusion; smart phones; template creation; theft; touchscreen patterns based authentication approach; Accuracy; Authentication; Sensor phenomena and characterization; Smart phones; Authentication; Biometrics; Security; Smart Phones; Touchscreen (ID#: 16-9740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237312&isnumber=7237120
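The abstract above leaves the clustering step at a high level; a minimal, self-contained sketch of the general idea, with hypothetical feature values, cluster count and acceptance threshold not taken from the paper, might look like:

```python
import math, random

def dist(p, q):
    # Euclidean distance between two feature vectors (Python 3.8+)
    return math.dist(p, q)

def kmeans(points, k, iters=50):
    # Minimal k-means: learn k behaviour clusters from enrollment samples.
    random.seed(0)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist(p, centroids[j]))].append(p)
        for j, c in enumerate(clusters):
            if c:
                centroids[j] = tuple(sum(v) / len(c) for v in zip(*c))
    return centroids

# Hypothetical enrollment samples: (device tilt in degrees, swipe speed in px/ms)
enroll = [(12.0, 1.1), (11.5, 1.0), (13.0, 1.2), (40.0, 3.0), (41.0, 2.9)]
centroids = kmeans(enroll, k=2)

def authenticate(sample, centroids, threshold=2.0):
    # Accept if the sample lies close enough to a learned behaviour cluster.
    return min(dist(sample, c) for c in centroids) < threshold

print(authenticate((12.2, 1.05), centroids))   # matches enrolled behaviour
print(authenticate((80.0, 9.0), centroids))    # anomalous sample is rejected
```

A real system would of course use many more behavioural features and a tuned threshold rather than the toy values above.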
S. C. Patel, R. S. Singh and S. Jaiswal, “Secure and Privacy Enhanced Authentication Framework for Cloud Computing,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 1631-1634. doi: 10.1109/ECS.2015.7124863
Abstract: Cloud computing is a revolution in information technology. Cloud consumers outsource their sensitive data and personal information to cloud providers' servers, which are not within the same trusted domain as the data owner, so the most challenging issues arising in the cloud are data security, user privacy and access control. In this paper we propose a method to achieve fine-grained security with a combined approach of PGP and Kerberos in cloud computing. The proposed method provides authentication, confidentiality, integrity, and privacy features to cloud service providers and cloud users.
Keywords: authorisation; cloud computing; data integrity; data privacy; outsourcing; personal information systems; sensitivity; trusted computing; Kerberos approach; PGP approach; access control; authentication features; cloud computing; cloud consumer; cloud provider servers; cloud service providers; cloud users; confidentiality features; data security user privacy; data-owner; information technology; integrity features; personal information outsourcing; privacy enhanced authentication framework; privacy features; secure authentication framework; sensitive data outsourcing; Access control; Authentication; Cloud computing; Cryptography; Privacy; Servers; Cloud computing; Kerberos; Pretty Good Privacy; access control; authentication; privacy; security (ID#: 16-9741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124863&isnumber=7124722
F. Yao, S. Y. Yerima, B. J. Kang and S. Sezer, “Event-Driven Implicit Authentication for Mobile Access Control,” Next Generation Mobile Applications, Services and Technologies, 2015 9th International Conference on, Cambridge, 2015, pp. 248-255. doi: 10.1109/NGMAST.2015.47
Abstract: In order to protect user privacy on mobile devices, an event-driven implicit authentication scheme is proposed in this paper. Several methods of utilizing the scheme for recognizing legitimate user behavior are investigated. The investigated methods compute an aggregate score and a threshold in real-time to determine the trust level of the current user using real data derived from user interaction with the device. The proposed scheme is designed to: operate completely in the background, require minimal training period, enable high user recognition rate for implicit authentication, and prompt detection of abnormal activity that can be used to trigger explicitly authenticated access control. In this paper, we investigate threshold computation through standard deviation and EWMA (exponentially weighted moving average) based algorithms. The result of extensive experiments on user data collected over a period of several weeks from an Android phone indicates that our proposed approach is feasible and effective for lightweight real-time implicit authentication on mobile smartphones.
Keywords: authorisation; data privacy; human computer interaction; message authentication; mobile computing; mobile radio; moving average processes; telecommunication security; trusted computing; EWMA; abnormal activity detection; aggregate score; event-driven implicit authentication scheme; explicitly authenticated access control; exponentially weighted moving average based algorithms; legitimate user behavior recognition; mobile access control; mobile devices; standard deviation; threshold computation; trust level; user interaction; user privacy protection; user recognition rate; Aggregates; Authentication; Browsers; Context; History; Mobile handsets; Training; behavior-based authentication; implicit authentication (ID#: 16-9742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373251&isnumber=7373199
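As background on the EWMA-based thresholding the abstract mentions, a minimal sketch follows; the smoothing factor, deviation multiplier and behaviour scores are illustrative assumptions, not the paper's parameters:

```python
import math

class ImplicitAuthMonitor:
    """Sketch of an EWMA-based trust check: flag a behaviour score that
    deviates from the exponentially weighted moving average by more than
    k standard deviations. Parameters are illustrative only."""

    def __init__(self, alpha=0.2, k=3.0):
        self.alpha, self.k = alpha, k
        self.mean = None      # EWMA of past scores
        self.var = 0.0        # EWMA estimate of score variance

    def observe(self, score):
        # Returns True if the behaviour score looks legitimate.
        if self.mean is None:                 # first observation bootstraps
            self.mean = score
            return True
        dev = score - self.mean
        ok = abs(dev) <= self.k * math.sqrt(self.var) + 1e-9 or self.var == 0.0
        # EWMA updates of mean and variance (a real system might skip the
        # update when the observation is rejected).
        self.mean = self.mean + self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return ok

m = ImplicitAuthMonitor()
for s in [0.9, 0.88, 0.92, 0.91]:
    m.observe(s)              # legitimate interaction scores build the model
print(m.observe(0.90))        # consistent behaviour -> True
print(m.observe(0.10))        # abrupt drop -> False, trigger explicit auth
```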
D. Goyal and M. B. Krishna, “Secure Framework for Data Access Using Location Based Service in Mobile Cloud Computing,” 2015 Annual IEEE India Conference (INDICON), New Delhi, 2015, pp. 1-6. doi: 10.1109/INDICON.2015.7443761
Abstract: Mobile Cloud Computing (MCC) extends the services of cloud computing with respect to mobility in the cloud and the user device. MCC offloads computation and storage to the cloud, since mobile devices are resource constrained with respect to computation, storage and bandwidth. A task can be partitioned to offload different sub-tasks to the cloud and achieve better performance. Security and privacy are the primary factors that enhance the performance of MCC applications. In this paper we present a security framework for data access using a Location-Based Service (LBS) that acts as an additional layer in the authentication process. Users having valid credentials at a location within the organization are enabled as authenticated users.
Keywords: authorisation; cloud computing; data privacy; message authentication; mobile computing; resource allocation; LBS; MCC; data access; data privacy; location based service; mobile cloud computing; security framework; task partitioning; user authentication process; Cloud computing; Mobile communication; Mobile computing; Organizations; Public key; Cloud Computing; Encryption; Geo-encryption; Location-based Service; Mobile Cloud Computing; Security in MCC (ID#: 16-9743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7443761&isnumber=7443105
N. W. Lo, C. K. Yu and C. Y. Hsu, “Intelligent Display Auto-Lock Scheme for Mobile Devices,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 48-54. doi: 10.1109/AsiaJCIS.2015.30
Abstract: In recent years people in modern societies have relied heavily on their intelligent mobile devices such as smartphones and tablets to get personal services and improve work efficiency. In consequence, quick and simple authentication mechanisms, along with energy-saving considerations, are generally adopted by these smart handheld devices, such as screen auto-lock schemes. When a smart device activates its screen lock mode to protect user privacy and data security on the device, its screen auto-lock scheme is executed at the same time. The device user can set up the length of the time period that controls when the screen lock mode of a smart device is activated. However, this causes inconvenience for device users when a short time period is set for invoking the screen auto-lock. How to strike a balance between security and convenience for individual users of smart devices has become an interesting issue. In this paper, an intelligent display (screen) auto-lock scheme is proposed for mobile users. It can dynamically adjust the unlock time period setting of an auto-lock scheme based on knowledge derived from past user behaviors.
Keywords: authorisation; data protection; display devices; human factors; mobile computing; smart phones; authentication mechanisms; data security; energy saving; intelligent display auto-lock scheme; intelligent mobile devices; mobile users; personal services; screen auto-lock schemes; smart handheld devices; tablets; unlock time period; user behaviors; user convenience; user privacy protection; user security; work efficiency improvement; Authentication; IEEE 802.11 Standards; Mathematical model; Smart phones; Time-frequency analysis; Android platform; display auto-lock; smartphone (ID#: 16-9744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153935&isnumber=7153836
I. M. Abumuhfouz and K. Lim, “Protecting Vehicular Cloud Against Malicious Nodes Using Zone Authorities,” SoutheastCon 2015, Fort Lauderdale, FL, 2015, pp. 1-2. doi: 10.1109/SECON.2015.7132956
Abstract: A vehicle's resources, sensors and in-vehicle technologies allow it to collect data about its status, driver, surrounding vehicles, roads, etc. Recently, the Vehicular Cloud (VC) has emerged as a promising technology that utilizes vehicles' underutilized devices and their data as a main source of decisions for clients such as intelligent transportation systems, automakers, third-party applications, business companies and others. However, malicious nodes may take advantage of the VC's weak protection and present threats to its data, resources and services. We study the problem of malicious nodes in the VC and propose a secure framework that leverages key management and revocation mechanisms to protect the VC against malicious nodes. The framework uses multiple zone authorities, where each one controls a zone (area) consisting of road-side units (RSUs), vehicles and the clients in that zone. Each zone authority works as a gateway that authenticates the operations of that zone, controls the services requested and the data flow, and preserves the privacy of the vehicles, the clients and the cloud entities involved. Revocation mechanisms are used to generate revoked lists of malicious vehicles and clients, implemented using skip lists. The framework efficiently prevents malicious nodes from using the vehicular cloud in a light, secure and efficient way.
Keywords: authorisation; cloud computing; data privacy; VC; in-vehicle technologies; keys management mechanism; malicious nodes protection; revocation mechanism; vehicle devices; vehicle resources; vehicle sensors; vehicular cloud protection; zone authority; Authentication; Cloud computing; Companies; Safety; Sensors; Vehicles; Malicious nodes; Revocation; Security in Vehicular cloud; Skip list; Vehicular cloud (ID#: 16-9745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7132956&isnumber=7132866
B. A. Delail and C. Y. Yeun, “Recent Advances of Smart Glass Application Security and Privacy,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 65-69. doi: 10.1109/ICITST.2015.7412058
Abstract: Recent developments in technology have led to new emerging wearable devices such as Smart Glasses. Usually, consumer electronics devices are designed for the benefits and functionalities they can provide, and the security aspect is incorporated later. At its core, a Smart Glass is based on components available in modern smartphones, including the CPU, sensors and operating system, and thus shares similar security threats. This paper identifies security threats and privacy issues of the Smart Glass from two different perspectives and proposes preliminary solutions to overcome such risks. These include a suggested two-factor authentication for smart glasses based on a PIN or voice combined with an iris scan. The purpose of this work is to examine the current state of the art of smart glass applications, and to analyse existing and potential security and privacy issues. Therefore, we investigate issues from a system perspective and from the user's point of view. In addition, we survey existing solutions for each issue.
Keywords: authorisation; data privacy; operating systems (computers); smart phones; wearable computers; CPU; Iris scan; PIN; consumer electronics devices; operating system; privacy issues; security threats; sensors; smart glass application privacy; smart glass application security; smartphones; two-factor authentication; wearable devices; Authentication; Cameras; Glass; Iris recognition; Privacy; Smart phones; Augmented Reality; Internet of Things; Privacy; Security; Smart Glass; Wearable Technology (ID#: 16-9746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412058&isnumber=7412034
S. V. Baghel and D. P. Theng, “A Survey for Secure Communication of Cloud Third Party Authenticator,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 51-54. doi: 10.1109/ECS.2015.7124959
Abstract: Cloud computing is an information technology in which users can remotely store their outsourced data so as to enjoy on-demand high-quality applications and services from configurable resources. By outsourcing their data, users can be relieved of the burden of local data storage and protection. Thus, enabling publicly available auditability for cloud data storage is important, so that users can check data integrity through an external audit party. To securely introduce an effective third-party auditor (TPA), two primary requirements have to be met: (1) the TPA should be able to audit outsourced data without demanding a local copy of the user's outsourced data; (2) the TPA process should not bring in new threats to user data privacy. To achieve these goals, this system provides a solution that uses Kerberos as a third-party auditor/authenticator, the RSA algorithm for secure communication, the MD5 algorithm to verify data integrity, data centers for storing data on the cloud in an effective manner within a secured environment, and multilevel security for the database.
Keywords: authorisation; cloud computing; computer centres; data integrity; data protection; outsourcing; public key cryptography; MD5 algorithm; RSA algorithm; TPA; cloud third party authenticator; data centers; data outsourcing; external audit party; information data exchange; information technology; local data protection; local data storage; multilevel security; on demand high quality application; on demand services; secure communication; third party auditor; user data privacy; user outsourced data; Algorithm design and analysis; Authentication; Cloud computing; Heuristic algorithms; Memory; Servers; Cloud Computing; Data center; Multilevel database; Public Auditing; Third Party Auditor (ID#: 16-9747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124959&isnumber=7124722
V. V. Kumar and A. Murugavel, “Ensuring Consistency File Authentication over Encrypted Files in the Cloud,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-5. doi: 10.1109/ICIIECS.2015.7192941
Abstract: Cloud security has become the biggest research issue in today's world, where access to file content stored in the cloud needs to be limited to authorized users only, and the content also needs to be protected from the cloud servers on which it is stored. In the existing method, a protocol is introduced based on Deterministic Finite Automata (DFA) authentication, which allows clients to verify whether the server's behaviour involves any malicious activity. However, this approach lacks support for file updates, making it complex to update file contents that are modified and accessed by clients concurrently. In this work, the fork consistency approach is integrated with the existing protocol to achieve a consistency property.
Keywords: cloud computing; cryptography; finite automata; user interfaces; DFA authentication; cloud security; deterministic finite automata; encrypted files; file authentication; file content access; user authorization; Cloud computing; Conferences; Encryption; Protocols; Servers; BLS scheme; Data privacy; Deterministic Finite Automaton; fork consistency; paillier encryption (ID#: 16-9748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7192941&isnumber=7192777
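The abstract does not reproduce the DFA-based verification protocol itself; as generic background, deterministic finite automaton acceptance, the building block it names, can be sketched as (the automaton below is a hypothetical toy example, not the cited scheme):

```python
def dfa_accepts(transitions, start, accepting, symbols):
    # Run a DFA given as a (state, symbol) -> state transition table;
    # an undefined transition rejects the input.
    state = start
    for s in symbols:
        state = transitions.get((state, s))
        if state is None:
            return False
    return state in accepting

# Toy DFA over {a, b}: accepts exactly the strings ending in 'b'
T = {("q0", "a"): "q0", ("q0", "b"): "q1",
     ("q1", "a"): "q0", ("q1", "b"): "q1"}

print(dfa_accepts(T, "q0", {"q1"}, "aab"))  # ends in 'b' -> True
print(dfa_accepts(T, "q0", {"q1"}, "aba"))  # ends in 'a' -> False
```

In the cited protocol the automaton would additionally be evaluated over encrypted inputs, which this plain sketch omits.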
Y. Iso and T. Saito, “A Proposal and Implementation of an ID Federation that Conceals a Web Service from an Authentication Server,” 2015 IEEE 29th International Conference on Advanced Information Networking and Applications, Gwangju, 2015, pp. 347-351. doi: 10.1109/AINA.2015.205
Abstract: Recently, it is becoming more common for a website to authenticate its users with an external identity provider by using OpenID Authentication or the Security Assertion Markup Language. However, such authentication schemes tell the identity provider where the user is going. Consequently, an identity provider can, for instance, track its users and refuse access to services offered by competitors. In this paper, we propose an authentication method whereby an identity provider cannot track users.
Keywords: Web services; XML; authorisation; ID Federation; OpenID authentication; Web service; authentication method; authentication server; identity provider; security assertion markup language; Authentication; Browsers; Cryptography; Privacy; Servers; Uniform resource locators; Federated identity; OpenID; Single Sign-On (ID#: 16-9749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097990&isnumber=7097928
C. Pavlovski, C. Warwar, B. Paskin and G. Chan, “Unified Framework for Multifactor Authentication,” Telecommunications (ICT), 2015 22nd International Conference on, Sydney, NSW, 2015, pp. 209-213. doi: 10.1109/ICT.2015.7124684
Abstract: The progression towards the use of mobile network devices in all facets of personal, business and leisure activity has created new threats to users and challenges to the industry to preserve security and privacy. Whilst mobility provides a means for interacting with others and accessing content in an easy and malleable way, these devices are increasingly being targeted by malicious parties in a variety of attacks. In addition, web technologies and applications are supplying more functions and capabilities that attract users to social media sites, e-shopping malls, and to managing finances (banking). The primary mechanism for authentication still employs a username-and-password approach. This is often extended with additional (multifactor) authentication tools such as one-time identifiers, hardware tokens, and biometrics. In this paper we discuss the threats, risks and challenges of user authentication and present techniques to counter these problems with several patterns and approaches. We then outline a framework for supplying these authentication capabilities to the industry based on a unified authentication hub.
Keywords: Internet; authorisation; mobile computing; Web applications; Web technologies; authentication capabilities; e-shopping malls; finance management; mobile network devices; multifactor authentication tool; password based approach; social media sites; unified authentication hub; user authentication; username based approach; Authentication; Banking; Biometrics (access control); Business; Mobile communication; Mobile handsets; mobile networks; multifactor authentication; security; unified threat management (ID#: 16-9750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124684&isnumber=7124639
X. Li, Z. Zheng and X. Zhang, “A Secure Authentication and Key Agreement Protocol for Telecare Medicine Information System,” Next Generation Mobile Applications, Services and Technologies, 2015 9th International Conference on, Cambridge, 2015, pp. 275-281. doi: 10.1109/NGMAST.2015.75
Abstract: The Telecare Medicine Information System (TMIS) enables patients to get healthcare services at home expediently and efficiently. Authentication and key agreement protocols suited for TMIS protect patients' privacy over the insecure network. Recently, numerous protocols have been proposed that intend to safeguard the communication between patients and the server. However, most of them have high computation overhead and security problems. In this paper, we propose a secure and effective authentication and key agreement protocol using a smartcard and password for TMIS. The protocol is based on elliptic curve cryptography. Through security analysis we illustrate that our protocol is secure against several known attacks and provides user anonymity. Furthermore, by comparing it with other related protocols we show that our protocol is superior in security and performance.
Keywords: authorisation; cryptographic protocols; data privacy; health care; medical information systems; public key cryptography; telemedicine; Telecare Medicine Information System; elliptic curve cryptography; healthcare services; key agreement protocol; password; patient privacy protection; secure authentication; security analysis; smartcard; Authentication; Elliptic curve cryptography; Elliptic curves; Medical services; Protocols; Servers; authentication; elliptic curve cryptography; key agreement; smartcard (ID#: 16-9751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373255&isnumber=7373199
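The abstract omits the protocol details; as background, the Diffie-Hellman-style key agreement that elliptic-curve protocols of this kind build on can be sketched over a textbook toy curve. The parameters below are illustrative only; a real TMIS deployment would use a standardized curve and add the smartcard/password authentication the paper describes:

```python
# Toy ECDH-style key agreement on the textbook curve y^2 = x^3 + 2x + 2 (mod 17),
# whose group has prime order 19. Illustrative only.
p, a, b = 17, 2, 2
G = (5, 1)                    # base point

def add(P, Q):
    # Elliptic-curve point addition; None represents the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        s = (3 * P[0] ** 2 + a) * pow(2 * P[1], p - 2, p) % p
    else:
        s = (Q[1] - P[1]) * pow(Q[0] - P[0], p - 2, p) % p
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

d_patient, d_server = 13, 7                                # hypothetical private keys
Q_patient, Q_server = mul(d_patient, G), mul(d_server, G)  # exchanged public points
# Both sides derive the same shared point without revealing private keys:
print(mul(d_patient, Q_server) == mul(d_server, Q_patient))  # True
```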
S. Senthilkumar, M. Viswanatham and M. Vinothini, “HS-TBAC a Highly Secured Token Based Access Control for Outsourced Data in Cloud,” International Confernce on Innovation Information in Computing Technologies, Chennai, 2015, pp. 1-3. doi: 10.1109/ICIICT.2015.7396082
Abstract: Cloud computing is a recently developed internet-based computing paradigm in which a range of services such as data storage, application deployment, servers, etc. are delivered over the internet. On request, the cloud allocates these services through the internet. An important quality-of-service feature in cloud computing is a secured way of protecting information over the internet. The cloud service provider (CSP) should afford security for the data stored and applications developed over the cloud in terms of privacy and access control. In this paper, we offer an enhanced framework built on token generation (TG), mutual authentication and cryptography. We include a secured authentication mechanism between the consumer and the Registration Authority (RA) in our scheme. Secondly, a Token Generator (TG) is utilized for issuing tokens to consumers for accessing their resources from the cloud depending upon their access privileges.
Keywords: authorisation; cloud computing; cryptography; CSP; HS-TBAC scheme; RA; access control; cloud service provider; cryptography; data privacy; data security; highly secured token based access control; information protection; mutual authentication; quality of service; registration authority; secured authentication mechanism; token generation; Access control; Authentication; Cloud computing; Cryptography; Generators; Servers; Cloud Service Provider; Cryptography; Mutual authentication; Registered Authority (RA); Token Generator (TG) (ID#: 16-9752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396082&isnumber=7396045
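As a generic illustration of HMAC-signed token issuance and verification in the spirit of the Token Generator described above — the key, claim names and lifetime below are hypothetical, and the cited scheme's actual token format is not given in the abstract:

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"   # hypothetical key shared by RA and Token Generator

def issue_token(consumer_id, privileges, ttl=300):
    # Token Generator: bind identity and access privileges, sign with HMAC.
    body = json.dumps({"sub": consumer_id, "priv": privileges,
                       "exp": int(time.time()) + ttl}, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def verify_token(token):
    # Returns the claims if the signature checks out and the token is fresh.
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(body)
    return claims if claims["exp"] > time.time() else None

t = issue_token("alice", ["read"])
print(verify_token(t)["priv"])      # the granted privileges are recovered
print(verify_token(t[:-1] + "x"))   # tampered signature -> None
```

A real RA/TG pair would additionally tie token issuance to the mutual-authentication step and rotate the signing key.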
A. Skarmeta, J. L. Hernández-Ramos and J. Bernal Bernabe, “A Required Security and Privacy Framework for Smart Objects,” ITU Kaleidoscope: Trust in the Information Society (K-2015), 2015, Barcelona, 2015, pp. 1-7. doi: 10.1109/Kaleidoscope.2015.7383648
Abstract: The large scale deployment of the Internet of Things (IoT) increases the urgency to adequately address trust, security and privacy issues. We need to see the IoT as a collection of smart and interoperable objects that are part of our personal environment. These objects may be shared among or borrowed from users. In general, they will have only temporal associations with their users and their personal identities. These temporary associations need to be considered while at the same time taking into account security and privacy aspects. In this work, we discuss a selection of current activities being carried out by different standardization bodies for the development of suitable technologies to be deployed in IoT environments. Based on such technologies, we propose an integrated design to manage security and privacy concerns through the lifecycle of smart objects. The presented approach is framed within our ARM-compliant security framework, which is intended to promote the design and development of secure and privacy-aware IoT-enabled services.
Keywords: Internet of Things; data privacy; trusted computing; ARM-compliant security framework; IoT; architectural reference model; privacy framework; smart object; trust issue; Authentication; Authorization; Biological system modeling; Ecosystems; Object recognition; Privacy; Security; Trust (ID#: 16-9753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7383648&isnumber=7383613
K. S. Sang and B. Zhou, “BPMN Security Extensions for Healthcare Process,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 2340-2345. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.346
Abstract: The modelling of healthcare processes is inherently complicated due to their multi-disciplinary character. Business Process Model and Notation (BPMN) has been considered and applied to model and demonstrate the flexibility and variability of the activities involved in healthcare processes. However, with the growing usage of digital information and IoT technology in healthcare systems, information security and privacy have become the main concern in terms of both the storage and the management of electronic health records (EHR). Therefore, it is very important to capture the security requirements at the conceptual level in order to identify the security needs in the first place. BPMN lacks the ability to model and present security concepts such as confidentiality, integrity, and availability in a suitable way. This increases the vulnerability of the system and makes the future development of security for the system more difficult. In this paper we provide a solution for modelling security concepts in BPMN by extending it with newly designed security elements, which can be integrated smoothly with the BPMN diagram.
Keywords: business data processing; data integrity; data privacy; electronic health records; health care; security of data; BPMN diagram; BPMN security extensions; EHR data management; EHR data storage; IoT technology; business process model and notation; data availability; data confidentiality; digital information; electronic health record; health care process modelling; information privacy; information security; system vulnerability; Authentication; Authorization; Business; Medical services; Standards; BPMN; Healthcare; Internet of Things; Security Requirement (ID#: 16-9754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363392&isnumber=7362962
T. Ignatenko, “Biometrics in Claim-Based Authentication Framework,” 2015 IEEE 16th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Stockholm, 2015, pp. 385-389. doi: 10.1109/SPAWC.2015.7227065
Abstract: Nowadays biometric authentication and identification are becoming part of everyday life. As a result, many services emerge that rely on biometric-based access control. However, from a user's point of view, supplying biometric information to many different service providers imposes high privacy and security risks. In this paper we focus on the use of biometrics in a claim-based authentication framework. The framework allows for a limited number of identity providers, thus overcoming the need to supply biometric data to multiple untrustworthy service providers. We argue that in this case we can deploy the information-theoretic framework for privacy-preserving biometric systems for joint authentication and identification studied in Willems and Ignatenko [2010].
Keywords: authorisation; biometrics (access control); data privacy; biometric authentication; biometric identification; biometric information; biometric-based access control; claimed-based authentication framework; identity providers; information-theoretic framework; multiple untrustworthy service; privacy risks; privacy-preserving biometric systems; security risks; service providers; Authentication; Bioinformatics; Biometrics (access control); Databases; Joints; Privacy; Biometric authentication; claim-based authentication; privacy; trust (ID#: 16-9755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7227065&isnumber=7226983
J. L. Fernández-Alemán, A. B. Sánchez García, G. García-Mateos and A. Toval, “Technical Solutions for Mitigating Security Threats Caused by Health Professionals in Clinical Settings,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 1389-1392. doi: 10.1109/EMBC.2015.7318628
Abstract: The objective of this paper is to present a brief description of technical solutions for health information system security threats caused by inadequate security and privacy practices in healthcare professionals. A literature search was carried out in ScienceDirect, ACM Digital Library and IEEE Digital Library to find papers reporting technical solutions for certain security problems in information systems used in clinical settings. A total of 17 technical solutions were identified: measures for password security, the secure use of e-mail, the Internet, portable storage devices, printers and screens. Although technical safeguards are essential to the security of healthcare organization's information systems, good training, awareness programs and adopting a proper information security policy are particularly important to prevent insiders from causing security incidents.
Keywords: authorisation; digital libraries; health care; medical computing; medical information systems; professional aspects; security of data; ACM Digital Library; IEEE Digital Library; Internet; ScienceDirect; e-mail; health information system security threats; health professionals; healthcare organization information systems; healthcare professionals; mitigating security threats; password security; portable storage devices; technical safeguards; Authentication; Cryptography; Information systems; Medical services; Printers; Privacy (ID#: 16-9756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7318628&isnumber=7318236
A. Gholami, J. Dowling and E. Laure, “A Security Framework for Population-Scale Genomics Analysis,” High Performance Computing & Simulation (HPCS), 2015 International Conference on, Amsterdam, 2015, pp. 106-114. doi: 10.1109/HPCSim.2015.7237028
Abstract: Biobanks store genomic material from identifiable individuals. Recently many population-based studies have started sequencing genomic data from biobank samples and cross-linking the genomic data with clinical data, with the goal of discovering new insights into disease and clinical treatments. However, the use of genomic data for research has far-reaching implications for privacy and the relations between individuals and society. In some jurisdictions, primarily in Europe, new laws are being or have been introduced to legislate for the protection of sensitive data relating to individuals, and biobank-specific laws have even been designed to legislate for the handling of genomic data and the clear definition of roles and responsibilities for the owners and processors of genomic data. This paper considers the security questions raised by these developments. We introduce a new threat model that enables the design of cloud-based systems for handling genomic data according to privacy legislation. We also describe the design and implementation of a security framework using our threat model for BiobankCloud, a platform that supports the secure storage and processing of genomic data in cloud computing environments.
Keywords: authorisation; bioinformatics; cloud computing; data protection; legislation; BiobankCloud; Europe; biobank-specific laws; clinical data; clinical treatment; cloud computing environments; cloud-based systems; diseases; genomic data cross-linking; genomic data handling; genomic data processing; genomic data sequencing; genomic material storage; population-scale genomic analysis; privacy legislation; security framework; sensitive data protection; threat model; Authentication; Bioinformatics; Computational modeling; Data privacy; Genomics; Privacy; Access Control; Cloud Computing; Security (ID#: 16-9757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237028&isnumber=7237005
T. S. Fatayer and K. A. A. Timraz, “MLSCPC: Multi-Level Security Using Covert Channel to Achieve Privacy Through Cloud Computing,” Computer Networks and Information Security (WSCNIS), 2015 World Symposium on, Hammamet, 2015, pp. 1-6. doi: 10.1109/WSCNIS.2015.7368307
Abstract: A cloud provider performs central processing with shared resources to serve vendors on demand over the Internet, and the cloud model is divided into a deployment model and a service model. In this paper, we address the service model, and privacy in particular: privacy allows customers to ensure that their data in the cloud cannot be accessed by unauthorized persons. We implement a novel approach, multi-level security using a covert channel to achieve privacy through cloud computing (which we coin MLSCPC), that enables a cloud provider to protect user information from unauthorized persons. We use a covert channel to enable the cloud provider and customers to communicate in the presence of unauthorized parties (e.g., attackers). We have developed a prototype of the approach to evaluate its security, feasibility and performance. Our results show that the cloud provider can grant customers resources in acceptable time and with low computation overhead. Moreover, attackers do not affect the approach: the cloud provider can still grant resources to customers in their presence.
Keywords: authorisation; cloud computing; data protection; MLSCPC; cloud provider; covert channel; data privacy; deployment model; feasibility evaluation; multilevel security; performance evaluation; security evaluation; service model; unauthorized persons; user information protection; Cloud computing; Computational modeling; Data privacy; Privacy; Security; Authentication; Cloud Computing; Covert Channel; Time; Traffic; privacy; security (ID#: 16-9758)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7368307&isnumber=7368275
N. Bruce, Y. J. Kang, M. Sain and H. J. Lee, “An Approach to Designing a Network Security-Based Application for Communications Safety,” 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Paris, 2015, pp. 1002-1009. doi: 10.1145/2808797.2808808
Abstract: The evolution of digital communication includes both applications and the devices running them. In this context, specific applications are needed to safeguard communication, ensuring protection and trust services for users. The growing need to address privacy concerns when social network data is released for mining purposes has recently led to considerable interest in various network security-based applications. In this paper, we develop an efficient and adaptive network security-based application to ensure privacy and data integrity across channel communications. The approach is designed on a clustering configuration model of the involved members: cluster members interact with the cluster leader for data exchange, and the leader sends the data to the base station. This scheme provides a strong mutual authentication framework suited to real heterogeneous wireless applications. In addition, we contribute a mathematical analysis of delay and optimization in a node-based clustering topology, and we compare the scheme with existing ones in terms of standard security services. Finally, the performance of the scheme is evaluated in terms of computation and communication overhead; the results show that the framework is efficient and can safeguard network security-based applications.
Keywords: authorisation; computer network security; data mining; data privacy; optimisation; pattern clustering; social networking (online); trusted computing; channel communications; cluster leader; cluster members; clustering topology node; communications safety; data exchange; data integrity; digital communication; heterogeneous wireless applications; mathematical analysis; mining purposes; mutual authentication framework; network security-based application; optimization analysis; privacy concerns; safeguard communication; security standards; social network data; trust services; Authentication; Chlorine; Manganese; Protocols; Servers; Wireless sensor networks; algorithms; applications; mutual authentication; network security; protocols (ID#: 16-9759)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7403668&isnumber=7403513
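The mutual-authentication idea in the abstract above — cluster members proving their identity to a cluster leader before exchanging data — can be illustrated with a generic nonce-based challenge-response over a shared symmetric key. This is a hedged sketch of the general technique, not the authors' protocol; the `Node` class, key sizes, and message layout are illustrative assumptions.

```python
import hmac
import hashlib
import os

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

class Node:
    """A node sharing a symmetric key with its peer (e.g. a cluster leader)."""
    def __init__(self, node_id: bytes, shared_key: bytes):
        self.node_id = node_id
        self.key = shared_key

    def challenge(self) -> bytes:
        # A fresh nonce prevents replay of old responses.
        self.nonce = os.urandom(16)
        return self.nonce

    def respond(self, peer_nonce: bytes):
        # Prove knowledge of the key over both nonces and our identity.
        my_nonce = os.urandom(16)
        tag = hmac_sha256(self.key, peer_nonce + my_nonce + self.node_id)
        return my_nonce, tag

    def verify(self, peer_nonce: bytes, peer_id: bytes, tag: bytes) -> bool:
        expected = hmac_sha256(self.key, self.nonce + peer_nonce + peer_id)
        return hmac.compare_digest(expected, tag)

key = os.urandom(32)
leader = Node(b"leader", key)
member = Node(b"member", key)

c = leader.challenge()        # leader -> member: challenge nonce
n, t = member.respond(c)      # member -> leader: fresh nonce + HMAC tag
assert leader.verify(n, b"member", t)
```

For full mutual authentication the leader would answer the member's nonce with a tag of its own in the same way; only one direction is shown to keep the sketch short.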
![]() |
Big Data Security Issues in the Cloud 2015 |
Big data security in the Cloud is a growing area of interest for cybersecurity researchers. The work presented here ranges from cyber-threat detection in critical infrastructures to privacy protection. This work was presented in 2015.
S. K. Madria, “Tutorial I: Security and Privacy of Big Data in a Cloud Environment,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. XXXV-XXXVI. doi: 10.1109/INNOVATIONS.2015.7381498
Abstract: Summary form only given. Security and privacy of big data is a primary concern for many applications. For example, in the case of smart meters, consumers' data must be protected or private information can be leaked. Similarly, owing to cost efficiency, reduced management overhead and dynamic resource needs, content owners are outsourcing their data to the cloud, which can act as a service provider on their behalf. However, by outsourcing their data to the cloud, owners may lose access control and privacy over that data, as the cloud becomes a third party. By using these data storage services, data owners can relieve the burden of local data storage and maintenance. Since data owners and the cloud servers are not in the same trusted domain, however, the outsourced data may be at risk, as the cloud server may no longer be fully trusted; data integrity is therefore of critical importance. The cloud should let owners, or a trusted third party, check the integrity of their data storage without demanding a local copy of the data. Owners often replicate their data on cloud servers across multiple data centers to provide a higher level of scalability, availability, and durability, but they need to be strongly convinced that the cloud is storing the data copies agreed on in the service-level contract and that data updates have been correctly executed on all the remotely stored copies. This tutorial explores some of these problems. Topics covered include: security and privacy issues in big data management, secure data processing and access control of big data in the cloud, data integrity verification of big data in the cloud, and security and privacy of sensing data for big data applications.
Keywords: Big Data; authorisation; cloud computing; data privacy; Big Data privacy; Big Data security; access control; cloud environment; data availability; data durability; data integrity; data maintenance; data outsourcing; data scalability; data storage; sensing data; Big data; Cloud computing; Computer science; Data privacy; Memory; Security; Servers (ID#: 16-9808)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381498&isnumber=7381480
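The integrity-verification problem the tutorial raises — letting an owner check outsourced data without keeping a local copy — can be sketched with keyed per-block tags and random spot checks. This is a simplified illustration of the idea, not any scheme from the tutorial: real proof-of-retrievability constructions use homomorphic tags and error-correcting codes, and the block size, key size, and sampling rate below are arbitrary assumptions.

```python
import hmac
import hashlib
import os
import random

BLOCK = 4096

def tag(key: bytes, index: int, block: bytes) -> bytes:
    # Bind each tag to its block index so blocks cannot be swapped.
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

# Owner side: split the file, keep only the key and the tags locally.
key = os.urandom(32)
data = os.urandom(BLOCK * 10)
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
tags = [tag(key, i, b) for i, b in enumerate(blocks)]

# Cloud side stores the blocks; the owner can discard the file itself.
cloud_store = list(blocks)

def audit(sample: int = 3) -> bool:
    """Spot-check random blocks without holding a full local copy."""
    for i in random.sample(range(len(tags)), sample):
        if not hmac.compare_digest(tag(key, i, cloud_store[i]), tags[i]):
            return False
    return True

assert audit()
cloud_store[4] = os.urandom(BLOCK)   # simulate silent corruption in the cloud
# Any challenge whose sample includes block 4 will now fail.
```

Sampling trades detection probability for bandwidth: auditing all blocks always catches corruption, while small samples catch it with probability proportional to the fraction corrupted.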
G. Smorodin and O. Kolesnichenko, “Big Data as the Big Game Changer,” Application of Information and Communication Technologies (AICT), 2015 9th International Conference on, Rostov on Don, 2015, pp. 40-43. doi: 10.1109/ICAICT.2015.7338512
Abstract: Big Data is the phenomenon of the information era. Big Data is a new dimension to explore; in collecting Big Data we fix time. Big Data serves several functions, including shaping society, forming spatio-temporal structures, changing the world and its future, and integrating society with IT. The most important aspect is risk in cloud computing: to leverage risks, secure cloud services, and gain additional benefits, an integrated approach should be applied. It is important to separate the various kinds of "security" needs when considering cloud computing issues, and a security analyst should be included in the data science team. A data-driven economy is based on three points: open data, legislation for Big Data, and education. For students, practical training that engages them in the culture of Big Data analytics is very important. The EMC Academic Alliance Russia & CIS provides this opportunity through the establishment of ad hoc Big Data analytics teams among universities. The results of the first stage of the Big Data Analytics Multicenter Study, launched in 2015, are presented.
Keywords: Big Data; cloud computing; data analysis; security of data; Big Data analytics; Big Data-driven ideology; Big Data-driven world; IT technologies; big game changer; data-driven economy; education; information era; legislation; open data; risks; secure cloud services; security; spatio-temporal structures; Big data; Blogs; Force; Terrorism; Cloud computing; Data Analytics Multicenter Study; Federation Business Data Lake; Security Integrated Approach (ID#: 16-9809)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338512&isnumber=7338496
D. S. Terzi, R. Terzi and S. Sagiroglu, “A Survey on Security and Privacy Issues in Big Data,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 202-207. doi: 10.1109/ICITST.2015.7412089
Abstract: The rapid growth and spread of network services, mobile devices, and online users on the Internet have led to a remarkable increase in the amount of data, and almost every industry is trying to cope with it; the big data phenomenon has thus begun to gain importance. However, big data is not only very difficult to store and analyze with traditional applications, it also poses challenging privacy and security problems. For this reason, this paper discusses big data, its ecosystem, and the concerns around it, and presents a comparative view of big data privacy and security approaches in the literature in terms of infrastructure, application, and data. By grouping these applications, an overall perspective on security and privacy issues in big data is suggested.
Keywords: Big Data; data privacy; security of data; Internet; big data analysis; big data phenomenon; big data privacy; big data security; huge data; mobile devices; network services; online users; Big data; Cloud computing; Cryptography; Data privacy; Distributed databases; Monitoring; Hadoop security; anonymization; auditing; big data; cloud security; key management; monitoring (ID#: 16-9810)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412089&isnumber=7412034
Kavitha S, Yamini S and Raja Vadhana P, “An Evaluation on Big Data Generalization Using k-Anonymity Algorithm on Cloud,” Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, Coimbatore, 2015, pp. 1-5. doi: 10.1109/ISCO.2015.7282237
Abstract: Nowadays data security is a major issue in cloud computing, and it remains a problem in data publishing. Many people share data over the cloud for business requirements, and its use for data analysis makes privacy a big concern. To protect privacy in data publishing, an anonymization technique is enforced on the data: the data can be either generalized or suppressed using various algorithms. Top-Down Specialization (TDS) is the most widely used generalization algorithm for k-anonymity. In the cloud, privacy in data publishing is provided through this algorithm, but another, bigger problem is the scalability of the data: when the data shared for analysis on the cloud grows tremendously, the anonymization process becomes tedious. Big data techniques help here, in that large-scale data can be partitioned using the MapReduce framework on the cloud. In our approach the data is anonymized in two phases, a Map phase and a Reduce phase, using the Two-Phase Top-Down Specialization (Two-Phase TDS) algorithm, and the scalability and efficiency of Two-Phase TDS are experimentally evaluated.
Keywords: Big Data; cloud computing; data analysis; data privacy; Big Data generalization; Mapreduce framework; business requirements; data anonymization; data publishing; data scalability; data security; k-anonymity algorithm; large scale data; map phase; privacy protection; reduce phase; two phase TDS algorithm; two phase top down specialization; Algorithm design and analysis; Games; ISO Standards; Indexing; Privacy; Sugar; Data anonymization; Data privacy; Data publishing; Generalization; k-Anonymity (ID#: 16-9811)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282237&isnumber=7282219
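The generalization step at the heart of k-anonymity can be sketched in a few lines. Note that the toy below generalizes bottom-up on a single machine for brevity, whereas the paper above uses Top-Down Specialization over MapReduce; the age-bucketing and ZIP-truncation rules, and the sample records, are illustrative assumptions.

```python
from collections import Counter

def generalize(record, level):
    """Coarsen the quasi-identifiers: bucket the age, truncate the ZIP code."""
    age, zipcode = record
    bucket = 10 * (2 ** level)              # wider age buckets at higher levels
    lo = (age // bucket) * bucket
    return (f"{lo}-{lo + bucket - 1}", zipcode[: max(0, 5 - level)])

def is_k_anonymous(records, level, k):
    """Every generalized quasi-identifier tuple must appear at least k times."""
    counts = Counter(generalize(r, level) for r in records)
    return all(c >= k for c in counts.values())

def anonymize(records, k):
    """Raise the generalization level until k-anonymity holds."""
    level = 0
    while not is_k_anonymous(records, level, k):
        level += 1
    return [generalize(r, level) for r in records]

records = [(23, "44100"), (27, "44101"), (25, "44102"),
           (52, "44900"), (55, "44901"), (58, "44902")]
print(anonymize(records, k=3))
```

At level 0 every ZIP code is unique, so the loop widens the buckets until the six records collapse into two indistinguishable groups of three.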
S. Fugkeaw and H. Sato, “Privacy-Preserving Access Control Model for Big Data Cloud,” 2015 International Computer Science and Engineering Conference (ICSEC), Chiang Mai, 2015, pp. 1-6. doi: 10.1109/ICSEC.2015.7401416
Abstract: Due to the proliferation of advanced analytic applications built on a massive scale of data from several data sources, big data technology has emerged to shift the paradigm of data management. Big data management is usually taken into data outsourcing environment such as cloud computing. According to the outsourcing environment, security and privacy management becomes one of the critical issues for business decision. Typically, cryptographic-based access control is employed to support privacy-preserving authentication and authorization for data outsourcing scenario. In this paper, we propose a novel access control model combining Role-based Access Control (RBAC) model, symmetric encryption, and ciphertext attribute-based encryption (CP-ABE) to support fine-grained access control for big data outsourced in cloud storage systems. We also demonstrate the efficiency and performance of our proposed scheme through the implementation.
Keywords: Big Data; authorisation; cloud computing; cryptography; data privacy; message authentication; outsourcing; CP-ABE; RBAC model; advanced analytic applications; authorization; big data cloud; ciphertext attribute-based encryption; cloud storage systems; cryptographic-based access control; data management; data outsourcing environment; fine-grained access control; privacy-preserving authentication; role-based access control model; symmetric encryption; Access control; Big data; Cloud computing; Data models; Encryption; Access Control; Cloud Computing; RBAC (ID#: 16-9812)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7401416&isnumber=7401392
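The access-structure half of CP-ABE — deciding whether a user's attribute set satisfies a ciphertext policy — can be shown without any cryptography. The sketch below evaluates only the policy logic; real CP-ABE enforces the same logic cryptographically with pairing-based encryption, and the role-to-attribute mapping (standing in for the paper's RBAC layer) and the policy itself are invented examples.

```python
def satisfies(policy, attributes: set) -> bool:
    """Evaluate a CP-ABE-style access structure over a user's attribute set.

    A policy is either an attribute string, or a tuple
    ("AND", p1, p2, ...) / ("OR", p1, p2, ...) of sub-policies.
    """
    if isinstance(policy, str):
        return policy in attributes
    op, *children = policy
    if op == "AND":
        return all(satisfies(c, attributes) for c in children)
    if op == "OR":
        return any(satisfies(c, attributes) for c in children)
    raise ValueError(f"unknown operator: {op}")

# Role-derived attributes (the RBAC layer) feed into the ABE policy check.
role_attrs = {
    "analyst": {"dept:finance", "clearance:low"},
    "auditor": {"dept:finance", "clearance:high"},
}
policy = ("AND", "dept:finance", ("OR", "clearance:high", "owner"))

print(satisfies(policy, role_attrs["auditor"]))   # True
print(satisfies(policy, role_attrs["analyst"]))   # False
```

Combining the two models as the paper suggests amounts to deriving each role's attribute set once and then encrypting data under attribute policies rather than per-user keys.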
M. Xiao, M. Wang, X. Liu and J. Sun, “Efficient Distributed Access Control for Big Data in Clouds,” 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Hong Kong, 2015, pp. 202-207. doi: 10.1109/INFCOMW.2015.7179385
Abstract: The term big data refers to the massive amounts of digital information, which can be efficiently stored and processed on a cloud computing platform. However, security and privacy issues are magnified by high volume, variety, and velocity of big data. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a promising cryptographic primitive for the security of cloud storage system and can bring together data leakage prevention and fine-grained access control. The existing researches on applying CP-ABE to cloud storage system mainly focus on the efficiency of decryption and user revocation, and some special improvements have been done to alleviate the workloads of data owners and users, such as proxy re-encryption and decryption outsourcing. However, the complexity of user revocation is still linearly correlated with the number of ciphertexts and users in the system. Therefore, in a big data environment with mass data and users, user revocation is still a challenge. In this paper, we propose a distributed, scalable and fine-grained access control scheme with efficient decryption and user revocation for the big data in clouds. We also present a new multi-authority CP-ABE scheme for supporting the efficient decryption outsourcing, user revocation and dynamically joining and exiting of attribute authorities. In our scheme, user revocation is only related to revoked user and can achieve both forward security and backward security. The system analysis shows that our scheme is efficient and provably secure in the generic group model.
Keywords: Big Data; authorisation; cloud computing; cryptography; backward security; ciphertext-policy attribute-based encryption; cloud storage system; data leakage prevention; decryption outsourcing; distributed access control; fine-grained access control; forward security; proxy re-encryption; Cryptography; Outsourcing; Servers; CP-ABE; access control; big data; decryption out-sourcing; user revocation (ID#: 16-9813)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179385&isnumber=7179273
H. Jean-Baptiste, M. Qiu, K. Gai and L. Tao, “Meta Meta-Analytics for Risk Forecast Using Big Data Meta-Regression in Financial Industry,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 272-277. doi: 10.1109/CSCloud.2015.69
Abstract: The growing trend of e-banking has driven implementations of big data in the financial industry. Data analytics is considered one of the most critical aspects of current economic development and is broadly accepted in various financial domains, such as risk forecasting and risk management. However, obtaining an accurate risk prediction is still a challenging issue for current financial service institutions, and hazards can arise from various directions. This paper proposes an approach using meta meta-analytics for risk forecasting in big data. The proposed model is the Meta Meta-Analytics Risk Forecast Model (MMA-RFM), built on a crucial algorithm, Regression with Meta Meta-Analytics (R-MMA). The proposed scheme has been examined by experimental evaluation, in which it shows optimized performance.
Keywords: Big Data; banking; data analysis; financial data processing; meta data; regression analysis; risk management; MMA-RFM; R-MMA; big data metaregression; data analytics; e-banking; financial industry; financial service institutions; meta meta-analytics risk forecast model; regression with meta meta-analytics algorithm; risk management; Analytical models; Big data; Mathematical model; Prediction algorithms; Predictive models; Reliability; Risk management; Meta meta-analytics; big data; metaregression; risk forecast (ID#: 16-9814)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371493&isnumber=7371418
X. Feng, B. Onafeso and E. Liu, “Investigating Big Data Healthcare Security Issues with Raspberry Pi,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 2329-2334. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.344
Abstract: Big data on cloud applications is growing rapidly, and when a cloud is attacked, one of the solutions is to gather digital forensic evidence. This paper proposes data collection via Raspberry Pi (RP) devices, assuming a healthcare setting [18]. The significance of this work is that it could be expanded into a digital device array that takes big data security issues into account, with many potential impacts in the health area. The field of digital forensic science has been tagged as a reactive science by some, who believe that research and study in the field often arise from the need to respond to events that create the need for investigation; this work was carried out as proactive research that adds knowledge to the field. The Raspberry Pi is a cost-effective, pocket-sized computer that has gained global recognition since its development in 2008, with widespread use of the device for different computing purposes, so it is safe to assume that the device will be a topic of forensic investigation in the near future. This work has used a systematic approach to study the structure and operation of the device and has established the security issues that its widespread usage can pose, for example in health or smart-city contexts, as well as the evidential information that will be useful should the device become a subject of digital forensic investigation in the foreseeable future.
Keywords: Big Data; cloud computing; digital forensics; health care; Raspberry Pi devices; big data healthcare security issues; cloud application; data collection; digital device array; digital forensics evidence; digital forensics science; healthcare situation; smart city; Big data; Computers; DNA; Digital forensics; Law enforcement; Security; Big Data Forensics; Healthcare; IoTS; Raspberry-pi Application (ID#: 16-9815)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363390&isnumber=7362962
G. C. Fox, J. Qiu, S. Kamburugamuve, S. Jha and A. Luckow, “HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack,” Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, Shenzhen, 2015, pp. 1057-1066. doi: 10.1109/CCGrid.2015.122
Abstract: We review the High Performance Computing Enhanced Apache Big Data Stack HPC-ABDS and summarize the capabilities in 21 identified architecture layers. These cover Message and Data Protocols, Distributed Coordination, Security & Privacy, Monitoring, Infrastructure Management, DevOps, Interoperability, File Systems, Cluster & Resource management, Data Transport, File management, NoSQL, SQL (NewSQL), Extraction Tools, Object-relational mapping, In-memory caching and databases, Inter-process Communication, Batch Programming model and Runtime, Stream Processing, High-level Programming, Application Hosting and PaaS, Libraries and Applications, Workflow and Orchestration. We summarize status of these layers focusing on issues of importance for data analytics. We highlight areas where HPC and ABDS have good opportunities for integration.
Keywords: Big Data; SQL; cache storage; data privacy; monitoring; open systems; parallel processing; security of data; Apache Big Data stack; DevOps; HPC-ABDS; NewSQL; NoSQL; batch programming model; data transport; distributed coordination; file management; file systems; high performance computing; in-memory caching; infrastructure management; interoperability; message and data protocols; object-relational mapping; privacy; resource management; security; stream processing; Big data; Cloud computing; Distributed databases; Google; Programming; Security; Apache Big Data Stack; HPC (ID#: 16-9816)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152592&isnumber=7152455
D. Zhang, B. H. Yan, Z. Feng, K. Y. Qi and Z. Y. Su, “Inverse Clustering-Based Job Placement Method for Efficient Big Data Analysis,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1796-1799. doi: 10.1109/HPCC-CSS-ICESS.2015.124
Abstract: To efficiently exploit the inherent value of big data, large-scale data centers with multiple compute nodes are deployed. In this scenario, the job placement method becomes the key issue: it must match compute nodes with data analysis jobs, balance the workloads among the nodes, and meet the resource requirements of the various jobs. In this work, an inverse clustering-based job placement method is proposed. Jobs are represented as feature vectors of resource utilizations and priorities. Then, contrary to the regular clustering procedure, the proposed inverse clustering method organizes the jobs with the largest differences between feature vectors into the same groups, and jobs in the same groups are placed onto the same nodes. Consequently, jobs assigned to the same node utilize different types of resources and carry different priorities. In our simulation experiments, global load and priority balance is achieved with the proposed inverse clustering method.
Keywords: Big Data; computer centres; data analysis; pattern clustering; resource allocation; Big Data analysis; compute nodes; data center; feature vector; global load balance; inverse clustering-based job placement method; priority balance; resource utilization; workload balancing; Adaptation models; Big data; Cloud computing; Computers; Dynamic scheduling; Optimization; Resource management; big data; job placement; resource scheduling (ID#: 16-9817)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336432&isnumber=7336120
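The placement intent described in the abstract above — grouping jobs with the most different feature vectors onto the same node — can be approximated with a simple heuristic: bucket jobs by their dominant resource (so each bucket holds *similar* jobs), then deal each bucket's members across nodes so every node receives a mix of profiles. This is a hedged sketch of the idea, not the paper's algorithm; the job names and resource vectors are invented.

```python
def inverse_cluster(jobs, n_nodes):
    """Place dissimilar jobs together.

    jobs: {job_id: (cpu, mem, io)} resource-utilization vectors.
    Returns a list of per-node job-id lists.
    """
    # Toy similarity clustering: bucket jobs by their dominant resource.
    clusters = {}
    for job_id, vec in jobs.items():
        dominant = max(range(len(vec)), key=lambda i: vec[i])
        clusters.setdefault(dominant, []).append(job_id)
    # Deal each similarity cluster across the nodes round-robin, so that
    # jobs sharing a dominant resource are spread apart and each node
    # ends up holding jobs with contrasting resource profiles.
    nodes = [[] for _ in range(n_nodes)]
    i = 0
    for members in clusters.values():
        for job_id in members:
            nodes[i % n_nodes].append(job_id)
            i += 1
    return nodes

jobs = {
    "j1": (0.9, 0.1, 0.1),   # CPU-bound
    "j2": (0.8, 0.2, 0.1),   # CPU-bound
    "j3": (0.1, 0.9, 0.2),   # memory-bound
    "j4": (0.2, 0.8, 0.1),   # memory-bound
}
print(inverse_cluster(jobs, 2))   # each node gets one CPU- and one memory-bound job
```

The paper's method presumably uses a richer dissimilarity measure over full feature vectors (including priorities); dominant-resource bucketing is only the simplest stand-in.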
L. Voronova and N. Kazantsev, “The Ethics of Big Data: Analytical Survey,” 2015 IEEE 17th Conference on Business Informatics, Lisbon, 2015, pp. 57-63. doi: 10.1109/CBI.2015.27
Abstract: The number of recent publications on the ethical challenges of implementing Big Data signals growing interest in all aspects of this issue. The proposed study specifically aims at analyzing the ethical issues connected with Big Data.
Keywords: Big Data; ethical aspects; ethical issues; Big data; Business; Cloud computing; Data protection; Ethics; Security; Cloud Computing; Ethical Issues; Ethics; Safety (ID#: 16-9818)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7264769&isnumber=7264389
H. Liang and K. Gai, “Internet-Based Anti-Counterfeiting Pattern with Using Big Data in China,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1387-1392. doi: 10.1109/HPCC-CSS-ICESS.2015.137
Abstract: Cloud-based trading platforms have become a broadly accepted business approach in China, and flexible, scalable online services have brought enormous benefits to e-commerce. However, many cloud-based e-commerce providers face a serious challenge from counterfeits, which are already harming many Chinese e-commerce companies. This paper addresses anti-counterfeiting issues and proposes a novel mechanism for proactively preventing counterfeits in the Chinese context. The proposed paradigm also considers cost-benefit and profit maximization. The model was evaluated by case study research examining various use cases; four use cases are presented in this paper, and their outcomes prove the efficiency of the proposed model.
Keywords: Big Data; business data processing; cloud computing; cost-benefit analysis; electronic commerce; profitability; Chinese e-commerce companies; Internet-Based anticounterfeiting pattern; business approach; cloud-based e-commerce providers; cloud-based trading platforms; cost-benefit analysis; flexible-scalable online services; proactive counterfeit prevention; profit-maximization; Authentication; Big data; Business; Cloud computing; Counterfeiting; Economics; Mathematical model; Anti-counterfeiting pattern; big data; cost-benefit principle; profit-maximization model (ID#: 16-9819)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336362&isnumber=7336120
F. Rashid, A. Miri and I. Woungang, “Proof of Storage for Video Deduplication in the Cloud,” 2015 IEEE International Congress on Big Data, New York, NY, 2015, pp. 499-505. doi: 10.1109/BigDataCongress.2015.79
Abstract: With the advent of cloud computing and its technologies, including data deduplication, users are offered more freedom in terms of cloud storage, processing power and efficiency, and data accessibility. Digital data has seen exceptional growth due to the common use of the Internet and digital devices, giving rise to the Big Data problem worldwide. These huge volumes of data need practical platforms for storage, processing and availability, and cloud technology offers the potential to fulfil all these requirements. Data deduplication is a strategy used by cloud storage providers (CSPs) to eliminate duplicate data and keep only a single unique copy for storage-space savings, easing Big Data issues. But these benefits come with the data security and privacy issues associated with cloud technology, since the data owner loses physical control of the data once it is uploaded to cloud storage and the CSP gains complete ownership of it. In this paper, assuming that the CSP is semi-honest (i.e., honest but curious and not to be completely trusted), a proof of retrievability (POR) and a proof of ownership (POW) are proposed for video deduplication in cloud storage environments. The POW protocol is meant to be used by the CSP to authenticate the true owner of a video before releasing it, whereas the POR protocol allows users to check that their videos stored in the cloud are secured against any malicious user or the semi-honest CSP. These schemes are proposed as a complement to our earlier scheme for securing video deduplication in cloud storage through the H.264 compression algorithm. Experimental results are provided, showing the effectiveness of the proposed POR and POW protocols.
Keywords: cloud computing; data compression; data privacy; information retrieval; security of data; video coding; Big Data; CSP; H.264 compression algorithm; Internet; POR; POW; cloud storage providers; data accessibility; data security; digital devices; proof of ownership; proof of retrievability; video deduplication; Big data; Cloud computing; Compression algorithms; Encoding; Encryption; Protocols; H.264 compression algorithm; Merkle hash tree; error correcting codes; proof of ownership; proof of retrieval; video compression (ID#: 16-9820)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207263&isnumber=7207183
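The keywords of the entry above name a Merkle hash tree as a building block of the POR protocol. As an illustration only, not the authors' actual scheme, a minimal Merkle root computation over file blocks might look like this:

```python
import hashlib

def merkle_root(blocks):
    """Compute the Merkle tree root hash over a list of data blocks."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate last node if odd count
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

In a POR setting, the client keeps only the root and challenges the server to produce blocks plus sibling hashes that recompute it.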
R. Zhai, K. Zhang and M. Liu, “Static Group Privacy Protection Mechanism Based on Cloud Model,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 970-974. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.146
Abstract: In recent years, a variety of privacy incidents have emerged and brought huge losses. With the in-depth application of data mining, big data, cloud computing and other technologies, the privacy protection issue becomes more and more challenging. Therefore, we propose a privacy protection mechanism for sensitive group information. A reasonable counterfeit data set is constructed, based on the cloud model, for the sensitive features of group data in order to disguise the real sensitive group features. The mechanism takes the data dependencies between multiple attributes into consideration and reduces the amount of fake data added, improving the availability of the data. The method we propose is proved to be effective through analysis and experiments.
Keywords: cloud computing; data mining; data privacy; big data; cloud model; group data; privacy events; real sensitive group features; sensitive group information; static group privacy protection mechanism; Big data; Cloud computing; Data models; Data privacy; Privacy; Security; Sociology; group privacy; sensitive features (ID#: 16-9821)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363187&isnumber=7362962
F. J. N. d. Santos and S. G. Villalonga, “Exploiting Local Clouds in the Internet of Everything Environment,” 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, Turku, 2015, pp. 296-300. doi: 10.1109/PDP.2015.117
Abstract: The Internet of Everything is opening new opportunities and challenges which will be faced over the following years. Huge amounts of data will be generated and consumed, so Internet of Things frameworks will need to provide new capabilities related to Big Data analysis, scalability and performance. We believe the formation of local clouds of devices, close to the location where data is created and consumed, is a good solution to overcome these issues, which may have an impact on security as well. The combination of local and remote resources, together with appropriate allocation algorithms for their management, will provide the means to enable the new required features, going beyond the current state of the art while still leaving enough capacity for evolution in future scenarios.
Keywords: Internet of Things; cloud computing; data handling; Internet of Things frameworks; Internet of everything environment; big data analysis; local clouds; remote resources; security; Big data; Cloud computing; Clouds; Logic gates; Mobile communication; Resource management; Virtualization; Allocation Algorithms; Big Data; Internet of Everything; Local Clouds; Mobile Computing (ID#: 16-9822)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092735&isnumber=7092002
C. Huang and R. Lu, “EFPA: Efficient and Flexible Privacy-Preserving Mining of Association Rule in Cloud,” 2015 IEEE/CIC International Conference on Communications in China (ICCC), Shenzhen, China, 2015, pp. 1-6. doi: 10.1109/ICCChina.2015.7448753
Abstract: With the explosive growth of data and the advance of cloud computing, data mining technology has attracted considerable interest recently. However, the flourishing of data mining technology still faces many challenges in the big data era, and one of the main security issues is preventing privacy disclosure when running data mining in the cloud. In this paper, we propose an efficient and flexible protocol, called EFPA, for privacy-preserving association rule mining in the cloud. With the protocol, many participants can provide their data and mine the association rules in the cloud together without privacy leakage. Detailed security analysis shows that the proposed EFPA protocol achieves privacy-preserving mining of association rules in the cloud. In addition, performance evaluations via extensive simulations also demonstrate EFPA's effectiveness in terms of low computational costs.
Keywords: Cloud computing; Computational modeling; Cryptography; Data privacy; Protocols; Association Rule Mining; Big Data; Cloud; Privacy-preserving (ID#: 16-9823)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7448753&isnumber=7448573
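The EFPA protocol itself is not specified in the abstract above. As background only, the quantities the participants jointly compute, support and confidence of an association rule, can be sketched (without any privacy protection) as:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Conditional frequency of `consequent` given `antecedent`."""
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))
```

A privacy-preserving protocol such as EFPA would compute these statistics without any party revealing its raw transactions.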
G. Xiong, T. Ji, X. Zhang, F. Zhu and W. Liu, “Cloud Operating System for Industrial Application,” Service Operations and Logistics, and Informatics (SOLI), 2015 IEEE International Conference on, Hammamet, 2015, pp. 43-48. doi: 10.1109/SOLI.2015.7367408
Abstract: With the rapid development of the latest information technology, it is inevitable that IoT (Internet of Things), cloud computing and big data will be applied in the industrial fields of national key sectors including transportation, electricity, metallurgy, petroleum, chemicals, manufacturing, the military and so on. Wireless sensor networks, the industrial Internet, embedded systems, software for industrial control and management, and smart terminals are gradually being introduced into industrial systems, which will make the previously relatively closed industrial systems more open and intelligent, and contribute to the coming fourth industrial revolution. In this paper, the authors mainly discuss issues concerning a cloud Operating System (OS) for industrial application, including an introduction to cloud computing and cloud operating systems, an analysis of the current status of cloud OS, and the transformation trend towards Industry 4.0. Then, we independently design the main content of this cloud OS, and its application prospects and expected results are given. The study provides theoretical guidance and practical challenges for the development of a cloud OS oriented to the industrial area.
Keywords: Big Data; Internet of Things; cloud computing; embedded systems; industrial control; information technology; operating systems (computers); wireless sensor networks; Internet of Things; IoT; big data; cloud computing; cloud operating system; embedded system; industrial Internet; industrial application; industrial control software; industrial systems; information technology; national key sectors; wireless sensor network; Cloud computing; Hardware; Industries; Manufacturing; Operating systems; Security; Servers; Big data; Cloud Computing; Cloud Operating System; G-Cloud OS for Industrial Application; Industrial 4.0; Internet of Things (ID#: 16-9824)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7367408&isnumber=7367396
K. Gai, M. Qiu, H. Zhao and W. Dai, “Anti-Counterfeit Scheme Using Monte Carlo Simulation for E-commerce in Cloud Systems,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 74-79. doi: 10.1109/CSCloud.2015.75
Abstract: E-commerce using cloud-based trading platforms has become a popular approach with the growth of global development in recent years. However, the existence of counterfeits on these platforms has threatened the benefits of all stakeholders. This paper proposes a novel scheme named the Anti-Counterfeit Deterministic Prediction Model (ADPM), which is designed for detecting counterfeits by using a Monte Carlo Model (MCM) to predict potential malicious information in e-commerce. We consider the discrimination of fake merchandise a crucial issue in preventing counterfeits on online business platforms. The proposed mechanism provides a machine-learning paradigm using a novel algorithm derived from the MCM. The main algorithm in our proposed mechanism is the Monte Carlo Model-based Prediction Analysis Algorithm (M-PAA). Our experiments show that the proposed approach can provide predictions of insecure information in e-commerce.
Keywords: Monte Carlo methods; cloud computing; electronic commerce; financial data processing; learning (artificial intelligence); security of data; ADPM; Monte Carlo model-based prediction analysis algorithm; Monte Carlo simulation; anti-counterfeit deterministic prediction model; anti-counterfeit scheme; cloud system; cloud-based trading platform; e-commerce; machine learning; malicious information prediction; Adaptation models; Algorithm design and analysis; Cloud computing; Mathematical model; Prediction algorithms; Predictive models; Monte Carlo model; anti-counterfeit model; big data prediction; cloud systems; e-commerce (ID#: 16-9825)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371462&isnumber=7371418
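The M-PAA algorithm above is not detailed in the abstract. As a generic illustration of the Monte Carlo style of prediction it builds on, the following sketch estimates the probability that a listing trips at least `threshold` suspicious-feature indicators (the feature probabilities and threshold are hypothetical, not from the paper):

```python
import random

def mc_fraud_probability(feature_probs, threshold, trials=100_000, seed=42):
    """Monte Carlo estimate of P(number of triggered indicators >= threshold),
    where indicator i fires independently with probability feature_probs[i]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        score = sum(rng.random() < p for p in feature_probs)
        if score >= threshold:
            hits += 1
    return hits / trials
```

With two independent 0.5-probability indicators, the estimate for both firing converges to 0.25.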
I. Butun, B. Kantarci and M. Erol-Kantarci, “Anomaly Detection and Privacy Preservation in Cloud-Centric Internet of Things,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 2610-2615. doi: 10.1109/ICCW.2015.7247572
Abstract: The Internet of Things (IoT) concept provides a number of opportunities to improve our daily lives while also creating a potential risk of increasing the vulnerability of personal information to security and privacy breaches. Data collected from the IoT is usually offloaded to the cloud, which may further leave data prone to a variety of attacks if security and privacy issues are not handled properly. Anomaly detection has been one of the widely adopted security measures in wired and wireless networks. However, it is not straightforward to apply most anomaly detection techniques to the IoT and the cloud. One of the main challenges is deriving outlier features from the vast volume of data pumped from the IoT to the cloud. Other challenges include the large number of sources generating data, the heterogeneous connectivity and traffic patterns of IoT devices, and cloud services being offered at geographically remote places, causing IoT data to be stored in different countries with different legislation. This paper, for the first time, presents the challenges and opportunities in anomaly detection for the IoT and the cloud. It first introduces the prominent features and application fields of the IoT and the cloud, then discusses the security and privacy risks to personal information, and finally focuses on solutions from an anomaly detection perspective.
Keywords: Internet of Things; Web services; cloud computing; data privacy; security of data; IoT concept; IoT devices; anomaly detection techniques; cloud services; cloud-centric Internet of Things; personal information; privacy breach; privacy preservation; privacy risks; security breach; security risks; wired networks; wireless networks; Big data; Cloud computing; Internet of things; Privacy; Security; Sensors; Wireless sensor networks (ID#: 16-9826)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247572&isnumber=7247062
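The survey above does not prescribe a specific detector, but the simplest member of the family it discusses is statistical outlier detection over sensor readings. A toy z-score sketch (assumptions ours, not the authors'):

```python
from statistics import mean, stdev

def zscore_anomalies(readings, k=3.0):
    """Flag readings more than k sample standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [x for x in readings if abs(x - mu) > k * sigma]
```

A real IoT deployment would compute the statistics over a sliding window per sensor rather than over the whole stream.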
M. Bahrami, “Cloud Computing for Emerging Mobile Cloud Apps,” Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2015 3rd IEEE International Conference on, San Francisco, CA, 2015, pp. 4-5. doi: 10.1109/MobileCloud.2015.40
Abstract: The tutorial will begin with an explanation of the concepts behind cloud computing systems, cloud software architecture, the need for mobile cloud computing as an aspect of the app industry to deal with new mobile app design, network apps, app designing tools, and the motivation for migrating apps to cloud computing systems. The tutorial will review facts, goals and common architectures of mobile cloud computing systems, as well as introduce general mobile cloud services for app developers and marketers. This tutorial will highlight some of the major challenges and costs, and the role of mobile cloud computing architecture in the field of app design, as well as how the app-design industry has an opportunity to migrate to cloud computing systems with low investment. The tutorial will review privacy and security issues. It will describe major mobile cloud vendor services to illustrate how mobile cloud vendors can improve mobile app businesses. We will consider major cloud vendors, such as Microsoft Windows Azure, Amazon AWS and Google Cloud Platform. Finally, the tutorial will survey some of the cutting edge practices in the field, and present some opportunities for future development.
Keywords: cloud computing; data privacy; mobile computing; software architecture; Amazon AWS; Google Cloud Platform; Microsoft Windows Azure; application designing tools; application migration; application-design industry; cloud software architecture; mobile application business improvement; mobile cloud application design; mobile cloud computing; mobile cloud computing architecture; mobile cloud services; mobile cloud vendor services; network apps; privacy issues; security issues; Big data; Cloud computing; Computer architecture; Conferences; Industries; Mobile communication; Tutorials; Mobile App Design; Mobile Cloud Computing; Cloud Architecture; Mobile Security; Mobile Privacy (ID#: 16-9827)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130863&isnumber=7130853
Ravindra Babu Bellam, M. P. Coyle, P. Krishnan and E. G. Rajan, “Issues While Migrating Medical Imaging Services on Cloud Based Infrastructure,” Next Generation Computing Technologies (NGCT), 2015 1st International Conference on, Dehradun, 2015, pp. 109-114. doi: 10.1109/NGCT.2015.7375093
Abstract: In major medical health-care organizations, medical imaging plays an important role in understanding a patient's health condition. Usually, in a traditional IT health care environment, medical imaging involves very complex and large amounts of medical images (X-rays, CT/MRI scans) to be preserved, analyzed, and transferred. This Medical image data base (MIDB) management requires significant technology investment and time. The cloud is the ultimate solution to minimize these costs and handle them more efficiently with an acceptable level of security risk. In this paper, we suggest two categories of issues related to the medical image cloud, which give the right directions to enterprise IT and cloud professionals (such as IBM, SIEMENS, AMAZON, GOGRID) and medical actors (such as health professionals, hospitals, patients) to modernize computing resources and set open conventions that support cooperation and collaborative workflows for medical image cloud sharing. Additionally, these issues are very useful in designing and developing a best-fit cloud environment for the medical image cloud that allows medical actors to retrieve and review medical images at all times from any location in the globe. This significantly minimizes technology costs and leads to fast and reliable patient health care management. Finally, these issues also indicate how the healthcare industry can take maximum advantage of cloud computing to thrive.
Keywords: biomedical imaging; cloud computing; health care; medical computing; visual databases; MIDB management; cloud based infrastructure; medical image cloud sharing; medical image database; medical imaging services; patient health care management; Big data; Cloud computing; Computational modeling; Medical diagnostic imaging; Medical services; Organizations; Medical image data base; Medical imaging; healthcare; medical actors; the medical image cloud (ID#: 16-9828)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375093&isnumber=7375067
S. H. Kim and I. Y. Lee, “Data Block Management Scheme Based on Secret Sharing for HDFS,” 2015 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA), Krakow, 2015, pp. 51-56. doi: 10.1109/BWCCA.2015.70
Abstract: In the cloud computing environment, data are encrypted to be stored in many distributed servers. Global Internet service providers such as Google and Yahoo recognized the importance of an Internet service platform and have used low-priced commercial-node-based and large-scale cluster-based cloud computing platform technologies through R&D. As various data services have become available in the distributed computing environment, the distributed management of big data has become a major issue. In the various uses of big data, security vulnerabilities and privacy invasion may occur due to malicious attackers or inner users. In particular, various types of security vulnerability occur in the block access token, which is used for permission control of data blocks in Hadoop. To address these vulnerabilities, a secret-sharing-based block access token management technique is suggested in this paper.
Keywords: Big Data; Internet; cloud computing; cryptography; network servers; parallel processing; Google; HDFS; Hadoop; Internet service platform; R&D; Yahoo; cloud computing environment; data block management scheme; data encryption; data services; distributed big data management; distributed computing environment; distributed servers; global Internet service providers; large-scale cluster-based cloud computing platform technologies; low-priced commercial-node-based cloud computing platform technologies; malicious attackers; permission control; privacy invasion; secret sharing; secret-sharing-based block access token management technique; security vulnerability; Authentication; Cloud computing; Cryptography; Distributed databases; Proposals; Servers; Block Access Token; Cloud computing (ID#: 16-9829)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424800&isnumber=7424228
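The entry above applies secret sharing to HDFS block access tokens; the paper's exact construction is not given in the abstract. As a self-contained sketch of the underlying primitive, here is a textbook Shamir (k, n) threshold scheme over a prime field, with the token treated as an integer:

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for a 60-bit token

def split_secret(secret, n, k, seed=None):
    """Split `secret` into n shares; any k of them reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Fewer than k shares reveal nothing about the token, which is the property the proposed management technique relies on.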
Kanmani P and Anusha S, “A Novel Integrity Scheme for Secure Cloud Storage,” Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, Coimbatore, 2015, pp. 1-3. doi: 10.1109/ISCO.2015.7282357
Abstract: Cloud computing is a promising technology designed to provide computing services over the internet. It is a model which enables the user to access computing resources with minimal management effort and no service provider interaction. Cloud computing provides rapid provisioning of resources to the user by pooling the resources together, and the user can access the resources on demand. The main threat to cloud computing is data security and integrity, since the public cloud is connected to the internet and many users can access the resources at the same time. This paper surveys various security issues in cloud computing. It also aims at developing a data integrity proof for the data stored in the cloud server. Cloud storage moves the user's data to big data centers, which are remotely located and over which the user does not have any control. It may not be fully trustworthy because the client does not keep a copy of all stored data. This unique feature of the cloud poses many new security challenges which need to be clearly understood and resolved.
Keywords: Big Data; cloud computing; data integrity; security of data; storage management; user interfaces; Internet; big data centers; data security; novel integrity scheme; secure cloud storage; user access; Encryption; Postal services; Protocols; Cloud computing; integrity; security (ID#: 16-9830)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282357&isnumber=7282219
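The integrity-proof idea above can be illustrated, independently of the paper's actual scheme, with the standard keyed-hash building block: the client keeps a MAC tag before uploading and later verifies retrieved data against it.

```python
import hashlib
import hmac

def tag(data, key):
    """Integrity tag the client computes before uploading `data`."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data_from_cloud, key, stored_tag):
    """Client-side check that the retrieved data is untampered."""
    return hmac.compare_digest(tag(data_from_cloud, key), stored_tag)
```

A full proof-of-storage scheme goes further, letting the client verify without downloading the whole file, but the tag-then-check pattern is the same.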
F. Rashid, A. Miri and I. Woungang, “A Secure Video Deduplication Scheme in Cloud Storage Environments Using H.264 Compression,” Big Data Computing Service and Applications (BigDataService), 2015 IEEE First International Conference on, Redwood City, CA, 2015, pp. 138-146. doi: 10.1109/BigDataService.2015.15
Abstract: Due to the rapidly increasing amounts of digital data produced worldwide, multi-user cloud storage systems are becoming very popular and Internet users are approaching cloud storage providers (CSPs) to upload their data to the clouds. Among these data, digital videos are fairly huge in terms of storage cost and size, and techniques that can help reduce the cloud storage cost and size are always desired. This paper argues that data deduplication can ease the problem of Big Data storage by identifying and removing duplicate copies from cloud storage. Although deduplication maximizes the storage space and minimizes the storage costs, it comes with serious issues of data privacy and security. Though users desire to save some cost by allowing the CSP to deduplicate their data, they do not want the CSP to weaken the privacy of their data. In this paper, a scheme is proposed that achieves secure video deduplication in cloud storage environments. Its design consists of embedding a partial convergent encryption, along with a unique signature generation scheme, into an H.264 video compression scheme. The partial convergent encryption scheme is meant to ensure that the proposed scheme is secure against a semi-honest CSP, while the unique signature generation scheme is meant to enable a classification of the encrypted compressed video data such that deduplication can be efficiently performed on it. Experimental results and security analysis are provided to validate the stated goals.
Keywords: Big Data; cloud computing; cryptography; data compression; digital signatures; video coding; Big Data storage; CSP; H.264 video compression; cloud storage provider; data deduplication; partial convergent encryption scheme; signature generation scheme; video deduplication scheme security; Cloud computing; Compression algorithms; Encryption; Streaming media; Transforms; BigData security; H.264 video compression; cloud storage provider; group of pictures (GOP); partial convergent encryption; signature generation; video deduplication (ID#: 16-9831)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184874&isnumber=7184847
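Convergent encryption, named in the abstract above, derives the key from the content itself, so identical plaintexts yield identical ciphertexts, which is exactly what lets the CSP deduplicate encrypted data. A toy sketch (the SHA-256-counter keystream is ours for illustration and is not a secure cipher, nor the paper's construction):

```python
import hashlib

def _keystream(key, length):
    """Toy keystream: SHA-256 of key || counter, repeated."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(plaintext):
    """Key = hash(content), so equal inputs give equal ciphertexts."""
    key = hashlib.sha256(plaintext).digest()
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    return key, ct

def convergent_decrypt(key, ciphertext):
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))
```

Two users who upload the same video chunk produce the same ciphertext, so the CSP stores it once; each user keeps the content-derived key to decrypt.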
C. Liu, B. Petroski, G. Cordone, G. Torres and S. Schuckers, “Iris Matching Algorithm on Many-Core Platforms,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225264
Abstract: Biometric matching has been widely adopted as a secure means of identification and verification. However, the computational demand associated with running this kind of algorithm on a big data set poses a great challenge for the underlying hardware platform. Even though modern processors are equipped with more cores and memory capacity, the software algorithm still requires careful design in order to utilize the hardware resources effectively. This research addresses this issue by investigating the biometric application on many-core platforms. A biometrics algorithm, specifically Daugman's iris matching algorithm, is used to benchmark and compare the performance of several many-core platforms. The results show the ability of the iris matching application to efficiently scale and fully exploit the capabilities offered by many-core platforms, and provide insights into how to migrate biometrics computation onto high-performance many-core architectures.
Keywords: Big Data; image matching; iris recognition; multiprocessing systems; security of data; Daugman iris matching algorithm; big data set; biometrics matching; high-performance many-core architectures; Coprocessors; Graphics processing units; Hardware; Instruction sets; Iris; Iris recognition; Kernel; Daugman’s algorithm; GPU; Iris matching; Many-core; Single-Chip Cloud Computer; Xeon Phi (ID#: 16-9832)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225264&isnumber=7190491
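The core operation of Daugman-style iris matching is a masked fractional Hamming distance between two binary iris codes, which is trivially parallel and is what makes the workload a good many-core benchmark. A bit-level sketch (representing codes and masks as Python integers is our simplification):

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bit positions both masks mark as valid."""
    valid = mask_a & mask_b
    if valid == 0:
        return 1.0                      # no comparable bits
    diff = (code_a ^ code_b) & valid
    return bin(diff).count("1") / bin(valid).count("1")
```

A distance near 0 indicates the same iris; distances around 0.5 are what two unrelated irises produce.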
Y. Lu, Q. Xie and L. Wang, “A Novel User Model Based on Searchable Encryption Scheme,” 2015 Third International Conference on Advanced Cloud and Big Data, Yangzhou, 2015, pp. 247-253. doi: 10.1109/CBD.2015.47
Abstract: With the development of cloud computing and cloud storage technology, privacy disclosure of queries and destruction of the integrity of outsourced data are two increasingly serious security issues in cloud services. In order to resolve these two problems, we propose a novel user model based on a searchable encryption scheme. The model introduces a trusted third authority (TA), which is independent of the honest-but-curious cloud server. It allows multi-keyword search and simultaneously reports data integrity verification results on the ciphertext and outsourced data to users. Finally, we demonstrate that our new scheme enables the user to perform searchable encryption and integrity verification on the outsourced data while reducing the user's computation overhead.
Keywords: cloud computing; cryptography; query processing; TA; cloud computing development; cloud server; cloud services; cloud storage technology; data integrity destruction; novel user model; outsourced data; searchable encryption scheme; security issues; trusted third authority; Cloud computing; Computational modeling; Data models; Encryption; Indexes; Servers; data integrity; searchable encryption; user management (ID#: 16-9833)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7435481&isnumber=7435433
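The searchable-encryption idea behind the entry above can be illustrated with the simplest symmetric variant: the client indexes documents under keyed keyword tokens, so the server can match a search token without ever seeing the keyword. This sketch is a generic primitive, not the paper's multi-keyword scheme:

```python
import hashlib
import hmac

def keyword_token(key, keyword):
    """Deterministic search token for a keyword under a secret key."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(key, doc_id, keywords):
    """Client-side: map each keyword's token to the document id."""
    return {keyword_token(key, w): doc_id for w in keywords}

def search(index, key, keyword):
    """Server-side lookup by token; the keyword itself never leaves the client."""
    return index.get(keyword_token(key, keyword))
```

Deterministic tokens leak search patterns; the paper's scheme adds integrity verification and multi-keyword queries on top of this basic idea.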
Sheng-Wei Huang, Ce-Kuen Shieh, Che-Ching Liao, Chui-Ming Chiu, Ming-Fong Tsai and Lien-Wu Chen, “A Cloud-Based Efficient On-Line Analytical Processing System with Inverted Data Model,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 341-345. doi: (not provided)
Abstract: On-line analytical processing (OLAP) provides analysis of multi-dimensional data stored in a database and has achieved great success in many applications such as sales, marketing, and financial data analysis. OLAP operations are a dominant part of data analysis, especially when addressing large amounts of data. With the emergence of the MapReduce paradigm and cloud technology, OLAP operations can be processed on big data that resides in scalable, distributed storage. However, current MapReduce implementations of OLAP operation processing have a major performance drawback caused by an improper processing procedure. This is crucial when the dimension or dependent attributes are large, which is a common case for most data warehouses nowadays. To solve this issue, this paper proposes a methodology to accelerate the performance of OLAP operation processing on big data. We have conducted experiments on the basic algebra of OLAP operations with different data sizes to demonstrate the effectiveness of our system.
Keywords: cloud computing; data mining; data warehouses; MapReduce paradigm; OLAP operation processing; cloud based efficient online analytical processing system; cloud technology; data analysis; inverted data model; multidimensional data; Algebra; Analytical models; Asia; Computational modeling; Databases; Software (ID#: 16-9834)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332592&isnumber=7332527
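To make the MapReduce-on-OLAP pairing above concrete, here is an in-process miniature of the paradigm applied to a roll-up (group-by-and-sum), the kind of basic OLAP algebra the paper benchmarks. The sales data and functions are illustrative, not from the paper:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal in-process MapReduce: shuffle mapper output by key,
    then apply the reducer to each key's values."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Roll-up of sales by region, an OLAP-style aggregation:
sales = [("east", 10), ("west", 5), ("east", 7)]
totals = map_reduce(sales,
                    mapper=lambda r: [(r[0], r[1])],
                    reducer=lambda k, vs: sum(vs))
```

A real deployment distributes the shuffle and reduce phases across cluster nodes; the dataflow is the same.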
D. Puthal, B. P. S. Sahoo, S. Mishra and S. Swain, “Cloud Computing Features, Issues, and Challenges: A Big Picture,” Computational Intelligence and Networks (CINE), 2015 International Conference on, Bhubaneshwar, 2015, pp. 116-123. doi: 10.1109/CINE.2015.31
Abstract: Since the phenomenon of cloud computing was proposed, there has been unceasing interest in research on it across the globe. Cloud computing has been seen as one of the technologies driving the next-generation computing revolution and has rapidly become the hottest topic in the field of IT. This fast move towards cloud computing has fuelled concerns on points fundamental to the success of information systems: communication, virtualization, data availability and integrity, public auditing, scientific application, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to pinpoint the current open challenges and issues of cloud computing. The discussion is three-fold: first, we discuss the cloud computing architecture and the numerous services it offers. Secondly, we highlight several security issues in cloud computing based on its service layers. Then we identify several open challenges from the cloud computing adoption perspective and their future implications. Finally, we highlight the platforms available in the current era for cloud research and development.
Keywords: cloud computing; research and development; software architecture; IT; cloud computing architecture; cloud research and development; data availability; data integrity; information security; information systems; public auditing; scientific application; service layer; virtualization; Bandwidth; Cloud computing; Computational modeling; Educational institutions; Security; Servers; Software as a service; Cloud security; Data integrity; Public auditing; Virtualization; Workflow scheduling (ID#: 16-9835)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053814&isnumber=7053782
V. K. Pant, J. Prakash and A. Asthana, “Three Step Data Security Model for Cloud Computing Based on RSA and Steganography,” Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, Noida, 2015, pp. 490-494. doi: 10.1109/ICGCIoT.2015.7380514
Abstract: Cloud computing is based on networks and computer applications. In the cloud, data sharing is an important activity. Small, medium, and big organizations use the cloud to store their data at minimum rental cost. At present, clouds prove their importance in terms of resource and network sharing, application sharing and data storage utility. Hence, most customers want to use cloud facilities and services, so security is the most essential concern from the customer's point of view as well as the vendor's. There are several issues that need attention with respect to data services, data security or privacy, and data management. The security of stored data and information is one of the most crucial problems in cloud computing. Using good access-control protection techniques, we can resolve many security problems, though managing the privacy and security of information on the web remains highly challenging. This paper describes how to secure data and information in a cloud environment during data sharing or storage, using our proposed cryptography and steganography technique.
Keywords: cloud computing; public key cryptography; steganography; RSA; access control; application sharing; cloud computing; cloud data sharing; cloud facilities; computer applications; cryptography; data security model; data storage utility; network sharing; Computers; Cryptography; Decryption; Encryption; RSA; Steganography; data security (ID#: 16-9836)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380514&isnumber=7380415
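The steganography half of the three-step model above typically means least-significant-bit embedding: hiding (already encrypted) payload bits in the low bits of a cover file. A minimal sketch, assuming a byte-array cover (the paper's exact embedding is not specified in the abstract):

```python
def embed_lsb(cover, payload):
    """Hide payload bytes in the least-significant bits of cover bytes."""
    bits = "".join(f"{b:08b}" for b in payload)
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | int(bit)  # overwrite lowest bit
    return bytes(stego)

def extract_lsb(stego, n_bytes):
    """Recover n_bytes of payload from the cover's low bits."""
    bits = "".join(str(b & 1) for b in stego[:n_bytes * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

In the paper's three-step model, the payload would first be RSA-encrypted, so even an attacker who suspects LSB embedding recovers only ciphertext.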
L. Zhang, Z. Wang, Y. Mu and Y. Hu, “Fully Secure Hierarchical Inner Product Encryption for Privacy Preserving Keyword Searching in Cloud,” 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, 2015, pp. 449-453. doi: 10.1109/3PGCIC.2015.63
Abstract: Cloud computing provides dynamically scalable resources provisioned as a service over networks. But an untrustworthy Cloud Service Provider (CSP) poses a big obstacle to the adoption of cloud services, since the CSP can access data in the cloud without the data owner's permission. Hierarchical Inner Product Encryption (HIPE) covers all applications of anonymous encryption, fully private communication and search on encrypted data, which provide a trusted data access control policy for the CSP. However, the existing works achieve only either selectively attribute-hiding or adaptively attribute-hiding under some strong assumptions in the public key setting. To overcome this, a novel HIPE in the private key setting is presented. The new scheme achieves both full security and a security reduction under the natural Decisional Linear (DLIN) assumption in the standard model.
Keywords: authorisation; cloud computing; data privacy; private key cryptography; public key cryptography; CSP; DLIN assumption; HIPE; adaptively attribute-hiding; anonymous encryption; fully private communication; fully secure hierarchical inner product encryption; hierarchical inner product encryption; natural assumption-decisional linear assumption; privacy preserving keyword searching; private key setting; public key setting; selectively attribute-hiding; standard model; trusted data access control policy; untrustworthy cloud service provider; Cloud computing; Computational modeling; Encryption; Standards; Cloud security; Searching encryption; the DLIN assumption (ID#: 16-9837)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424606&isnumber=7424499
R. Archana, C. Mythili and S. N. Kalyani, “Security Mechanism for Android Cloud Computing,” Communication Technologies (GCCT), 2015 Global Conference on, Thuckalay, 2015, pp. 133-138. doi: 10.1109/GCCT.2015.7342639
Abstract: Today, Android devices face many resource challenges, such as battery life, storage, and bandwidth. Cloud computing offers advantages to users by allowing them to use infrastructures, platforms, and software from cloud providers elastically, in an on-demand fashion, at low cost. Android Cloud Computing (ACC) provides Android users with data storage and processing services in the cloud, obviating the need for a powerful device configuration (e.g., CPU speed, memory, capacity), since all resource-intensive computing can be performed in the cloud. Nowadays more and more commercial applications are shifting to Android, and security has become a major issue. With the increasing use of mobile Android devices, the requirement for cloud computing on Android arises. This paper briefly reviews how ACC is emerging in the real world and discusses the important security issues it raises. A security mechanism based on onion routing is proposed, which will secure data in Android Cloud Computing.
Keywords: cloud computing; security of data; smart phones; ACC; Android cloud computing; Android devices; data processing services; data storage services; device configuration; onion routing; resource-intensive computing; security mechanism; Androids; Cloud computing; Humanoid robots; Organizations; Security; Servers; Smart phones; Android; Android Cloud Computing; Android Users; Cloud; Computing; Onion Routing; Security (ID#: 16-9838)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342639&isnumber=7342608
S. H. Khan and M. A. Akbar, “Multi-Factor Authentication on Cloud,” Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference on, Adelaide, SA, 2015, pp. 1-7. doi: 10.1109/DICTA.2015.7371288
Abstract: Due to recent security infringement incidents involving single-factor authentication services, there is an inclination towards the use of multi-factor authentication (MFA) mechanisms. These MFA mechanisms should be available on modern hand-held computing devices like smart phones, due to their large share of the computing device market. Moreover, their high social acceptability and ubiquitous nature have attracted enterprises to offer services on modern hand-held devices. The big challenge for these enterprises is to ensure the security and privacy of users. To address this issue, we have implemented a verification system that combines a human inherence factor (handwritten signature biometrics) with the standard knowledge factor (user-specific passwords) to achieve a high level of security. The major computational load of this task is shifted to a cloud-based application server, so that a platform-independent user verification service with ubiquitous access becomes possible. Custom applications are built for both iOS- and Android-based devices, which are linked with the cloud-based two-factor authentication (TFA) server. The system was tested on-the-run by a diverse group of users, and 98.4% signature verification accuracy was achieved.
Keywords: cloud computing; data privacy; message authentication; ubiquitous computing; Android based device; cloud based application server; hand-held computing device; handwritten signature biometrics; human inherence factor; iOS; multifactor authentication; security infringement; signature verification; single factor authentication service; smart phone; two factor authentication server; user privacy; user security; Authentication; Biometrics (access control); Hidden Markov models; Performance evaluation; Servers; Smart phones (ID#: 16-9839)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371288&isnumber=7371204
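The knowledge-plus-inherence combination described in the abstract above can be illustrated with a generic two-factor check: a salted PBKDF2 hash for the knowledge factor (the password), with an RFC 4226 HOTP one-time code standing in for the second factor. This is a minimal sketch, not the paper's system (the paper's second factor is handwritten-signature biometrics, which is not implemented here); all function names are illustrative.

```python
import hashlib
import hmac
import os
import struct

def hash_password(password: str, salt: bytes = None) -> tuple:
    # Knowledge factor: store a salted PBKDF2 hash, never the password itself.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # Stand-in second factor: HOTP one-time code (RFC 4226 dynamic truncation).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def two_factor_login(password, salt, digest, secret, counter, submitted_code) -> bool:
    # Both factors must verify for the login to succeed.
    return verify_password(password, salt, digest) and hmac.compare_digest(
        hotp(secret, counter), submitted_code)
```

As in the paper's architecture, the expensive verification steps could run on a cloud-side server so that the hand-held client only collects the factors.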
Y. H. Tung, S. S. Tseng and Y. Y. Kuo, “A Testing-Based Approach to SLA Evaluation on Cloud Environment,” Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, Busan, 2015, pp. 495-498. doi: 10.1109/APNOMS.2015.7275375
Abstract: A service level agreement (SLA) is a negotiated agreement between consumers and service providers to guarantee the quality of the negotiated service level; many companies therefore use contracts to specify the desired service level. An SLA may specify levels of availability, serviceability, performance, operation, security, or other attributes of the service. However, because monitoring performance requires substantial human effort, how to evaluate the SLA in service delivery becomes an important issue. To evaluate SLAs automatically, this paper proposes a testing-based SLA evaluation approach based upon the ISO/IEC 25010 quality model, which contains eight characteristics: functional suitability, performance, compatibility, usability, reliability, security, maintainability, and portability. Cloud computing has emerged as a technology for addressing the computational complexity of enterprise information systems. By adopting features of cloud computing, we have implemented a prototype system that integrates the open-source software Jenkins as a controller and other third-party software as testers, to automate SLA evaluation processes according to the testing-based approach. Experiments were conducted to evaluate the performance of our approach and prototype system. The results indicate that our prototype system can provide quality, stable service.
Keywords: IEC standards; ISO standards; business data processing; cloud computing; computational complexity; contracts; public domain software; IEC 25010 quality model; ISO 25010 quality model; Jenkins; cloud environment; consumers; enterprise information system computational complexity improvement; open-source software; performance monitoring; service level agreement; service providers; testing-based SLA quality evaluation approach; Cloud computing; IEC Standards; ISO Standards; Prototypes; Security; Testing; ISO/IEC 25010; quality indicators; testing-based (ID#: 16-9840)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275375&isnumber=7275336
Y. Jin, C. Tian, H. He and F. Wang, “A Secure and Lightweight Data Access Control Scheme for Mobile Cloud Computing,” Big Data and Cloud Computing (BDCloud), 2015 IEEE Fifth International Conference on, Dalian, 2015, pp. 172-179. doi: 10.1109/BDCloud.2015.57
Abstract: By moving data storage and processing from lightweight mobile devices to powerful and centralized computing platforms located in clouds, Mobile Cloud Computing (MCC) can greatly enhance the capability of mobile devices. However, when data owners outsource sensitive data to mobile cloud for sharing, the data is outside of their trusted domain and can potentially be granted to untrusted parties which include the service providers. Data security and flexible access control have become the most pressing demands for MCC. To address this issue, we design a secure and lightweight data access control scheme based on Ciphertext-Policy Attribute-based Encryption (CP-ABE) algorithm, which can protect the confidentiality of outsourced data and provide fine-grained data access control in MCC. The scheme can obviously improve the overall system performance by greatly reducing the computation overheads in encryption and decryption operations, provide flexible and expressive data access control policy, and meanwhile enable data owners to securely outsource most of the computation overheads at mobile devices to cloud servers. The security and performance evaluation show that our scheme is secure, highly efficient and well suited for lightweight mobile devices.
Keywords: authorisation; cloud computing; cryptography; mobile computing; storage management; trusted computing; CP-ABE algorithm; MCC; centralized computing platform; ciphertext-policy attribute-based encryption algorithm; computation overhead; data access control policy; data confidentiality; data processing; data security; data storage; decryption operation; encryption operation; fine-grained data access control; flexible access control; lightweight data access control scheme; lightweight mobile device; mobile cloud computing; outsourced data; overall system performance; performance evaluation; sensitive data; trusted domain; untrusted party; Access control; Algorithm design and analysis; Encryption; Mobile communication; Mobile handsets; Servers; access control; attribute-based encryption (ID#: 16-9841)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310735&isnumber=7310694
Z. Cui, H. Lv, C. Yin, G. Gao and C. Zhou, “Efficient Key Management for IOT Owner in the Cloud,” Big Data and Cloud Computing (BDCloud), 2015 IEEE Fifth International Conference on, Dalian, 2015, pp. 56-61. doi: 10.1109/BDCloud.2015.40
Abstract: An IOT (Internet of Things) owner may not want their sensitive data to be public in the cloud. However, the client operated by the IOT owner may be too lightweight to provide encryption/decryption services. To address this issue, we propose a novel solution that minimizes the access control cost for the IOT owner. First, we present a security model for IOT with minimal cost at the IOT owner's client, in which the encryption/decryption is transferred from the client to the cloud. Second, we propose an access control model that minimizes the key management cost for the IOT owner. Third, we provide an authorization update method to minimize this cost dynamically. In our method, the sensitive data from the IOT owner is available only to authorized users. Each IOT owner needs to manage only a single password, with which the owner can always manage his/her sensitive data and authorizations, no matter how the authorization policy changes. Experimental results show that our approach significantly outperforms most existing methods in key management efficiency for the IOT owner.
Keywords: Internet of Things; authorisation; cloud computing; cryptography; IoT; access control cost; authorization update method; cloud computing; decryption service; encryption service; key management cost; password management; security model; Authorization; Cloud computing; Encryption; Servers; Authorization update; IOT owner key management; Internet of things; Sensitive data (ID#: 16-9842)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310716&isnumber=7310694
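The single-password idea described above can be sketched, under assumptions, as a two-level key derivation: the owner's one password is stretched into a master key, and an independent key is derived per data object. This is an illustration of the general pattern, not the paper's actual scheme; the function names and parameters are hypothetical.

```python
import hashlib
import hmac

def master_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # Stretch the owner's single password into a master key with PBKDF2.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def object_key(mk: bytes, object_id: str) -> bytes:
    # Derive an independent key per data object via HMAC; compromising one
    # object key does not reveal the master key or the other object keys.
    return hmac.new(mk, ("object:" + object_id).encode(), hashlib.sha256).digest()
```

Under this pattern the owner stores nothing but the password: every object key can be re-derived on demand, and an authorization change can be handled by re-deriving under a new object id or version.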
H. Zhao and F. Lei, “A Novel Video Authentication Scheme with Secure CS-Watermark in Cloud,” Multimedia Big Data (BigMM), 2015 IEEE International Conference on, Beijing, 2015, pp. 294-299. doi: 10.1109/BigMM.2015.12
Abstract: Secure data processing is an important issue for video authentication in a cloud environment. This research presents a novel scheme to protect the integrity of video content under common video data operations by using a semi-fragile CS-watermark technology. In the proposed scheme, the CS-watermark data are generated from block compressed sensing (CS) measurements, which rely on knowledge of the measurement matrix used for sensing the I-frame's DCT coefficients. Our analysis and results indicate that the CS-watermark data can accurately verify the integrity of the original video content and offer higher security than other watermarking methods.
Keywords: authorisation; cloud computing; compressed sensing; discrete cosine transforms; video watermarking; DCT coefficients; block compressed sensing measurements; cloud environment; data secure processing; measurement matrix; secure CS-watermark; video authentication scheme; video content integrity; video data operations; Authentication; Discrete cosine transforms; Intellectual property; Size measurement; Sparse matrices; Watermarking; compressed sensing; measurement matrix; video data authentication; watermark (ID#: 16-9843)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153903&isnumber=7153824
X. Liu, Y. Xia, Y. Xiang, M. M. Hassan and A. Alelaiwi, “A Secure and Efficient Data Sharing Framework with Delegated Capabilities in Hybrid Cloud,” Security and Privacy in Social Networks and Big Data (SocialSec), 2015 International Symposium on, Hangzhou, 2015, pp. 7-14. doi: 10.1109/SocialSec2015.13
Abstract: The hybrid cloud is a widely used cloud architecture in large companies that outsource data to the public cloud while still supporting various clients such as mobile devices. However, such public cloud data outsourcing raises serious security concerns, such as how to preserve data confidentiality and how to regulate access policies to the data stored in the public cloud. To address this issue, we design a hybrid cloud architecture that supports secure and efficient data sharing, even with resource-limited devices, where the private cloud serves as a gateway between the public cloud and the data user. Under this architecture, we propose an improved construction of attribute-based encryption that can delegate encryption/decryption computation, achieving flexible access control in the cloud and privacy preservation in data utilization even on mobile devices. Extensive experiments show the scheme further decreases the computational cost and space overhead at the user side, which makes it efficient for users with limited mobile devices. In delegating most of the encryption/decryption computation to the private cloud, the user does not disclose any information to the private cloud. We also consider communication security: when frequent attribute revocation occurs, our scheme can resist attacks between the private cloud and the data user by employing anonymous key agreement.
Keywords: cloud computing; cryptography; data privacy; mobile computing; outsourcing; peer-to-peer computing; software architecture; anonymous key agreement; attribute-based encryption; data confidentiality; data security; data sharing framework; encryption/decryption computation; hybrid cloud architecture; mobile device; Cloud computing; Data privacy; Encryption; Mobile handsets; Outsourcing; anonymous key agreement protocol; hybrid cloud (ID#: 16-9844)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371893&isnumber=7371823
S.-J. Yang and I.-C. Cheng, “Design Issues of Trustworthy Cloud Platform Based on IP Monitoring and File Risk,” Big Data and Cloud Computing (BDCloud), 2015 IEEE Fifth International Conference on, Dalian, 2015, pp. 110-117. doi: 10.1109/BDCloud.2015.52
Abstract: With the rising popularity of Web applications and cloud computing technology, a secure cloud computing environment is a principal concern. Currently, enterprises often rely on their own maintenance and operation of cloud platforms, and small and medium companies cannot effectively reduce manpower and information-security costs, which has affected the willingness of enterprises to use cloud services. In view of this, the cloud service provider (CSP) must be ready to support a full range of security services, such as firewalls, intrusion detection systems, and VPNs, in order to provide good quality of service. The purpose of this paper is to explore IP address monitoring and employ the concept of file risk to design a Trustworthy Cloud Platform (TWCP). The TWCP provides security risk assessment models for monitoring security and trust for cloud multi-tenants. The platform can also provide value-added audit reports in a virtual cloud environment, which is essential to customer loyalty and retention. In addition, each tenant's virtual machine records all abnormal file names and attributes, and all illegal IP addresses, daily; these file lists and IP events are imported into the CSP's database. The proposed TWCP then performs assessment tasks, analyzing daily log reports and e-mailing the security risk status to every tenant. Finally, simulations under IaaS indicate that the TWCP can obtain a higher IP monitoring ratio and a lower file risk value, giving all tenants a more trustable platform that enhances overall service quality and operational efficiency.
Keywords: cloud computing; security of data; system monitoring; trusted computing; virtual machines; CSP; IP address monitor; IP monitoring ratio; IaaS; TWCP; VPN; Web applications; cloud computing environment security; cloud computing technology; cloud multitenants; cloud service provider; daily log report analysis; design issues; e-mail; enterprise cloud platforms; file risk concept; firewalls; intrusion detection systems; manpower information security costs; operational efficiency; quality of service; security monitoring; security services; trust monitoring; trustworthy cloud platform; value-added audit reports; virtual cloud environment; virtual machine; Algorithm design and analysis; Cloud computing; Companies; IP networks; Monitoring; Risk management; Security; Cloud Services; File Risk Value; IP Monitoring; TWCP (ID#: 16-9845)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310725&isnumber=7310694
B. Yang, G. Song, Y. Zheng and Y. Wu, “QoSC: A QoS-Aware Storage Cloud Based on HDFS,” Security and Privacy in Social Networks and Big Data (SocialSec), 2015 International Symposium on, Hangzhou, 2015, pp. 32-38. doi: 10.1109/SocialSec2015.14
Abstract: Storage QoS is a key issue for a storage cloud infrastructure. This paper presents QoSC, a QoS-aware storage cloud for storing massive data over the dynamic network, based on the Hadoop distributed file system (HDFS). QoSC employs a data redundancy policy based on recovery volumes and a QoS-aware data placement strategy. We consider the QoS of a storage node as a combination of the transfer bandwidth, the availability of service, the workload (CPU utilization), and the free storage space. We have deployed QoSC on the campus network of Zhejiang University, and have conducted a group of experiments on file storage and retrieval. The experimental results show that QoSC improves the performance of file storage and retrieval and balances the workload among DataNodes, by being aware of QoS of DataNodes.
Keywords: cloud computing; data handling; distributed databases; parallel processing; quality of service; HDFS; Hadoop distributed file system; QoS-aware storage cloud; QoSC infrastructure; data redundancy policy; free storage space; quality of service; workload CPU utilization; Bandwidth; Cloud computing; Distributed databases; Extraterrestrial measurements; Quality of service; Redundancy; Distributed Storage; Hadoop; QoS; Reputation (ID#: 16-9846)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371897&isnumber=7371823
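The QoS notion in the abstract above (a combination of transfer bandwidth, service availability, CPU workload, and free storage space) could be realized as a weighted score used to rank DataNodes for placement. The weights, field names, and assumption that all metrics are normalized to [0, 1] are illustrative here, not the paper's calibration.

```python
def qos_score(node, weights=(0.4, 0.3, 0.2, 0.1)):
    # Combine the four QoS signals named in the abstract; CPU load counts
    # against a node, the other three in its favor. Weights are assumptions.
    w_bw, w_avail, w_load, w_space = weights
    return (w_bw * node["bandwidth"] + w_avail * node["availability"]
            - w_load * node["cpu_load"] + w_space * node["free_space"])

def place_replicas(nodes, k):
    # QoS-aware placement: store a new block's k replicas on the
    # k highest-scoring storage nodes.
    return sorted(nodes, key=qos_score, reverse=True)[:k]
```

Recomputing the scores periodically from fresh node reports is what lets such a strategy balance workload among DataNodes over time.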
Y. Tao, H. Dai, B. Sun, S. Zhao, M. Qiu and Z. Yu, “A Head Record Cache Structure to Improve the Operations on Big Files in Cloud Storage Servers,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 46-51. doi: 10.1109/HPCC-CSS-ICESS.2015.231
Abstract: Caching and prefetching are now widely used in disk storage systems to speed up data access. Although they are well-accepted technologies in the storage field, they are unsuitable for cloud storage systems, because most requests in the cloud are for large files, which leads to undesirable hit rates and speed performance. To solve this issue, an improved head record cache (HRC) structure model is proposed in this paper, based on a reshuffled disk cache structure and prefetching technologies, aiming at improving read performance in a cloud environment. Compared to previous research, this model has better read performance in a cloud environment, since HRC increases the hit rate. The experimental results demonstrate that the system has 18% better read performance than a traditional cloud storage system.
Keywords: cache storage; cloud computing; HRC structure model; cloud environment; cloud storage server; disk cache structure; head record cache structure; head record cache structure model; Cloud computing; Electronic mail; File systems; Indexes; Magnetic heads; Prefetching; Servers (ID#: 16-9847)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336142&isnumber=7336120
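The head-record idea above (cache only the head segment of each large file, since reads usually start there) can be approximated with a simple LRU structure. This sketch assumes plain LRU eviction and is not the paper's HRC reshuffling model; the class and method names are illustrative.

```python
from collections import OrderedDict

class HeadRecordCache:
    """Cache the head segment of each large file, assuming sequential
    reads begin at the file head (an LRU illustration, not the paper's HRC)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._records = OrderedDict()  # file_id -> cached head bytes

    def get(self, file_id):
        if file_id not in self._records:
            return None  # miss: caller must fetch from the storage backend
        self._records.move_to_end(file_id)  # mark as most recently used
        return self._records[file_id]

    def put(self, file_id, head_bytes):
        self._records[file_id] = head_bytes
        self._records.move_to_end(file_id)
        if len(self._records) > self.capacity:
            self._records.popitem(last=False)  # evict least recently used
```

A hit on the head record lets the server start streaming immediately while prefetching the rest of the file, which is where the hit-rate improvement the abstract reports would come from.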
![]() |
Coding Theory and Security 2015 |
Coding theory examines the properties of codes and their aptness for a specific application. For the Science of Security, coding theory is relevant to compositionality, resilience, cryptography, and metrics. The work cited here was presented in 2015.
H. Cai, H. Liu, Q. Yuan, M. Steinebach and X. Wang, “A Novel Image Secret Sharing Scheme With Meaningful Shares,” Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, South Brisbane, QLD, 2015, pp. 1767-1771. doi: 10.1109/ICASSP.2015.7178274
Abstract: In this paper a novel (t, n) threshold image secret sharing scheme is proposed. Based on the close connection between secret sharing and coding theory, a coding method over GF(2^m) is applied in our scheme instead of the classical Lagrange interpolation method, in order to deal with the fidelity loss problem in recovery. All generated share images are meaningful, and the size of each share image is the same as the secret image. The analysis proves our scheme is perfect and ideal and has high security. The experimental results demonstrate that all shares have high quality and the secret image can be recovered exactly.
Keywords: cryptography; image coding; coding method; image secret sharing scheme; interpolation method; meaningful shares; Encoding; Encryption; Image coding; Interpolation; Visualization; coding theory; image encryption; image secret sharing; multimedia security (ID#: 16-10949)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178274&isnumber=7177909
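For contrast with the GF(2^m) coding approach above, the classical Lagrange-interpolation method it replaces is standard Shamir (t, n) sharing: a random degree-(t-1) polynomial hides the secret as its constant term, and any t shares recover it. A minimal sketch over a prime field (the paper itself works over GF(2^m), not a prime field):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def make_shares(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with the secret as constant term;
    # each share is a point (x, f(x)) on that polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term
    # from any t shares; fewer than t reveal nothing about the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

The fidelity-loss problem the paper addresses arises when pixel values are shared this way over a prime field smaller than the pixel range; replacing interpolation with coding over GF(2^m) sidesteps it.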
X. Guang, J. Lu and F. W. Fu, “Variable-Security-Level Secure Network Coding,” Information Theory Workshop - Fall (ITW), 2015 IEEE, Jeju, 2015, pp. 34-38. doi: 10.1109/ITWF.2015.7360729
Abstract: In network coding theory, when wiretapping attacks occur, secure network coding is introduced to prevent information from being leaked to adversaries. In practical network communications, security constraints vary over time. How to deal effectively with information transmission and information security simultaneously under different security levels is introduced in this paper as the variable-security-level secure network coding problem. To solve this problem efficiently, we propose the concept of local-kernel-preserving variable-security-level secure linear network codes (SLNCs), which have the same local encoding kernel at each internal node. We further present an approach to construct such a family of SLNCs and give an algorithm for its efficient implementation. This approach saves storage space at both the source node and internal nodes, as well as network resources and time. An example is given to illustrate our constructive algorithm. Finally, the performance of the proposed algorithm is analyzed, including the field size and the computational and storage complexities.
Keywords: cryptography; network coding; information security; information transmission; internal node; internal nodes; linear network codes; local kernel preserving variable security; network coding theory; secure constraints; source node; storage space; variable security level secure network coding; wiretapping attacks; Complexity theory; Conferences; Encoding; Information rates; Kernel; Network coding (ID#: 16-10950)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7360729&isnumber=7360717
A. Sghaier, M. Zghid and M. Machhout, “Proposed Efficient Arithmetic Operations Architectures for Hyper Elliptic Curves Cryptosystems (HECC),” Systems, Signals & Devices (SSD), 2015 12th International Multi-Conference on, Mahdia, 2015, pp. 1-5. doi: 10.1109/SSD.2015.7348108
Abstract: Because they offer several benefits over other public-key cryptosystems such as RSA, including a comparable level of security with a smaller key size, much effort has been devoted to making Hyper Elliptic Curve Cryptosystems (HECC) more practical. For this reason, HECC can be used in embedded environments where speed, energy, power, chip area, and memory are constrained. However, HEC relies on a complex mathematical background, so it is difficult to implement in hardware. Hyper elliptic curves can be defined over the real numbers, the complex numbers, or any other field, so we need arithmetic operations (addition, subtraction, multiplication, and division), which have many applications in cryptography and coding theory. Note that the overall performance of HECC is mainly determined by the speed of these arithmetic operations. Most algorithms that implement these operations use polynomial coefficients in base 2 and are defined over finite fields, but the problem is more clearly viewed, and simpler to present, over the real field. Arithmetic operations are based on the complexity of a mathematical problem, and to obtain an optimized architecture we need to optimize the arithmetic operations. In this paper we describe a high-performance, area-efficient implementation of arithmetic operations in HECC over the real field, and a new design methodology is presented. The proposed operation architectures are implemented on FPGA.
Keywords: digital arithmetic; embedded systems; field programmable gate arrays; polynomials; public key cryptography; FPGA; HECC; arithmetic operation architectures; coding theory; cryptography; embedded environments; finite fields; hyperelliptic curves cryptosystem; mathematical problem; polynomial coefficients; Decision support systems; Elliptic curve cryptography; Elliptic curves; Jacobian matrices; Polynomials; Yttrium; Discrete Logarithm Problem (DLP); Elliptic Curve (EC); Hyper Elliptic Curve Cryptosystems (HECC); HyperElliptic Curve (HEC); Jacobian group; MA; Rivest, Shamir and Adelman (RSA) (ID#: 16-10951)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348108&isnumber=7348090
Q. Zhang, S. Kadhe, M. Bakshi, S. Jaggi and A. Sprintson, “Talking Reliably, Secretly, and Efficiently: A “Complete” Characterization,” Information Theory Workshop (ITW), 2015 IEEE, Jerusalem, 2015, pp. 1-5. doi: 10.1109/ITW.2015.7133143
Abstract: We consider reliable and secure communication of information over a multipath network. A transmitter Alice sends messages to the receiver Bob in the presence of a hidden adversary Calvin. The adversary Calvin can both eavesdrop and jam on (possibly non-identical) subsets of transmission links. The goal is to communicate reliably (the intended receiver can understand the messages) and secretly (the adversary cannot understand the messages). Two kinds of jamming, additive and overwrite, are considered. Additive jamming corresponds to the wireless network model, while overwrite jamming corresponds to the wired network model and storage systems. The multipath network consists of C parallel links. Calvin can both jam and eavesdrop on any z_io of the links, can eavesdrop on (but not jam) any z_i/o of the links, and can jam (but not eavesdrop on) any z_o/i of the links. We present the first “complete” information-theoretic characterization of the maximum achievable rate as a function of the number of links that can be jammed and/or eavesdropped, for equal and unequal link-capacity multipath networks under additive and overwrite jamming in the large-alphabet regime. Our achievability and converse proofs require a non-trivial combination of information-theoretic and coding-theoretic ideas, and our achievability schemes are computationally efficient. PHaSE-Saving techniques are used for achievability, while a “stochastic” singleton bound is obtained for the converse.
Keywords: jamming; network coding; radio networks; telecommunication security; C parallel links; PHaSE-Saving techniques; additive jamming; coding theory; communication security; first complete information-theoretic characterization; hidden adversary Calvin; overwrite jamming; stochastic singleton bound; storage systems; transmission links; unequal link capacity multipath networks; wired network model; wireless network model; Additives; Computer hacking; Computers; Decoding; Jamming; Reliability theory (ID#: 16-10952)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133143&isnumber=7133075
T. T. Luong, N. N. Cuong and L. T. Dung, “A New Statement About Direct Exponent of an MDS Matrix in Block Ciphers,” Knowledge and Systems Engineering (KSE), 2015 Seventh International Conference on, Ho Chi Minh City, 2015, pp. 340-343. doi: 10.1109/KSE.2015.68
Abstract: MDS codes have been studied for a long time in the theory of error-correcting codes and have been applied widely in cryptography. Some authors have studied and proposed methods for constructing MDS matrices which are not based on MDS codes. Some MDS matrix transformations have been studied, and the direct exponent is such a transformation. In this paper we present new results on the direct exponent transformation, showing the number k* (the cycle) for which applying the direct p-exponent transformation to an MDS matrix k* times results in the original MDS matrix. In addition, the results are shown to have important applications in block ciphers.
Keywords: cryptography; matrix algebra; MDS code; MDS matrix transformations; block ciphers; direct exponent transformation; direct p-exponent; error-correcting code theory; k*-number cycle; Ciphers; Error correction codes; Information security; Linear codes; Resistance; Direct Exponent Matrix; Direct Square Matrix; MDS Matrix (ID#: 16-10953)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371809&isnumber=7371739
T. T. Luong, N. N. Cuong and L. T. Dung, “The Preservation of the Coefficient of Fixed Points of an MDS Matrix under Direct Exponent Transformation,” Advanced Technologies for Communications (ATC), 2015 International Conference on, Ho Chi Minh City, 2015, pp. 111-116. doi: 10.1109/ATC.2015.7388301
Abstract: MDS (Maximum Distance Separable) codes have been studied for a long time in the theory of error-correcting codes and have been applied widely in cryptography. Some authors have studied and proposed methods for constructing MDS matrices which are not based on MDS codes. Some MDS matrix transformations have been studied, and the direct exponent is such a transformation. In this paper, we present new results on the preservation of the number of fixed points of an MDS matrix under the direct exponent transformation. In addition, important applications of these results in block ciphers are shown.
Keywords: block codes; cryptography; error correction codes; matrix algebra; MDS matrix fixed point coefficient preservation; block ciphers; direct exponent transformation; error-correcting code theory; maximum distance separable code; Ciphers; Error correction codes; Information security; Matrices; Resistance; Direct Exponent Matrix; Direct Square Matrix; MDS Matrix (ID#: 16-10954)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7388301&isnumber=7388293
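The defining property behind both MDS-matrix papers above is that every square submatrix of an MDS matrix is nonsingular. For small matrices this can be checked exhaustively; the sketch below works over a prime field GF(p) for simplicity, whereas block ciphers typically use GF(2^m), and the function names are illustrative.

```python
from itertools import combinations

def det_mod_p(m, p):
    # Determinant over GF(p) by Gaussian elimination.
    m = [row[:] for row in m]
    n = len(m)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] % p), None)
        if pivot is None:
            return 0  # singular: no nonzero pivot in this column
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det  # row swap flips the sign
        det = det * m[col][col] % p
        inv = pow(m[col][col], -1, p)
        for r in range(col + 1, n):
            factor = m[r][col] * inv % p
            for c in range(col, n):
                m[r][c] = (m[r][c] - factor * m[col][c]) % p
    return det % p

def is_mds(m, p):
    # A square matrix is MDS iff every square submatrix is nonsingular.
    n = len(m)
    for size in range(1, n + 1):
        for rows in combinations(range(n), size):
            for cols in combinations(range(n), size):
                sub = [[m[r][c] for c in cols] for r in rows]
                if det_mod_p(sub, p) == 0:
                    return False
    return True
```

Cauchy matrices give easy positive examples: [[6, 4], [4, 3]] over GF(11) (entries 1/(x_i + y_j) with x = (0, 1), y = (2, 3)) is MDS, while any matrix containing a zero entry fails immediately on a 1x1 submatrix.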
A. Motamedi, M. Najafi and N. Erami, “Parallel Secure Turbo Code for Security Enhancement in Physical Layer,” 2015 Signal Processing and Intelligent Systems Conference (SPIS), Tehran, 2015, pp. 179-184. doi: 10.1109/SPIS.2015.7422336
Abstract: Turbo codes have been an important subject in coding theory since 1993. This code has a low Bit Error Rate (BER), but decoding complexity and delay are major challenges. On the other hand, considering the complexity and delay of separate blocks for coding and encryption, if these processes are combined, the security and reliability of the communication system can both be guaranteed. In this paper a secure decoding algorithm running in parallel on General-Purpose Graphics Processing Units (GPGPU) is proposed. This is the first prototype of a fast, parallel Joint Channel-Security Coding (JCSC) system. Despite the added encryption process, the algorithm maintains the desired BER and increases decoding speed. We considered several techniques for parallelism: (1) distributing the decoding load of a code word among multiple cores, (2) decoding several code words simultaneously, (3) using protection techniques to prevent performance degradation. We also propose two kinds of optimization to increase decoding speed: (1) improved memory access, (2) the use of new GPU features such as concurrent kernel execution and advanced atomics to compensate for buffering latency.
Keywords: channel coding; decoding; error statistics; graphics processing units; turbo codes; bit error rate; buffering latency; code words; communication system; concurrent kernel execution; distribute decoding load; general-purpose graphics processing units; joint channel-security coding system; memory access improvement; multiple cores; parallel secure turbo code; physical layer; secure decoding algorithm; security enhancement; Bit error rate; Decoding; Graphics processing units; Parallel processing; Security; Throughput; Turbo codes; CUDA; GPU; Turbo code; parallelism; security (ID#: 16-10955)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7422336&isnumber=7422296
C. Yin, H. Lv, Z. Cui, T. Li, L. Rao and Z. Wang, “ICRS: An Optimized Algorithm to Improve Performance in Distributed Storage System,” Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2015 7th International Conference on, Hangzhou, 2015, pp. 561-564. doi: 10.1109/IHMSC.2015.219
Abstract: Erasure codes, such as Reed-Solomon (RS) and CRS codes, are being extensively deployed in distributed storage systems since they offer significantly higher reliability than data replication methods at much lower storage overheads. But RS and CRS codes impose a huge burden on a system's performance during encoding and decoding, even as they provide significant savings in storage space. This paper puts forward an optimized algorithm named ICRS (Improved CRS) based on erasure coding technology, which is committed to improving the security and the utilization of storage space. By studying the high reliability and space-saving rate of existing coding technology, we imported a coding mechanism into distributed storage systems. We have verified the ICRS algorithm by theoretical analysis and simulation tests. Through theoretical analysis, we conclude that the ICRS algorithm can improve the performance of encoding and decoding because it shortens computation time. We apply the ICRS algorithm to our storage system model named Robot to test the performance. At the same time, we compare RS codes and CRS codes in Robot. The test results show that decoding speed rises to nearly twice the previous serial decoding speed.
Keywords: decoding; encoding; performance evaluation; reliability; storage allocation; CRS code; ICRS; RS code; Reed-Solomon code; data replication method; distributed storage system; improved CRS; robot; space saving rate; storage system model; Algorithm design and analysis; Decoding; Distributed databases; Encoding; Reliability; Robots; Servers; ICRS; performance; reliability (ID#: 16-10956)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335035&isnumber=7334774
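The erasure-recovery idea behind RS/CRS-style codes can be sketched with the simplest possible instance: a single XOR parity block. RS and CRS generalize this to tolerate multiple erasures; the block contents below are invented for illustration.

```python
# Toy single-parity erasure code: k data blocks plus one XOR parity block.
# Any single erased block (data or parity) can be rebuilt from the survivors.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Append one parity block = XOR of all data blocks."""
    parity = blocks[0]
    for b in blocks[1:]:
        parity = xor_bytes(parity, b)
    return blocks + [parity]

def recover(stripe, lost_index):
    """Rebuild the single erased block by XOR-ing the surviving ones."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    rebuilt = survivors[0]
    for b in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, b)
    return rebuilt

data = [b"abcd", b"efgh", b"ijkl"]
stripe = encode(data)                      # 3 data blocks + 1 parity block
assert recover(stripe, 1) == b"efgh"       # block 1 erased and rebuilt
```

A replication scheme with the same single-failure tolerance would need a full extra copy of every block; the parity approach stores only one extra block per stripe, which is the storage-overhead advantage the abstract refers to.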
D. K. N. Wood and D. R. E. Harang, “Grammatical Inference and Language Frameworks for LANGSEC,” Security and Privacy Workshops (SPW), 2015 IEEE, San Jose, CA, 2015, pp. 88-98. doi: 10.1109/SPW.2015.17
Abstract: Formal Language Theory for Security (LANGSEC) has proposed that formal language theory and grammars be used to define and secure protocols and parsers. The assumption is that by restricting languages to lower levels of the Chomsky hierarchy, it is easier to control and verify parser code. In this paper, we investigate an alternative approach to inferring grammars via pattern languages and elementary formal system frameworks. We summarize inferability results for subclasses of both frameworks and discuss how they map to the Chomsky hierarchy. Finally, we present initial results of pattern language learning on logged HTTP sessions and suggest future areas of research.
Keywords: formal languages; grammars; hypermedia; inference mechanisms; security of data; Chomsky hierarchy; HTTP sessions; LANGSEC; formal language theory for security; grammatical inference; language frameworks; parsers; secure protocols; Finite element analysis; Formal languages; Grammar; Polynomials; Protocols; Security; Standards; elementary formal system (EFS); language identification; pattern language (ID#: 16-10957)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163212&isnumber=7163193
B. Zhu, A. Jiang, X. Bai and D. Bai, “A Method Research of Image Encryption Based on Chaotic and Secret Sharing,” Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, Shenyang, 2015, pp. 1823-1828. doi: 10.1109/CYBER.2015.7288224
Abstract: Digital images can easily leak information during transmission; to protect the security of image information, this paper proposes an image encryption method that combines image sharing technology and chaos theory. First, the secret image is divided into blocks, the blocks are coded, and they are restructured by a pseudo-random sequence generated by the Logistic chaotic map; then the restructured image is processed by the image sharing technology; finally, the shared images are embedded into the cover image for network transmission. The experimental results show that the method performs well for digital image encryption.
Keywords: block codes; chaos; cryptography; image coding; random sequences; block coding; chaos theory; image encryption method; image restructuring; image sharing technology; logistic chaotic map; pseudo random sequence; secret sharing; Chaos; Encryption; Image restoration; Indexes; Logistics; Logistic map; chaos encryption; image sharing (ID#: 16-10958)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288224&isnumber=7287893
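The Logistic-map-driven block restructuring the abstract describes can be sketched as follows. This is not the authors' code, just a minimal illustration in which the map parameters (x0, r) act as the secret key.

```python
# Logistic-map keystream sketch: x_{n+1} = r * x_n * (1 - x_n).
# The chaotic orbit is used to derive a key-dependent permutation of
# image-block indices, as in the restructuring step of the abstract.

def logistic_sequence(x0, r, n, burn_in=100):
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def block_permutation(x0, r, n_blocks):
    seq = logistic_sequence(x0, r, n_blocks)
    # sort block indices by the chaotic values -> a pseudo-random permutation
    return sorted(range(n_blocks), key=lambda i: seq[i])

perm = block_permutation(0.3141592, 3.9999, 8)
print(perm)   # a permutation of 0..7 determined entirely by (x0, r)
```

Reordering the blocks by `perm` (and inverting the permutation at the receiver) is the restructuring idea; the paper then layers secret sharing and cover-image embedding on top.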
F. C. Colon Osorio, H. Qiu and A. Arrott, “Segmented Sandboxing - A Novel Approach to Malware Polymorphism Detection,” 2015 10th International Conference on Malicious and Unwanted Software (MALWARE), Fajardo, 2015, pp. 59-68. doi: 10.1109/MALWARE.2015.7413685
Abstract: Malware polymorphic and metamorphic obfuscation techniques combined with so-called “sandboxing evasion techniques” continue to erode the effectiveness of both static detection (signature matching) and dynamic detection (sandboxing). Specifically, signature based techniques are overwhelmed by the sheer number of samples generated from a single seminal binary through the use of polymorphic variations (encryption, ISP obfuscation together with ISP emulators, semantically neutral transformations, and so forth). Anti-virus security vendors often report more than 100,000 new Malware signatures a day. In most cases, the preponderance of these variations can be attributed to just a handful of seminal Malware families. In 2011, FireEye reported that over 50% of observed successful Malware infections were attributable to just 13 Malware families (seminals). Similarly, sandboxing, also known as dynamic Malware detection, has suffered from its own set of limitations. Mainly, (1) Malware writers embed in their code the ability to discover virtualized environments by checking for live internet access, or certain system properties inherent to virtualized environments, (2) wait and seek (aka dormant Malware), a technique where, knowing the execution time limitations of sandboxes, the Malware just waits, and (3) evasion techniques based on diverse communication. While the benefits of either dynamic or static approaches for Malware detection look quite tempting from each of their counterpart's perspectives, their weaknesses are daunting in their own right as well. In this manuscript we attempt to combine the best parts of both approaches, while minimizing the disadvantages of either of them. We call this mixed approach “static Malware detection with segmented sandboxing”. It was first developed by modeling the problem with classical automata theory, which leads from a formal problem formulation to a practical solution implementation.
Preliminary results have shown that this approach is extremely effective in at least two significant ways. First, it sequentially minimizes both false negatives (misses) and false positives (FPs) enabling response resources to be focused on a more complete set of attacks with far less distraction from false alarms. Second, it overcomes many of the known limitations of sandboxing technology.
Keywords: automata theory; digital signatures; invasive software; FireEye; ISP emulators; ISP obfuscation; anti-virus security vendors; diverse communication; dormant malware; dynamic detection; dynamic malware detection; encryption; malware infections; malware polymorphism detection; malware signatures; metamorphic obfuscation techniques; polymorphic variations; sandboxing evasion techniques; segmented sandboxing; semantically neutral transformations; seminal binary; signature based techniques; signature matching; static detection; static malware detection; virtualized environments; Automata; Engines; Malware; Manuals; Semantics; Software (ID#: 16-10959)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7413685&isnumber=7413673
D. Fangquan, D. Chaoqun, Z. Yao and L. Teng, “Binary-Oriented Hybrid Fuzz Testing,” Software Engineering and Service Science (ICSESS), 2015 6th IEEE International Conference on, Beijing, 2015, pp. 345-348. doi: 10.1109/ICSESS.2015.7339071
Abstract: In software security testing, fuzz testing and symbolic execution are two main testing techniques. Fuzz testing finds program bugs by executing the target program with random inputs it generates while monitoring the execution for abnormal behaviors. Though fuzz testing is able to explore deep into a program's state space efficiently, it usually cannot guarantee the code coverage ratio in many situations. Symbolic execution treats a program's input as symbols and executes the program with such symbolic input. Symbolic execution tries to explore the whole state space of a program by analyzing all of the program's paths. Symbolic execution always guarantees the code coverage ratio in theory, but for real programs it has many problems such as path explosion. Our new method, hybrid fuzz testing, combines the benefits of fuzz testing and symbolic execution. Symbolic execution starts to work and generates new program input when fuzz testing can no longer increase the code coverage ratio. The experimental results imply that our hybrid fuzz testing method can markedly and efficiently increase the code coverage ratio with reasonable resource usage.
Keywords: fuzzy set theory; program testing; system monitoring; binary-oriented hybrid fuzz testing techniques; code coverage ratio; program state space; software security testing; symbolic execution; Assembly; Computer bugs; Computers; Instruments; Reactive power; Registers; Testing; case generation; code coverage; fuzz testing; symbolic execution (ID#: 16-10960)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7339071&isnumber=7338993
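A minimal sketch of the hybrid idea, assuming a toy target and a stubbed symbolic-execution step; the real method works on binaries with a genuine constraint solver, and all names and coverage points below are invented.

```python
import random

# Coverage-guided fuzzing loop in the spirit of the hybrid scheme: random
# mutation explores cheaply, and when coverage plateaus, a (stubbed)
# symbolic-execution step contributes an input satisfying a hard branch.

def target(data):
    cov = {"entry"}
    if data.startswith(b"FUZZ"):          # a branch random bytes rarely hit
        cov.add("magic")
        if len(data) > 8:
            cov.add("long")
    return cov

def mutate(data):
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

random.seed(0)
corpus, coverage, stalled = [b"seed"], set(), 0

def consider(candidate):
    global stalled
    new = target(candidate) - coverage
    if new:
        coverage.update(new)
        corpus.append(candidate)
        stalled = 0
    else:
        stalled += 1

for _ in range(1500):
    consider(mutate(random.choice(corpus)))
    if stalled > 300:
        # Plateau: a concolic executor would now solve the path constraint
        # data.startswith(b"FUZZ"); here we simply stub its answer.
        consider(b"FUZZ" + bytes(8))

print(sorted(coverage))
```

The stall counter is the "fuzzing cannot increase coverage" trigger from the abstract; only when it fires does the (expensive) symbolic step run, which is the source of the resource savings the paper reports.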
J. Li, X. Xu, L. Liao and L. Li, “Concolic Execute Fuzzing Based on Control-Flow Analysis,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 385-389. doi: 10.1109/CIS.2015.99
Abstract: This paper proposes a method that utilizes taint analysis to reduce unnecessary analysis routines, concentrating on control-flow-altering input using a concolic (concrete and symbolic) execution procedure. A prototype, Concolic Fuzz, is implemented based on this method; it is built on the Pin platform at the x86 binary level and uses Z3 as the SMT (Satisfiability Modulo Theories) solver. The results of experiments verify that our approach is effective in increasing code coverage with remarkably lower resource and time cost than standard fuzzing and concolic testing tools. The scale of the fuzzing range and symbols is reduced, as are computing resource and time consumption, especially when the input data is in a highly structured and complex file format.
Keywords: computability; Concolic Fuzz; SMT solver; Satisfiability Modulo Theories; code coverage; complex file format; computing resource; concolic execution procedure; concolic testing tools; control flow analysis; fuzzing range; lower resource; standard fuzzing; taint analysis; time consumption; Concrete; Instruments; Performance analysis; Registers; Security; Software; Testing; concolic execution; controlflow; dynamic taint analysis; fuzzing test (ID#: 16-10961)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7397113&isnumber=7396229
B. Fang, Z. Qian, W. Zhong and W. Shao, “Iterative Precoding for MIMO Wiretap Channels Using Successive Convex Approximation,” Antennas and Propagation (APCAP), 2015 IEEE 4th Asia-Pacific Conference on, Kuta, 2015, pp. 65-66. doi: 10.1109/APCAP.2015.7374273
Abstract: In this paper, we study the precoding problem for physical layer security in a general multiple-input multiple-output (MIMO) wiretap channel. Since the resultant secrecy capacity maximization (SCM) problem is nonconvex in general, we solve it by employing a successive convex approximation method, where the nonconvex part of the formulated problem is approximated by its first-order Taylor expansion. Thus, the SCM problem can be iteratively solved through convex programming of its convexified version. Finally, an iterative precoding algorithm with provable convergence is presented. Numerical simulations are also provided to verify the proposed algorithm.
Keywords: MIMO communication; approximation theory; channel coding; concave programming; convex programming; iterative methods; precoding; telecommunication security; MIMO wiretap channels; SCM problem; first-order Taylor expansion; iterative precoding algorithm; multiple-input multiple-output wiretap channel; numerical simulations; physical layer security; provable convergence; secrecy capacity maximization problem; successive convex approximation method; MIMO wiretap channel; convex optimization; successive convex approximation (ID#: 16-10962)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374273&isnumber=7374246
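The approximation step the abstract refers to is the standard first-order Taylor linearization used in successive convex approximation; written out (standard form, not restated from the paper):

```latex
% At iterate x^{(k)}, the nonconvex part f of the SCM objective is replaced
% by its first-order Taylor expansion, yielding a convex subproblem:
\[
  f(x) \;\approx\; f\bigl(x^{(k)}\bigr)
      + \nabla f\bigl(x^{(k)}\bigr)^{\mathsf{T}}\bigl(x - x^{(k)}\bigr),
\]
% the solution of the convexified problem becomes x^{(k+1)}, and iterating
% gives the provably convergent precoder updates described in the abstract.
```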
W. Kang and N. Liu, “A Permutation-Based Code for the Wiretap Channel,” Information Theory (ISIT), 2015 IEEE International Symposium on, Hong Kong, 2015, pp. 2306-2310. doi: 10.1109/ISIT.2015.7282867
Abstract: In this paper, we propose a permutation-based code for the wiretap channel. We begin with an arbitrary channel code from Alice to Bob and then perform a series of permutations to enlarge the code to achieve secrecy to Eve. We show that the proposed code achieves the same performance as the traditional random code, in the sense that it achieves the random coding bound for the probability of decoding error at Bob and an exponentially vanishing information leakage at Eve. Thus, the permutation-based code we propose offers an alternative method of code construction for the wiretap channel.
Keywords: channel coding; decoding; error statistics; random codes; telecommunication security; arbitrary channel code; decoding error probability; information leakage; permutation-based code; random coding; wiretap channel; Ciphers; Decoding; Electronic mail; Encoding; Iterative decoding; Tin; Zinc (ID#: 16-10963)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282867&isnumber=7282397
H. C. Huang, C. C. Lin and Y. H. Chen, “Fidelity Enhancement of Reversible Data Hiding for Images with Prediction-Based Concepts,” 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Adelaide, SA, 2015, pp. 13-16. doi: 10.1109/IIH-MSP.2015.60
Abstract: Reversible data hiding has attracted more and more attention in recent years. Reversible data hiding requires embedding secret data into the original image with a devised algorithm at the encoder; the marked image can then be delivered to the decoder. At the decoder, both the secret data and the original image should be perfectly separated from the marked image to preserve reversibility. There are several practical ways to make reversible data hiding possible, and one of the latest is the prediction-based method. By carefully manipulating differences between predicted and original images, reversible data hiding can be achieved. We propose an enhanced method for manipulating the difference histogram, and we observe better performance than existing schemes in the literature. Possible ways to enhance embedding capacity are also pointed out for future extension of our method.
Keywords: data encapsulation; image coding; prediction theory; security of data; decoder; difference histogram; encoder; fidelity enhancement; marked image; prediction-based method; reversibility; reversible data hiding; secret data; Copyright protection; Decoding; Histograms; Image quality; Multimedia communication; Receivers; Watermarking; capacity; prediction; quality (ID#: 16-10964)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7415746&isnumber=7415733
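Prediction-based reversible hiding can be illustrated on a 1-D signal with a left-neighbor predictor and histogram shifting of the prediction errors. This is a generic sketch of the family of methods, not the paper's enhanced scheme; the signal and payload values are invented.

```python
# Left-neighbor prediction; a prediction error of 0 carries one secret bit,
# errors >= 1 are shifted up by 1 to make room, and both steps are exactly
# invertible, so decoder recovers the original signal and the payload.

def embed(signal, bits):
    bits = list(bits)
    marked = [signal[0]]
    for i in range(1, len(signal)):
        e = signal[i] - signal[i - 1]
        if e >= 1:
            e += 1                      # shift to free the value 1
        elif e == 0 and bits:
            e = bits.pop(0)             # a 0-bit stays 0, a 1-bit becomes 1
        marked.append(signal[i - 1] + e)
    return marked

def extract(marked, n_bits):
    signal, bits = [marked[0]], []
    for i in range(1, len(marked)):
        e = marked[i] - signal[i - 1]
        if e >= 2:
            e -= 1                      # undo the shift
        elif e in (0, 1) and len(bits) < n_bits:
            bits.append(e)              # recover the hidden bit
            e = 0
        signal.append(signal[i - 1] + e)
    return signal, bits

original = [100, 100, 100, 101, 99, 99]
marked = embed(original, [1, 0, 1])
recovered, payload = extract(marked, 3)
assert recovered == original and payload == [1, 0, 1]
```

The marked samples differ from the originals by at most 1, which is why such schemes keep image fidelity high; the paper's contribution is a better way to shape this difference histogram.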
T. C. Gulcu and A. Barg, “Achieving Secrecy Capacity of the Wiretap Channel and Broadcast Channel with a Confidential Component,” Information Theory Workshop (ITW), 2015 IEEE, Jerusalem, 2015, pp. 1-5. doi: 10.1109/ITW.2015.7133098
Abstract: We show that capacity of the general (not necessarily degraded or symmetric) wiretap channel under a “strong secrecy constraint” can be achieved using an explicit scheme based on polar codes. We also extend our construction to the case of broadcast channels with confidential messages defined by Csiszár and Körner, achieving the entire capacity region of this communication model. This submission is an extended abstract of the paper by the same authors (see arXiv:1410.3422).
Keywords: broadcast channels; codes; telecommunication security; wireless channels; broadcast channel; communication model; polar codes; secrecy capacity; secrecy constraint; wiretap channel; Channel coding; Decoding; Receivers; Reliability; Security; Transmitters (ID#: 16-10965)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133098&isnumber=7133075
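For context, the Csiszár–Körner secrecy capacity that such polar-code constructions aim to achieve is the classical expression (standard result, not restated from this paper):

```latex
% Broadcast channel with confidential messages (Csiszar-Korner):
\[
  C_s \;=\; \max_{V \to X \to (Y,Z)} \bigl[\, I(V;Y) - I(V;Z) \,\bigr],
\]
% where Y is the legitimate receiver's channel output, Z the eavesdropper's,
% and the maximum is over auxiliary variables V forming the indicated
% Markov chain.
```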
S. Renatus, C. Bartelheimer and J. Eichler, “Improving Prioritization of Software Weaknesses Using Security Models with AVUS,” Source Code Analysis and Manipulation (SCAM), 2015 IEEE 15th International Working Conference on, Bremen, 2015, pp. 259-264. doi: 10.1109/SCAM.2015.7335423
Abstract: Testing tools for application security have become an integral part of secure development life-cycles. Despite their ability to spot important software weaknesses, the high number of findings requires rigorous prioritization. Most testing tools provide generic ratings to support prioritization. Unfortunately, ratings from established tools lack context information, especially with regard to the security requirements of the respective components or source code. Thus experts often spend a great deal of time re-assessing the prioritization provided by these tools. This paper introduces our lightweight tool AVUS, which adjusts context-free ratings of software weaknesses according to a user-defined security model. We also present a first evaluation applying AVUS to a well-known open source project and the findings of a popular, commercially available application security testing tool.
Keywords: program testing; public domain software; safety-critical software; source code (software); AVUS; application security testing tool; context-free rating; open source project; security model; software testing tool; software weakness; source code; user-defined security model; Complexity theory; Context; Kernel; Measurement; Security; Testing; Secure software development; contextual enrichment; security metrics; vulnerability scoring (ID#: 16-10966)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335423&isnumber=7335391
S. Y. Kim, S. Gu, H.-h. Jeong and K.-A. Sohn, “A Network Clustering Based Software Attribute Selection for Identifying Fault-Prone Modules,” IT Convergence and Security (ICITCS), 2015 5th International Conference on, Kuala Lumpur, 2015, pp. 1-5. doi: 10.1109/ICITCS.2015.7292921
Abstract: Software defects can damage the reliability and quality of software. Static code software metrics have been widely used and play an important role in software defect prediction. Instead of using all features, it is quite necessary to remove redundant features and select meaningful ones to improve prediction performance. This study focuses on effective attribute selection techniques for software fault classification. We propose a software attribute network that indicates the mutual information between features, together with clustering-based attribute selection techniques. The results demonstrate that the proposed network-clustering-based feature selection performs best on fault-prone module prediction. Comparative feature selection techniques are examined to evaluate the result. Furthermore, the best-performing software attributes and the relations between them are shown and carefully analyzed.
Keywords: pattern clustering; software fault tolerance; software metrics; software quality; comparative feature selection techniques; fault-prone module identification; mutual information; network clustering; software attribute selection technique; software defect prediction; software quality; software reliability; static code software metrics; Clustering algorithms; Complexity theory; Feature extraction; Measurement; Microwave integrated circuits; Software; Support vector machines (ID#: 16-10967)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292921&isnumber=7292885
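The feature-to-feature mutual information underlying the proposed attribute network can be computed from empirical joint counts. A minimal sketch with invented, discretized metric values (the paper's metrics and binning are its own):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical MI (in bits) between two discrete feature columns."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

loc    = [1, 1, 2, 2, 3, 3]   # e.g. a binned lines-of-code metric (invented)
faulty = [0, 0, 0, 1, 1, 1]   # fault label per module (invented)
print(mutual_information(loc, faulty))
```

Computing this MI for every pair of software attributes yields the weighted network the abstract describes; clustering that network then groups redundant attributes so one representative per cluster can be kept.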
M. Trapp, M. Rossberg and G. Schaefer, “Program Partitioning Based on Static Call Graph Analysis for Privilege Separation,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 613-618. doi: 10.1109/ISCC.2015.7405582
Abstract: The major cause of IT security incidents is software issues; hence this article presents an automated approach to source code partitioning and privilege separation. Based on static call graph analysis, functions and program parts of a monolithic software system are separated into several processes and grouped by the privilege they need. For the partitioning we introduce a metric that estimates the potential security gain by considering the complexity and privilege distribution of the separated software. Furthermore, we present a partitioning heuristic that uses this metric to create a secure software partitioning.
Keywords: graph theory; program diagnostics; security of data; software metrics; IT security incident; monolithic software; partitioning heuristic; privilege separation; program partitioning; secure software partitioning; source code partitioning; static call graph analysis; Computers; Context; Measurement; Permission; Process control; Software (ID#: 16-10968)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405582&isnumber=7405441
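The grouping step can be sketched as follows, with an invented call graph and privilege labels; the paper's metric also weighs code complexity, which is omitted here.

```python
# Toy privilege-based partitioning over a static call graph: functions are
# grouped by required privilege, and a simple cost counts cross-partition
# calls (each of which would become an inter-process boundary).

call_graph = {
    "main":        ["parse_args", "open_socket", "read_config"],
    "parse_args":  [],
    "open_socket": ["bind_port"],
    "bind_port":   [],
    "read_config": ["read_file"],
    "read_file":   [],
}
privilege = {                      # invented privilege labels per function
    "main": "user", "parse_args": "user",
    "open_socket": "net", "bind_port": "net",
    "read_config": "fs", "read_file": "fs",
}

def partitions(priv):
    """Group functions into one partition per privilege."""
    parts = {}
    for fn, p in priv.items():
        parts.setdefault(p, set()).add(fn)
    return parts

def cross_partition_calls(graph, priv):
    """Count call edges that cross a partition boundary."""
    return sum(1 for caller, callees in graph.items()
               for callee in callees if priv[caller] != priv[callee])

print(partitions(privilege))
print(cross_partition_calls(call_graph, privilege))
```

A partitioning heuristic in the spirit of the paper would search over privilege assignments, trading a low cross-partition call count against keeping highly privileged partitions small.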
H. Mercier, M. Augier and A. K. Lenstra, “STEP-Archival: Storage Integrity and Anti-Tampering Using Data Entanglement,” Information Theory (ISIT), 2015 IEEE International Symposium on, Hong Kong, 2015, pp. 1590-1594. doi: 10.1109/ISIT.2015.7282724
Abstract: We present STEP-archives, a model for censorship-resistant storage systems where an attacker cannot censor or tamper with data without causing a large amount of obvious collateral damage. MDS erasure codes are used to entangle unrelated data blocks, in addition to providing redundancy against storage failures. We show a tradeoff for the attacker between attack complexity, irrecoverability, and collateral damage. We also show that the system can efficiently recover from attacks with imperfect irrecoverability, making the problem asymmetric between attackers and defenders. Finally, we present sample heuristic attack algorithms that are efficient and irrecoverable (but not collateral-damage-optimal), and demonstrate how some strategies and parameter choices allow to resist these sample attacks.
Keywords: data integrity; data protection; information retrieval; redundancy; STEP-archival; antitampering; censorship-resistant storage system; data entanglement; heuristic attack algorithm; imperfect irrecoverability; storage integrity; Censorship; Complexity theory; Decoding; Grippers; Memory; Resistance; Security; Distributed storage; MDS codes; anti-tampering; data entanglement (ID#: 16-10969)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282724&isnumber=7282397
P. Jilna, P. P. Deepthi, S. M. Sameer, P. S. Sathidevi and A. P. Vijitha, “FPGA Implementation of an Elliptic Curve Based Integrated System for Encryption and Authentication,” Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, Kozhikode, 2015, pp. 1-6. doi: 10.1109/SPICES.2015.7091513
Abstract: The resource constrained applications in present-day communication networks demand the use of new cryptographic protocols and hardware with reduced computational and structural complexity. The use of standard, standalone cryptographic primitives is not suitable for such applications. This paper proposes the implementation of a new integrated system for both encryption and authentication based on elliptic curves. An algorithm for pseudo random sequence generation based on the cryptographic one-way function of elliptic curve point multiplication is developed. This is combined with an elliptic curve based message authentication code to form the integrated system. EC point multiplication is preferred as the cryptographic one-way function for use in this system due to its high security per bit of the key. The hardware is implemented on a Virtex 5 FPGA using Xilinx ISE. In the proposed hardware implementation a single point multiplication unit is time-shared between the operations of pseudo random sequence generation and authentication to reduce the overall hardware complexity. A comparison of the resource requirements of the proposed implementation with existing standalone methods is also done.
Keywords: computational complexity; field programmable gate arrays; public key cryptography; random sequences; EC point multiplication operation; FPGA implementation; Virtex 5 FPGA; Xilinx ISE; communication networks; cryptographic one-way function; cryptographic protocols; elliptic curve point multiplication; elliptic curve-based integrated system; elliptic curve-based message authentication code; encryption; hardware complexity reduction; hardware implementation; integrated system; pseudorandom sequence generation; reduced computational complexity; resource-constrained application; single-point multiplication unit; structural complexity; Authentication; Complexity theory; Elliptic curves; Encryption; Hardware; Random sequences; Elliptic Curve Cryptography; MAC; Pseudo random sequence (ID#: 16-10970)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091513&isnumber=7091354
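The EC point-multiplication one-way function at the core of such generators can be illustrated on a tiny curve. The curve, base point, and state-update rule below are toy choices for readability, not the paper's parameters; real designs use standardized curves over large fields.

```python
# Toy curve y^2 = x^3 + 2x + 3 (mod 97) with base point G = (3, 6)
# (6^2 = 36 = 3^3 + 2*3 + 3 mod 97). Double-and-add scalar multiplication
# is the one-way function; one output bit is taken per multiplication.

P, A, B = 97, 2, 3
G = (3, 6)

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                              # point at infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication (the one-way function)."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return result

def prng_bits(seed, n):
    """Emit the LSB of the x-coordinate of k*G per step (toy state update)."""
    bits, k = [], seed
    for _ in range(n):
        pt = ec_mul(k, G)
        bits.append(pt[0] & 1 if pt else 0)
        k += 1
    return bits

print(prng_bits(7, 8))
```

Recovering `k` from `k*G` is the elliptic curve discrete logarithm problem, which is what gives such sequences their "high security per bit of the key"; the time-sharing idea in the abstract reuses this one multiplication unit for both keystream generation and the MAC.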
M. Benssalah, M. Djeddou and K. Drouiche, “Pseudo-Random Sequence Generator Based on Random Selection of an Elliptic Curve,” Computer, Information and Telecommunication Systems (CITS), 2015 International Conference on, Gijon, 2015, pp. 1-5. doi: 10.1109/CITS.2015.7297719
Abstract: Pseudo-random number generators (PRNG) are one of the main security tools in Radio Frequency IDentification (RFID) technology. Thus, a weak internal embedded generator can directly cause the entire application to be insecure, and it makes no sense to employ robust protocols for the security issue. In this paper, we propose a new PRNG constructed by randomly selecting points from two elliptic curves, suitable for ECC based applications. The main contribution of this work is increasing the generator's internal states by extending the set of its output realizations to two randomly selected curves. The main advantages of this PRNG in comparison to previous works are its large periodicity, a better distribution of the generated sequences, and a high security level based on the elliptic curve discrete logarithm problem (ECDLP). Further, the proposed PRNG has passed the NIST Special Publication 800-22 statistical test suite. Moreover, the proposed PRNG presents a scalable architecture in terms of security level and periodicity at the expense of increased computation complexity. Thus, it can be adapted for ECC based cryptosystems such as RFID tags and sensor networks, and for other applications like computer physics simulations and control coding.
Keywords: computational complexity; cryptographic protocols; public key cryptography; radiofrequency identification; random number generation; statistical analysis; ECC based cryptosystem; ECDLP; PRNG; RFID technology; computation complexity; elliptic curve discrete logarithm problem; embedded generator; pseudo-random sequence generator; radio frequency identification technology; random selection; robust protocols; security tools; sensors networks; special publication 800-22 NIST statistical test; Complexity theory; Elliptic curve cryptography; Elliptic curves; Generators; Space exploration; Cryptosystem; ECC; RFID (ID#: 16-10971)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7297719&isnumber=7297712
A. Sghaier, M. Zghid and M. Machhout, “Proposed Efficient Arithmetic Operations Architectures for Hyper Elliptic Curves Cryptosystems (HECC),” Systems, Signals & Devices (SSD), 2015 12th International Multi-Conference on, Mahdia, 2015, pp. 1-5. doi: 10.1109/SSD.2015.7348108
Abstract: Because they offer several benefits over other public-key cryptosystems such as RSA, including a comparable level of security with a smaller key size, much effort has been devoted to making Hyper Elliptic Curve Cryptosystems (HECC) more practical. For this reason, HECC can be used in embedded environments where speed, energy, power, chip and memory area are constrained. However, HEC rely on a complex mathematical background, so they are difficult to implement in hardware. They can be defined over real numbers, complex numbers and any other field, so we need arithmetic operations (addition, subtraction, multiplication and division), which have many applications in cryptography and coding theory. We note that the overall performance of HECC is mainly determined by the speed of its arithmetic operations. Most algorithms that manipulate these operations use polynomial coefficients in base 2 and are defined over finite fields, but the problem is more clearly viewed over the real field and simpler to present there. Arithmetic operations are based on the complexity of a mathematical problem, and to obtain an optimized architecture we need to optimize the arithmetic operations. In this paper we describe a high-performance, area-efficient implementation of arithmetic operations in HECC over the real field, and a new design methodology is presented. The proposed architectures for these operations are implemented on FPGA.
Keywords: digital arithmetic; embedded systems; field programmable gate arrays; polynomials; public key cryptography; FPGA; HECC; arithmetic operation architectures; coding theory; cryptography; embedded environments; finite fields; hyperelliptic curves cryptosystem; mathematical problem; polynomial coefficients; Decision support systems; Elliptic curve cryptography; Elliptic curves; Jacobian matrices; Polynomials; Yttrium; Discrete Logarithm Problem (DLP); Elliptic Curve (EC); FPGA; Hyper Elliptic Curve Cryptosystems (HECC); HyperElliptic Curve (HEC); Jacobian group; MA; Rivest, Shamir and Adelman (RSA) (ID#: 16-10972)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348108&isnumber=7348090
E. Pisek, S. Abu-Surra, R. Taori, J. Dunham and D. Rajan, “Enhanced Cryptcoding: Joint Security and Advanced Dual-Step Quasi-Cyclic LDPC Coding,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417284
Abstract: Data security has always been a major concern and a huge challenge for governments and individuals throughout the world since early times. Recent advances in technology, such as the introduction of cloud computing, make it an even bigger challenge to keep data secure. In parallel, high throughput mobile devices such as smartphones and tablets are designed to support these new technologies. The high throughput requires power-efficient designs to maintain battery life. In this paper, we propose a novel Joint Security and Advanced Low Density Parity Check (LDPC) Coding (JSALC) method. The JSALC is composed of two parts: the Joint Security and Advanced LDPC-based Encryption (JSALE) and the dual-step Secure LDPC code for Channel Coding (SLCC). The JSALE is obtained by interlacing Advanced Encryption Standard (AES)-like rounds and Quasi-Cyclic (QC)-LDPC rows into a single primitive. Both the JSALE code and the SLCC code share the same base quasi-cyclic parity check matrix (PCM), which retains the power efficiency compared to conventional systems. We show that the overall JSALC Frame-Error-Rate (FER) performance outperforms other cryptcoding methods by over 1.5 dB while maintaining the AES-128 security level. Moreover, the JSALC enables error resilience and has higher diffusion than AES-128.
Keywords: channel coding; cryptography; cyclic codes; error statistics; parity check codes; AES-128 security level; FER; JSALC method; JSALE; PCM; SLCC; advanced encryption system; battery-life; cloud computing; cryptcoding; data security; dual-step secure LDPC code for channel coding; frame-error-rate; high throughput mobile device; joint security and advanced LDPC-based encryption; joint security and advanced dual-step quasicyclic LDPC coding; low density parity check coding; power-efficient design; quasicyclic parity check matrix; smartphone; tablet; Complexity theory; Encoding; Encryption; Parity check codes; Phase change materials (ID#: 16-10973)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417284&isnumber=7416057
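The cryptcoding scheme above rests on parity-check decoding. As a minimal, hedged illustration (the matrix below is hypothetical and far smaller than any QC-LDPC code, and this is not the JSALC construction), a received word is valid exactly when its GF(2) syndrome is zero:

```python
def syndrome(H, c):
    """Syndrome H*c over GF(2); an all-zero syndrome means c passes every
    parity check and is a valid codeword."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

# A small, hypothetical parity-check matrix for a length-6 code (real QC-LDPC
# matrices are far larger and built from circulant blocks).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

codeword = [1, 1, 0, 0, 1, 1]   # satisfies all three checks
received = [0, 1, 0, 0, 1, 1]   # first bit flipped in transit
```

A decoder uses the nonzero syndrome of `received` to locate and correct the flipped bit, which is the error-resilience property the paper combines with encryption.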
A. Alzahrani and R. F. DeMara, “Hypergraph-Cover Diversity for Maximally-Resilient Reconfigurable Systems,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1086-1092. doi: 10.1109/HPCC-CSS-ICESS.2015.294
Abstract: Scaling trends of reconfigurable hardware (RH) and their design flexibility have proliferated their use in dependability-critical embedded applications. Although their reconfigurability can enable significant fault tolerance, due to the complexity and execution time of their design flow, in-field reconfigurability can be infeasible and thus limit such potential. This need is addressed by developing a graph and set theoretic approach, named hypergraph-cover diversity (HCD), as a preemptive design technique to shift the dominant costs of resiliency to design-time. In particular, union-free hypergraphs are exploited to partition the reconfigurable resources pool into highly separable subsets of resources, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Two novel scalable algorithms to construct union-free hypergraphs are proposed and described. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform demonstrates a statistically significant increase in fault tolerance and area efficiency when using the proposed work compared to commonly-used modular redundancy approaches.
Keywords: data compression; embedded systems; field programmable gate arrays; graph theory; image coding; motion estimation; reconfigurable architectures; HCD; Motion-JPEG image compression core; RH; Xilinx 7-series-based FPGA hardware platform; area efficiency; dependability-critical embedded applications; design flexibility; execution time; fault tolerance; hypergraph-cover diversity; in-field reconfigurability; maximally-resilient reconfigurable systems; preemptive design technique; reconfigurable hardware; reconfigurable resource partitioning; reconfiguration-based resilience; resiliency costs; routing phases; runtime placement; separable resource subsets; set theoretic approach; statistical analysis; synthesized application netlist; union-free hypergraphs; Circuit faults; Embedded systems; Fault tolerance; Fault tolerant systems; Field programmable gate arrays; Hardware; Runtime; Area Efficiency; Design Diversity; FPGAs; Fault Tolerance; Hypergraphs; Reconfigurable Systems; Reliability (ID#: 16-10974)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336313&isnumber=7336120
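The separability that HCD exploits can be sketched with a union-free check. Assuming the textbook definition (no hyperedge equals the union of two other distinct hyperedges; the paper's actual construction algorithms are not reproduced here):

```python
from itertools import combinations

def is_union_free(edges):
    """True if no hyperedge equals the union of two other distinct hyperedges,
    a standard notion of a union-free hypergraph."""
    sets = [frozenset(e) for e in edges]
    for a, b in combinations(range(len(sets)), 2):
        union = sets[a] | sets[b]
        for k, s in enumerate(sets):
            if k != a and k != b and s == union:
                return False
    return True

# Hypothetical resource subsets: union-free, so one implementation's footprint
# can never coincide with the combined footprint of two others.
good_pools = [{1, 2}, {2, 3}, {3, 4}]
bad_pools = [{1, 2}, {3, 4}, {1, 2, 3, 4}]   # third edge is the union of the first two
```

A partition of the resource pool whose subsets form a union-free family keeps the alternative implementations well separated, which is the property the paper leverages for design-time diversity.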
J. Li, K. Zhou and J. Ren, “Security and Efficiency Trade-Offs for Cloud Computing and Storage,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287434
Abstract: Cloud computing provides centralized data storage and online computing. For centralized data storage, to address both security and availability issues, distributed storage is an appealing option in that it can ensure file security without requiring data encryption. Instead of storing a file and its replications on multiple servers, we can break the file into components and store the components on multiple servers. In this paper, we develop minimum storage regeneration (MSR) point based distributed storage schemes that can achieve optimal storage efficiency and storage capacity. Moreover, our scheme can detect and correct malicious nodes with much higher error correction efficiency. For online computing, we investigate outsourcing of general computational problems and propose efficient Cost-Aware Secure Outsourcing (CASO) schemes for problem transformation and outsourcing so that the cloud is unable to learn any key information from the transformed problem. We also propose a verification scheme to ensure that the end-users will always receive a valid solution from the cloud. Our extensive complexity and security analysis shows that our proposed Cost-Aware Secure Outsourcing (CASO) scheme is both practical and effective.
Keywords: cloud computing; error correction; outsourcing; security of data; storage management; CASO scheme; MSR point based distributed storage scheme; availability issue; centralized data storage; cloud storage; complexity analysis; cost-aware secure outsourcing; efficiency trade-off; error correction efficiency; file security; malicious node correction; malicious node detection; minimum storage regeneration point; online computing; optimal storage efficiency; security analysis; security trade-off; storage capacity; verification scheme; Bandwidth; Complexity theory; Error correction; Error correction codes; Outsourcing; Security; Servers; Cloud computing; computation outsourcing; cost aware; distributed storage; efficiency; optimal scheme design; security (ID#: 16-10975)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287434&isnumber=7287407
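The component-based storage idea can be sketched with a single XOR parity component, a much weaker scheme than the MSR regenerating codes the paper develops (function names below are illustrative):

```python
from functools import reduce

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data, k):
    """Split data into k equal-size components plus one XOR parity component;
    the file survives the loss of any single component."""
    padded = data + b"\0" * ((-len(data)) % k)
    size = len(padded) // k
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    return chunks, reduce(xor_bytes, chunks)

def recover(chunks, parity, lost):
    """Rebuild the component at index `lost` from the survivors and the parity."""
    survivors = [c for i, c in enumerate(chunks) if i != lost]
    return reduce(xor_bytes, survivors, parity)
```

Each server holds only one component, so no single server sees the whole file; MSR codes generalize this to tolerate multiple failures while minimizing repair bandwidth.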
Cyber Physical Systems Resiliency 2015 |
The research work cited here looks at the Science of Security hard problem of Resiliency in the context of cyber physical systems. The work was presented in 2015.
K. G. Lyn, L. W. Lerner, C. J. McCarty and C. D. Patterson, “The Trustworthy Autonomic Interface Guardian Architecture for Cyber-Physical Systems,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 1803-1810. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.263
Abstract: The growing connectivity of cyber-physical systems (CPSes) has led to an increased concern over the ability of cyber-attacks to inflict physical damage. Current cyber-security measures focus on preventing attacks from penetrating control supervisory networks. These reactive techniques, however, are often plagued with vulnerabilities and zero-day exploits. Embedded processors in CPS field devices often possess little security of their own, and are easily exploited once the network is penetrated. We identify four possible outcomes of a cyber-attack on a CPS embedded processor. We then discuss five trust requirements that a device must satisfy to guarantee correct behavior through the device's lifecycle. Next, we examine the Trustworthy Autonomic Interface Guardian Architecture (TAIGA) which monitors communication between the embedded controller and physical process. This autonomic architecture provides the physical process with a last line of defense against cyber-attacks. TAIGA switches process control to a trusted backup controller if an attack causes a system specification violation. We conclude with experimental results of an implementation of TAIGA on a hazardous cargo-carrying robot.
Keywords: cyber-physical systems; trusted computing; CPS embedded processor; TAIGA; cyber-attacks; cyber-security measures; embedded controller; physical process; reactive techniques; trusted backup controller; trustworthy autonomic interface guardian architecture; Control systems; Process control; Program processors; Sensors; Trojan horses; Cyber-physical systems; autonomic control; embedded device security; resilience; trust (ID#: 16-11027)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363316&isnumber=7362962
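TAIGA's switch-to-backup behavior can be sketched as a small guardian wrapper (class and function names are illustrative, not the paper's implementation):

```python
class InterfaceGuardian:
    """Toy sketch of TAIGA-style supervision: pass the primary controller's
    commands through, but latch over to a trusted backup controller as soon
    as a command would violate the system specification."""

    def __init__(self, primary, backup, spec_ok):
        self.primary = primary
        self.backup = backup
        self.spec_ok = spec_ok        # spec_ok(state, command) -> bool
        self.compromised = False

    def command(self, state):
        if not self.compromised:
            u = self.primary(state)
            if self.spec_ok(state, u):
                return u
            self.compromised = True   # primary is no longer trusted
        return self.backup(state)
```

The latch is the "last line of defense": once a specification violation is observed, the physical process is driven only by the trusted backup, regardless of what the (possibly hijacked) primary controller requests.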
A. Astarloa, N. Moreira, U. Bidarte, M. Urbina and D. Modrono, “FPGA Based Nodes for Sub-Microsecond Synchronization of Cyber-Physical Production Systems on High Availability Ring Networks,” 2015 International Conference on ReConFigurable Computing and FPGAs (ReConFig), Mexico City, 2015, pp. 1-6. doi: 10.1109/ReConFig.2015.7393316
Abstract: Cyber-Physical Production Systems are characterized by integrating sensors, processing and communication in industrial environments such as advanced manufacturing plants or new-generation Smart Grids. In these contexts, synchronization accuracy plays a vital role because it is the basis for control operations and for the correlation among distributed sensor data sampling. In this paper the IEEE 1588 synchronization protocol over High Availability Ethernet networks is applied to new-generation Cyber-Physical Production Systems in order to achieve sub-microsecond synchronization. These CPPS can be used to build rings and to interconnect rings as well. These interconnections offer bumpless Ethernet redundancy without the need for any additional network equipment. In order to measure the resilience and accuracy of the 1588-aware high-availability network composed of these nodes, a distributed sensors implementation composed of HSR network nodes that benefit from reconfigurable technology (small FPGAs and powerful programmable SoCs) has been analyzed. As verified, even in the case of a network failure, the synchronization recovers automatically and the accuracy obtained is in the range of 1 μs, which offers a very good reference for many applications in industry.
Keywords: LAN interconnection; cyber-physical systems; distributed sensors; field programmable gate arrays; local area networks; sampling methods; synchronisation; system-on-chip; 1588-aware high-availability network; CPPS; Ethernet redundancy; FPGA based node; HSR network node; IEEE1588 synchronization protocol; advanced manufacturing plant; cyber-physical production system; distributed sensor data sampling; distributed sensors implementation; high availability ethernet network; industrial environment; integrating sensor; interconnection; network equipment; network failure; powerful programmable SoC; reconfigurable technology; ring network; smart grid; submicrosecond synchronization; IP networks; Logic gates; Peer-to-peer computing; Ports (Computers); Sensors; Switches; Synchronization; Cyber-Physical Systems; Cyber-physical Production Systems; FPGA; High-availability Seamless Redundancy (HSR); IEC 61850; IEC 62439; IEC 62439-3-5; IEEE 1588; Industrie 4.0; Precise Time Protocol; Programmable SoC; Sensors Networks; Zynq (ID#: 16-11028)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7393316&isnumber=7393279
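The sub-microsecond synchronization relies on IEEE 1588's timestamped Sync/Delay_Req exchange. The standard offset and mean-path-delay formulas (which assume a symmetric network path) can be written directly:

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """IEEE 1588 two-step exchange: t1 = master sends Sync, t2 = slave receives
    it, t3 = slave sends Delay_Req, t4 = master receives it. Assumes the
    master-to-slave and slave-to-master path delays are equal."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay
```

With hardware timestamping in the FPGA fabric, the residual error of this correction is what the authors measure to stay within about 1 μs even across HSR ring failures.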
S. Gujrati, H. Zhu and G. Singh, “Composable Algorithms for Interdependent Cyber Physical Systems,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287431
Abstract: Cyber-Physical Systems (CPS) applications are being increasingly used to provide services in domains such as health-care, transportation, and energy. Providing such services may require interactions between applications, some of which may be unpredictable. Understanding and mitigating such interactions require that CPSs be designed as open and composable systems. Composition has been studied extensively in the literature. To complement this work, this paper studies composition of cyber algorithms with user behaviors in a CPS. Traditional middleware algorithms have been designed by abstracting away the underlying system and providing users with high-level APIs to interact with the physical system. In a CPS, however, users may interact directly with the physical system and may perform actions that are part of the services provided. We find that by accounting for user interactions and including them as part of the solution, one can design algorithms that are more efficient, predictable and resilient. To accomplish this, we propose a framework to model both the physical and the cyber systems. This framework allows specification of both physical algorithms and cyber algorithms. We discuss how such specifications can be composed to design middleware that leverages user actions. We show that such composite solutions preserve invariants of the component algorithms such as those related to functional properties and fault-tolerance. Our future work involves developing a comprehensive framework that uses compositionality as a key feature to address the interdependent behavior of CPSs.
Keywords: formal specification; human computer interaction; middleware; object-oriented programming; open systems; software fault tolerance; user centred design; CPS applications; CPS interdependent behavior; component algorithm; composable algorithms; composable systems; cyber algorithm; energy domain; fault-tolerance; functional properties; health-care domain; high-level API; interdependent cyber-physical systems; middleware algorithm design; middleware design; open systems; physical system interaction; specification composition; transportation domain; user action; user behavior; user interaction; Algorithm design and analysis; Computational modeling; Middleware; Prediction algorithms; Sensors; Vehicles (ID#: 16-11029)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287431&isnumber=7287407
G. Martins, S. Bhatia, X. Koutsoukos, K. Stouffer, C. Tang and R. Candell, “Towards a Systematic Threat Modeling Approach for Cyber-Physical Systems,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287428
Abstract: Cyber-Physical Systems (CPS) are systems with seamless integration of physical, computational and networking components. These systems can potentially have an impact on the physical components, hence it is critical to safeguard them against a wide range of attacks. In this paper, it is argued that an effective approach to achieve this goal is to systematically identify the potential threats at the design phase of building such systems, commonly achieved via threat modeling. In this context, a tool to perform systematic analysis of threat modeling for CPS is proposed. A real-world wireless railway temperature monitoring system is used as a case study to validate the proposed approach. The threats identified in the system are subsequently mitigated using National Institute of Standards and Technology (NIST) standards.
Keywords: condition monitoring; object-oriented programming; railway engineering; security of data; wireless sensor networks; CPS; NIST standards; National Institute of Standards and Technology; computational component; cyber-physical systems; networking component; physical component; real-world wireless railway temperature monitoring system; systematic potential threat identification; systematic threat modeling approach; Adaptation models; Analytical models; Data models; Security; Software; Systematics; Unified modeling language; Case Study; Cyber-Physical Systems; Systematic Analysis; Threat Modeling (ID#: 16-11030)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287428&isnumber=7287407
A. Astarloa, N. Moreira, J. Lázaro, M. Urbina and A. Garcia, “1588-Aware High-Availability Cyber-Physical Production Systems,” Precision Clock Synchronization for Measurement, Control, and Communication (ISPCS), 2015 IEEE International Symposium on, Beijing, 2015, pp. 25-30. doi: 10.1109/ISPCS.2015.7324675
Abstract: In this paper an architecture for High-Availability Cyber-Physical Production Systems with sub-microsecond synchronization capabilities is presented. The proposed CPPS nodes are based on cost-affordable components. These CPPS can deal with most of the challenges set by industry for massive adoption of the distributed computing philosophy in critical systems like Smart Grids or advanced manufacturing plants. In order to measure the resilience and accuracy of the 1588-aware high-availability network composed of these nodes, a proof-of-concept experimental setup has been developed. As verified, even in the case of a network failure, the synchronization recovers automatically and the offset between the master's and slaves' PPS signals is maintained below 1 μs.
Keywords: computer aided manufacturing; computer networks; production engineering computing;1588-aware high-availability cyber-physical production systems; CPPS nodes; advanced manufacturing plants; distributed computing philosophy; smart-grids; submicrosecond synchronization; IP networks; Industries; Peer-to-peer computing; Ports (Computers); Sensors; Switches; Synchronization; Cyber-Physical Systems; Cyber-physical Production Systems; FPGA; High-availability Seamless Redundancy (HSR); IEC 61850; IEC 62439; IEC 62439-3-5; IEEE 1588; Industrie 4.0; Precise Time Protocol; Sensors Networks (ID#: 16-11031)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324675&isnumber=7324666
B. Kantarci, “Cyber-Physical Alternate Route Recommendation System for Paramedics in an Urban Area,” Wireless Communications and Networking Conference (WCNC), 2015 IEEE, New Orleans, LA, 2015, pp. 2155-2160. doi: 10.1109/WCNC.2015.7127801
Abstract: Intelligent transportation systems aim at the betterment of transportation in cooperation with Information and Communication Technologies (ICTs). In addition, cyber-physical solutions have enabled interaction between the physical and computational components of systems. This paper studies the route selection of paramedics with the assistance of a cyber-physical system which consists of vehicular communications, alternate route optimization and user interaction components. To this end, an optimal alternate routing-tree recommendation framework is proposed by adopting the minimum Steiner tree approach. Initially the mathematical model is presented and solved as a Mixed Integer Linear Programming (MILP) formulation. Then, in order to ensure a fast and efficient solution, simulated annealing-based alternate routing-tree recommendation is proposed for paramedics. Through simulations, the proposed approach is shown to be capable of guaranteeing alternate route selection for paramedics with low delay, low cost and high resilience.
Keywords: integer programming; intelligent transportation systems; linear programming; medical computing; mobile computing; recommender systems; simulated annealing; trees (mathematics); vehicle routing; vehicular ad hoc networks; ICTs; MILP formulation; alternate route optimization; cyber-physical alternate route recommendation system; information and communication technologies; intelligent transportation systems; mathematical model; minimum Steiner tree approach; mixed integer linear programming; paramedics route selection; simulated annealing-based alternate routing-tree recommendation; urban area; user interaction components; vehicular communications; Annealing; Conferences; Delays; Global Positioning System; Roads; Simulated annealing; Vehicles; Cyber-physical systems; routing; vehicular networks (ID#: 16-11032)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127801&isnumber=7127309
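The minimum Steiner tree underlying the recommendation framework can be computed exactly on toy instances by brute force over subsets of optional nodes (exponential-time, for illustration only; the paper uses a MILP formulation and simulated annealing for realistic road networks):

```python
import heapq
from itertools import combinations

def mst_weight(nodes, edges):
    """Weight of a minimum spanning tree over the subgraph induced by `nodes`
    (Prim's algorithm); returns None if that subgraph is disconnected."""
    nodes = set(nodes)
    adj = {n: [] for n in nodes}
    for u, v, w in edges:
        if u in nodes and v in nodes:
            adj[u].append((w, v))
            adj[v].append((w, u))
    start = next(iter(nodes))
    seen, total, heap = {start}, 0, list(adj[start])
    heapq.heapify(heap)
    while heap and len(seen) < len(nodes):
        w, v = heapq.heappop(heap)
        if v not in seen:
            seen.add(v)
            total += w
            for e in adj[v]:
                heapq.heappush(heap, e)
    return total if len(seen) == len(nodes) else None

def min_steiner_tree_weight(nodes, edges, terminals):
    """Exact minimum Steiner tree weight: try every subset of optional
    (non-terminal) nodes and take the cheapest spanning tree that appears."""
    optional = [n for n in nodes if n not in terminals]
    best = None
    for r in range(len(optional) + 1):
        for extra in combinations(optional, r):
            w = mst_weight(set(terminals) | set(extra), edges)
            if w is not None and (best is None or w < best):
                best = w
    return best
```

On a small example, routing three terminals through a central Steiner node S (three weight-1 edges) beats connecting them directly (two weight-3 edges), which is exactly the saving the routing-tree recommendation exploits.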
W. Chipman, C. Grimm and C. Radojicic, “Coverage of Uncertainties in Cyber-Physical Systems,” ZuE 2015; 8. GMM/ITG/GI-Symposium Reliability by Design; Proceedings of, Siegen, Germany, 2015, pp. 1-8. doi: (not provided)
Abstract: Cyber-physical systems (CPS) consist of software systems and the physical entities that the software controls. CPS have become ubiquitous; the systems can be found in diverse environments. Because of the multitude of components, failures, changes or inaccuracies are inevitable, but with the multitude of components also comes the ability to build resilience into the system. An unfortunate side-effect of this resiliency is the addition of unforeseen changes and deviations to the behavior of the system. Many of these systems control or contribute significantly to the control of critical systems. In order to achieve 'first time right' system deployment, the accuracy of the models and the validation of application fitness are at least as important as the CPS modeling and accuracy. In this paper we discuss and give an overview of methods that strive for validation of CPS with increased coverage. In particular, we focus on modeling, verification and validation of uncertainties, both known and unknown.
Keywords: (not provided) (ID#: 16-11033)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348519&isnumber=7348508
P. E. Veríssimo, “MB4CP 2015 Keynote II: Resilience of Cyber-Physical Energy Systems,” Dependable Systems and Networks Workshops (DSN-W), 2015 IEEE International Conference on, Rio de Janeiro, 2015, pp. 3-3. doi: 10.1109/DSN-W.2015.42
Abstract: Electrical utility infrastructures have become largely computerized, remotely/automatically controlled, and interconnected, amongst each other and with other types of critical infrastructures, and we are witnessing the explosion of new paradigms: distributed generation, smart grids. In this accelerated mutation of power grids to cyber-physical systems, may it be that some things are “lost in translation”? Are we using the right models to represent, design, build and analyze cyber physical energy systems? Especially when what used to be an electrical infrastructure became quite susceptible to computer-borne problems such as digital accidental faults and malicious cyber-attacks? This talk will challenge the audience with some reflections and points for discussion along these topics.
Keywords: distributed power generation; electricity supply industry; smart power grids; cyber-physical energy systems; distributed generation; electrical utility infrastructures; power grids; smart grids; Computational modeling; Conferences; Distributed power generation; Explosions; Resilience; Security; Smart grids (ID#: 16-11034)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272543&isnumber=7272533
Z. Li and R. Kang, “Strategy for Reliability Testing and Evaluation of Cyber Physical Systems,” Industrial Engineering and Engineering Management (IEEM), 2015 IEEE International Conference on, Singapore, 2015, pp. 1001-1006. doi: 10.1109/IEEM.2015.7385799
Abstract: Internal and external factors that influence the reliability of CPSs are analyzed in this paper. A strategy for reliability testing and evaluation of CPSs is put forward in consideration of these factors, including the technology framework and processes. The main work comprises the testing and evaluation of component reliability covering hardware, software, and architecture, as well as performance reliability, including service reliability, cyber security reliability, resilience and elasticity reliability, and vulnerability reliability. To give an overall view of system reliability, the four indices of performance reliability are synthesized by the multi-index method. The strategy proposed in the paper will make a great contribution to the complete, dynamic and continuous testing and evaluation of CPSs.
Keywords: cyber-physical systems; program testing; security of data; software reliability; CPS reliability testing; component reliability; cyber security reliability; elasticity reliability; multiindex method; performance reliability; service reliability; vulnerability reliability; Hardware; Reliability theory; Resilience; Software; Software reliability; Testing; Cyber Physical Systems; Evaluation; Reliability; Strategy; Testing (ID#: 16-11035)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7385799&isnumber=7385591
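The abstract does not spell out the multi-index method, so the aggregation below is only one plausible, clearly hypothetical choice: a weighted geometric mean of the four performance-reliability indices.

```python
def synthesize_reliability(indices, weights):
    """Weighted geometric mean of per-aspect reliability indices in [0, 1].
    This is an assumed aggregation for illustration, not the paper's method."""
    assert len(indices) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    result = 1.0
    for r, w in zip(indices, weights):
        result *= r ** w
    return result

# Four performance-reliability indices from the paper: service, cyber security,
# resilience & elasticity, and vulnerability (the sample values are made up).
overall = synthesize_reliability([0.99, 0.95, 0.90, 0.97], [0.25, 0.25, 0.25, 0.25])
```

A geometric mean has the convenient property that a single very poor index drags the synthesized value down sharply, which matches the intuition that one weak aspect undermines overall system reliability.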
C. Aduba and C. H. Won, “Resilient Cumulant Game Control for Cyber-Physical Systems,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287422
Abstract: In this paper, we investigate the resilient cumulant game control problem for a cyber-physical system. The cyber-physical system is modeled as a linear hybrid stochastic system with full-state feedback. We are interested in a 2-player cumulant Nash game for a linear Markovian system with a quadratic cost function where the players optimize their system performance by shaping the distribution of their cost function through cost cumulants. The controllers are optimally resilient against control feedback gain variations. We formulate and solve the coupled first and second cumulant Hamilton-Jacobi-Bellman (HJB) equations for the dynamic game. In addition, we derive the optimal players' strategy for the second cost cumulant function. The efficiency of our proposed method is demonstrated by solving a numerical example.
Keywords: Markov processes; game theory; optimisation; security of data; HJB equation; Hamilton-Jacobi-Bellman equation; Nash game; control feedback gain variation; cumulant game control resiliency; cyber-physical system; full-state feedback; linear Markovian system; linear hybrid stochastic system; quadratic cost function optimization; security vulnerability; Cost function; Cyber-physical systems; Games; Mathematical model; Nash equilibrium; Trajectory (ID#: 16-11036)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287422&isnumber=7287407
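Cost-cumulant control shapes the distribution of a quadratic cost through its cumulants. The first two can be estimated by Monte Carlo, as a sketch only; the paper obtains them analytically from coupled HJB equations:

```python
import random

def cost_cumulants(samples):
    """First two cumulants of a sampled cost distribution: kappa1 (the mean)
    and kappa2 (the variance)."""
    n = len(samples)
    k1 = sum(samples) / n
    k2 = sum((x - k1) ** 2 for x in samples) / n
    return k1, k2

# Example: a quadratic cost J = x^2 with x ~ N(0, 1), whose true cumulants
# are kappa1 = 1 and kappa2 = 2 (a chi-squared distribution with 1 dof).
rng = random.Random(0)
costs = [rng.gauss(0.0, 1.0) ** 2 for _ in range(20000)]
k1, k2 = cost_cumulants(costs)
```

A cumulant-game controller trades off these quantities: rather than minimizing the mean cost alone, the players also shape its variance (and higher cumulants) for predictability and resilience.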
A. Dayal, A. Tbaileh, Y. Deng and S. Shukla, “Distributed VSCADA: An Integrated Heterogeneous Framework for Power System Utility Security Modeling and Simulation,” Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, Seattle, WA, 2015, pp. 1-6. doi: 10.1109/MSCPES.2015.7115408
Abstract: The economic machinery of the United States is reliant on complex large-scale cyber-physical systems which include electric power grids, oil and gas systems, transportation systems, etc. Protection of these systems and their control from security threats, and improvement of the robustness and resilience of these systems, are important goals. Since all these systems have Supervisory Control and Data Acquisition (SCADA) in their control centers, a number of test beds have been developed at various laboratories. Usually on such test beds, people are trained to operate and protect these critical systems. In this paper, we describe a virtualized distributed test bed that we developed for modeling and simulating SCADA applications and for carrying out related security research. The test bed is virtualized by integrating various heterogeneous simulation components. It can be reconfigured to simulate the SCADA of a power system, a transportation system or any other critical system, provided a back-end domain-specific simulator for such systems is attached to it. In this paper, we describe how we created a scalable architecture capable of simulating larger infrastructures and integrating communication models to simulate different network protocols. We also developed a series of middleware packages that integrate various simulation platforms into our test bed using the Python scripting language. To validate the usability of the test bed, we briefly describe how a power system SCADA scenario can be modeled and simulated in our test bed.
Keywords: SCADA systems; authoring languages; control engineering computing; middleware; power system security; power system simulation; Python scripting language; back-end domain specific simulator; complex large-scale cyber-physical systems; distributed VSCADA; economic machinery; heterogeneous simulation components; integrated heterogeneous framework; middleware packages; network protocols; power system utility security modeling; power system utility security simulation platform; supervisory control and data acquisition; system protection; transportation system; virtualized distributed test bed; Databases; Load modeling; Power systems; Protocols; Servers; Software; Cyber Physical Systems; Cyber-Security; Distributed Systems; NetworkSimulation; SCADA (ID#: 16-11037)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115408&isnumber=7115373
A. Dayal, Y. Deng, A. Tbaileh and S. Shukla, “VSCADA: A Reconfigurable Virtual SCADA Test-Bed for Simulating Power Utility Control Center Operations,” Power & Energy Society General Meeting, 2015 IEEE, Denver, CO, 2015, pp. 1-5. doi: 10.1109/PESGM.2015.7285822
Abstract: Complex large-scale cyber-physical systems, such as electric power grids, oil & gas pipeline systems, transportation systems, etc., are critical infrastructures that provide essential services for the entire nation. In order to improve systems' security and resilience, researchers have developed many Supervisory Control and Data Acquisition (SCADA) test beds for testing the compatibility of devices, analyzing potential cyber threats/vulnerabilities, and training practitioners to operate and protect these critical systems. In this paper, we describe a new test bed architecture for modeling and simulating power system related research. Since the proposed test bed is purely software defined and the communication is emulated, its functionality is versatile. It is able to reconfigure virtual systems for different real control/monitoring scenarios. The unified architecture can seamlessly integrate various kinds of system-level power system simulators (real-time/non-real-time) with the infrastructure being controlled or monitored, with multiple communication protocols. We depict the design methodology in detail. To validate the usability of the test bed, we implement an IEEE 39-bus power system case study with a power flow analysis and dynamics simulation mimicking a real power utility infrastructure. We also include a cascading failure example to show how system simulators such as the Power System Simulator for Engineering (PSS/E) can seamlessly interact with the proposed virtual test bed.
Keywords: SCADA systems; critical infrastructures; electricity supply industry; power system control; power system security; power system simulation; protocols; reconfigurable architectures; IEEE 39-bus power system; SCADA; communication protocol; complex large scale cyber-physical system; critical infrastructure; potential cyber threat; power system modelling; power utility control center operation simulation; reconfigurable virtual SCADA test bed architecture; reconfigure virtual system; supervisory control and data acquisition; system level power system simulation; system resilience; system security improvement; vulnerabilities; Computer architecture; Power system dynamics; Protocols; Servers; Software; Cyber Physical Systems; Supervisory Control and Data Acquisition (SCADA) Systems; System Integration; Virtual Test bed (ID#: 16-11038)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285822&isnumber=7285590
T. Gamage, G. Zweigle, M. Venkathasubramanian, C. Hauser and D. Bakken, “Towards Grid Resilience: A Proposal for a Progressive Control Strategy,” Green Technologies Conference (GreenTech), Proceeding of the 2015 Seventh Annual IEEE, New Orleans, LA, 2015, pp. 58-65. doi: 10.1109/GREENTECH.2015.25
Abstract: This white paper describes preliminary research on the use of progressive control strategies to improve the advanced electric power grid's resilience to major grid disturbances. The proposed approach calls for leveraging real-time wide-area monitoring and control capabilities to provide globally coordinated distributed control actions under stressed conditions. To that end, the paper illustrates the proposed concept using case studies drawn from major North American blackouts, discusses design challenges, and proposes the design of a Grid Integrity Management System (GIMS) to manage the communication and computation required to meet these challenges.
Keywords: power grids; power system control; power system faults; power system measurement; power system reliability; GIMS; North American blackouts; electric power grid resilience; grid disturbance; grid integrity management system; progressive control strategy; wide area monitoring; Generators; Load modeling; Monitoring; Power system stability; Real-time systems; Stability analysis; QoS; RAS; cyber-physical systems; distributed control; model predictive control; security; smart grid (ID#: 16-11039)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150230&isnumber=7150207
C. Z. Bai, F. Pasqualetti and V. Gupta, “Security in Stochastic Control Systems: Fundamental Limitations and Performance Bounds,” American Control Conference (ACC), 2015, Chicago, IL, 2015, pp. 195-200. doi: 10.1109/ACC.2015.7170734
Abstract: This work proposes a novel metric to characterize the resilience of stochastic cyber-physical systems to attacks and faults. We consider a single-input single-output plant regulated by a control law based on the estimate of a Kalman filter. We allow for the presence of an attacker able to hijack and replace the control signal. The objective of the attacker is to maximize the estimation error of the Kalman filter - which in turn quantifies the degradation of the control performance - by tampering with the control input, while remaining undetected. We introduce a notion of ε-stealthiness to quantify the difficulty to detect an attack when an arbitrary detection algorithm is implemented by the controller. For a desired value of ε-stealthiness, we quantify the largest estimation error that an attacker can induce, and we analytically characterize an optimal attack strategy. Because our bounds are independent of the detection mechanism implemented by the controller, our information-theoretic analysis characterizes fundamental security limitations of stochastic cyber-physical systems.
Keywords: Kalman filters; stochastic systems; ε-stealthiness notion; Kalman filter estimation; arbitrary detection algorithm; control law; control performance; estimation error; optimal attack strategy; single-input single-output plant; stochastic control systems; stochastic cyber-physical systems; Cyber-physical systems; Degradation; Detectors; Estimation error; Random sequences; Upper bound (ID#: 16-11040)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170734&isnumber=7170700
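As a rough illustration of the setting this abstract studies (not the authors' actual model or bounds), the Python sketch below simulates a scalar plant whose actuated input is silently biased by an attacker; the Kalman filter, which predicts with the nominal control it believes was applied, accumulates extra estimation error. All plant and noise parameters here are assumed for illustration.

```python
import random

def simulate(attack_bias, steps=2000, seed=1):
    """Scalar plant x' = a*x + u + w, measurement y = x + v.
    An attacker silently adds `attack_bias` to the actuated input,
    while the filter predicts with the nominal u (illustrative values)."""
    random.seed(seed)
    a, q, r = 0.9, 0.1, 0.1               # dynamics and noise variances (assumed)
    x, xhat, p = 0.0, 0.0, 1.0
    err2 = 0.0
    for _ in range(steps):
        u = -a * xhat                     # nominal stabilizing control
        x = a * x + u + attack_bias + random.gauss(0, q ** 0.5)
        xhat_pred = a * xhat + u          # filter believes the nominal input
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)         # Kalman gain
        y = x + random.gauss(0, r ** 0.5) # sensor measurement
        xhat = xhat_pred + k * (y - xhat_pred)
        p = (1.0 - k) * p_pred
        err2 += (x - xhat) ** 2
    return err2 / steps                   # mean squared estimation error
```

Raising the bias raises the mean squared estimation error, mirroring the trade-off the paper quantifies between induced error and detectability.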
C. Cheh, G. A. Weaver and W. H. Sanders, “Cyber-Physical Topology Language: Definition, Operations, and Application,” Dependable Computing (PRDC), 2015 IEEE 21st Pacific Rim International Symposium on, Zhangjiajie, 2015, pp. 60-69. doi: 10.1109/PRDC.2015.20
Abstract: Maintaining the resilience of a large-scale system requires an accurate view of the system's cyber and physical state. The ability to collect, organize, and analyze state central to a system's operation is thus important in today's environment, in which the number and sophistication of security attacks are increasing. Although a variety of “sensors” (e.g., Intrusion Detection Systems, log files, and physical sensors) are available to collect system state information, it's difficult for administrators to maintain and analyze the diversity of information needed to understand a system's security state. Therefore, we have developed the Cyber-Physical Topology Language (CPTL) to represent and reason about system security. CPTL combines ideas from graph theory and formal logics, and provides a framework to capture relationships among the diverse types of sensor information. In this paper, we formally define CPTL as well as operations on CPTL models that can be used to infer a system's security state. We then illustrate the use of CPTL in both the enterprise and electrical power domains and provide experimental results that illustrate the practicality of the approach.
Keywords: cyber-physical systems; formal logic; graph theory; security of data; CPTL; cyber-physical topology language; electrical power domain; enterprise domain; formal logics; information diversity; large-scale system; security attacks; sensor information; system security; system state information collection; Data models; Databases; Graph theory; Ontologies; Security; Semantics; Sensors; Cyber-Physical Topology Language; description logics; system state; target system model (ID#: 16-11041)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371849&isnumber=7371833
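To make the idea of reasoning over a cyber-physical topology concrete, here is a minimal, hypothetical sketch; the node names, types, and the reachability query are illustrative only, not CPTL's actual syntax or semantics.

```python
# A tiny typed graph of cyber/physical assets, plus a query that infers
# which physical components are reachable from a compromised cyber host.
nodes = {"ids1": "cyber", "hmi": "cyber", "plc1": "cyber",
         "breaker1": "physical", "relay1": "physical"}
edges = {"ids1": ["hmi"], "hmi": ["plc1"], "plc1": ["breaker1", "relay1"],
         "breaker1": [], "relay1": []}

def exposed_physical(compromised):
    """Physical components reachable from a compromised node (DFS)."""
    seen, stack = set(), [compromised]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(edges.get(n, []))
    return {n for n in seen if nodes.get(n) == "physical"}
```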
N. S. V. Rao, C. Y. T. Ma, U. Shah, J. Zhuang, F. He and D. K. Y. Yau, “On Resilience of Cyber-Physical Infrastructures Using Discrete Product-Form Games,” Information Fusion (Fusion), 2015 18th International Conference on, Washington, DC, 2015, pp. 1451-1458. doi: (not provided)
Abstract: In critical infrastructures consisting of discrete cyber and physical components, the correlations between them may be exploited to launch strategic component attacks that may degrade the entire system. We capture such correlations between cyber and physical sub-infrastructures using the conditional probabilities, and between cyber and physical components using first-order differential conditions. By using a resilience measure specified by the infrastructure's survival probability, we formulate a discrete game between the provider and attacker. Their disutility functions are products of the survival (or failure) probability and cost terms expressed in terms of the number of components attacked and reinforced by the attacker and provider, respectively. The Nash Equilibrium conditions of the game provide the sensitivity functions that clearly show the dependence of the infrastructure resilience on cost terms, correlation function and sub-infrastructure survival probabilities. These results for product-form disutility functions complement the sum-form results from previous works, and more closely represent the provider's objectives for a certain class of infrastructures. We apply these results to simple models of network testbed infrastructures and cyber infrastructures of smart energy grids.
Keywords: critical infrastructures; game theory; probability; sensitivity; Nash Equilibrium conditions; conditional probabilities; critical infrastructures; cyberphysical infrastructures; discrete cyber components; discrete physical components; discrete product-form games; disutility functions; first-order differential conditions; infrastructure survival probability; network testbed infrastructures; product-form disutility functions; smart energy grids; strategic component attacks; subinfrastructure survival probabilities; Correlation; Games; Mathematical model; Nash equilibrium; Probability; Sensitivity; Smart grids (ID#: 16-11042)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266728&isnumber=7266535
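The product-form disutility structure can be illustrated with a small, assumed model (the component count, cost coefficients, and survival function below are invented for this sketch, not taken from the paper); a brute-force search then recovers the pure-strategy Nash equilibria of the discrete game:

```python
import math
from itertools import product

N = 10                        # number of components (illustrative)
CA, CD = 0.15, 0.10           # per-unit attack / reinforcement costs (assumed)

def survival(na, nd):
    """Survival probability falls with un-reinforced attacked components."""
    return math.exp(-0.5 * max(na - nd, 0))

def u_attacker(na, nd):       # attacker minimizes: survival prob x attack cost
    return survival(na, nd) * (1 + CA * na)

def u_provider(na, nd):       # provider minimizes: failure prob x defense cost
    return (1 - survival(na, nd)) * (1 + CD * nd)

def pure_nash():
    """Brute-force pure-strategy Nash equilibria of the discrete game."""
    eqs = []
    for na, nd in product(range(N + 1), repeat=2):
        best_a = all(u_attacker(na, nd) <= u_attacker(a2, nd)
                     for a2 in range(N + 1))
        best_d = all(u_provider(na, nd) <= u_provider(na, d2)
                     for d2 in range(N + 1))
        if best_a and best_d:
            eqs.append((na, nd))
    return eqs
```

In this toy instance the equilibria have the attacker standing down once enough components are reinforced, showing how the equilibrium depends on the cost terms and the survival function.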
S. S. Shah and R. F. Babiceanu, “Resilience Modeling and Analysis of Interdependent Infrastructure Systems,” Systems and Information Engineering Design Symposium (SIEDS), 2015, Charlottesville, VA, 2015, pp. 154-158. doi: 10.1109/SIEDS.2015.7116965
Abstract: The infrastructures on which our society depends are interconnected and interdependent on multiple levels. The failure of one infrastructure can result in the disruption of other infrastructures, which can lead to severe economic disruption, loss of life, or failure of services. Today, within the cyber-physical engineering and social world, all acting infrastructure systems are certain to be subjected to changes in their environment given by changing inputs, constraints, mechanisms, or interfaces. Moreover, when the infrastructures are interconnected, the changes may impact not only the output of individual infrastructures, but also the output of the resulting coordinated operations. This work addresses the concept of resilience of interdependent infrastructure systems and compares the resulting system-level resilience with the resilience of the component infrastructure systems.
Keywords: health care; network theory (graphs); reliability theory; transportation; health care delivery infrastructure; infrastructure network model; interdependent infrastructure system analysis; resilience modelling; transportation infrastructure; Analytical models; Computational modeling; Logistics; Medical services; Reliability engineering; Resilience; Transportation; Interdependent infrastructures; Network modeling; Resilience analysis (ID#: 16-11043)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116965&isnumber=7116953
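A minimal sketch of the interdependency effect this abstract describes: failures propagate through a directed dependency graph until a fixed point is reached. The infrastructures and dependencies below are illustrative, not the paper's model.

```python
def cascade(dependencies, initial_failures):
    """Propagate failures: an infrastructure fails if anything it
    depends on has failed; iterate to a fixed point."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in dependencies.items():
            if node not in failed and failed & set(deps):
                failed.add(node)
                changed = True
    return failed

# power feeds water pumping and rail signaling; rail moves fuel to power
deps = {"water": ["power"], "rail": ["power"], "power": ["fuel"],
        "fuel": ["rail"], "hospital": ["power", "water"]}
```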
X. Li, J. Wen and E. W. Bai, “Building Energy Forecasting Using System Identification Based on System Characteristics Test,” Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, Seattle, WA, 2015, pp. 1-6. doi: 10.1109/MSCPES.2015.7115401
Abstract: Buildings, consuming over 70% of the electricity in the U.S., play significant roles in smart grid infrastructure. The automatic operation of buildings and their subsystems in responding to signals from a smart grid is essential to reduce energy consumption and demand, as well as improve the resilience to power disruptions. In order to achieve such automatic operation, high-fidelity, computationally efficient building energy forecasting models under different weather and operation conditions are needed. Currently, data-driven (black box) models and hybrid (grey box) models are commonly used in model based building control operation. However, typical black box models often require a long training period and are bounded to weather and operation conditions during the training period. On the other hand, creating a grey box model often requires long calculation time due to the parameter optimization process and expert knowledge during the model structure determination and simplification process. An earlier study by the authors proposed a system identification approach to develop computationally efficient and accurate building energy forecasting models. This paper attempts to extend this earlier study and to quantitatively evaluate how the most important characteristics of a building energy system, its nonlinearity and response time, affect the system identification process and model accuracy. Two commercial buildings, a small-size and a medium-size commercial building with varying chiller nonlinearity, are simulated using EnergyPlus in lieu of real buildings for model development and validation. The system identification method proposed in the earlier study is applied to these two buildings that have varying nonlinearity and response time. Adaptation of the proposed system identification method based on systems' nonlinearity and response time is proposed in this study. The energy forecasting results demonstrate that the adaptation significantly improves the performance of the system identification model.
Keywords: building management systems; building simulation; optimisation; power consumption; power system parameter estimation; smart power grids; EnergyPlus; black box model; building control operation; building energy forecasting; buildings automatic operation; chiller nonlinearity variation; data-driven model; electricity consumption; energy consumption reduction; long training period; parameter optimization process; power disruption resilience improvement; smart grid infrastructure; system characteristics test; system identification process; Buildings; Computational modeling; Forecasting; Predictive models; System identification; Temperature measurement; Time factors; smart grids; building energy modeling; system identification; system nonlinearity; system response time (ID#: 16-11044)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115401&isnumber=7115373
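As a hedged sketch of the system-identification approach the abstract refers to (the model order, synthetic data, and parameter values here are assumed, not the paper's), a first-order ARX model can be fit to input/output data by ordinary least squares:

```python
import random

def identify_arx(y, u):
    """Fit y[k] = a*y[k-1] + b*u[k-1] by ordinary least squares
    (normal equations solved by hand for the 2x2 case)."""
    s_yy = s_uu = s_yu = s_y1y = s_uy = 0.0
    for k in range(1, len(y)):
        y1, u1, yk = y[k - 1], u[k - 1], y[k]
        s_yy += y1 * y1; s_uu += u1 * u1; s_yu += y1 * u1
        s_y1y += y1 * yk; s_uy += u1 * yk
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_uu * s_y1y - s_yu * s_uy) / det
    b = (s_yy * s_uy - s_yu * s_y1y) / det
    return a, b

# synthetic "building" data with a slow thermal pole: truth a=0.95, b=0.5
random.seed(0)
u = [random.uniform(-1, 1) for _ in range(500)]
y = [0.0]
for k in range(1, 500):
    y.append(0.95 * y[k - 1] + 0.5 * u[k - 1] + random.gauss(0, 0.01))
a_hat, b_hat = identify_arx(y, u)
```

With persistently exciting input, the least-squares estimates recover the true parameters closely; this fast one-shot fit is the appeal of system identification over long-training black-box models.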
E. Penera and D. Chasaki, “Packet Scheduling Attacks on Shipboard Networked Control Systems,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287421
Abstract: Shipboard networked control systems are based on a distributed control system architecture that provides remote and local control monitoring. To allow the network to scale, a hierarchical communication network is composed of high-speed Ethernet-based network switches. Ethernet is the prevalent medium for transferring control data, such as control signals, alarm signals, and sensor measurements, on the network. However, communication capabilities bring new security vulnerabilities and make communication links a potential target for various kinds of cyber/physical attacks. The goal of this work is to implement and demonstrate a network layer attack against networked control systems, by tampering with temporal characteristics of the network, leading to time-varying delays and packet scheduling abnormalities.
Keywords: computer network security; delay systems; local area networks; networked control systems; scheduling; ships; telecommunication control; time-varying systems; alarm signal; communication capability; communication link; control data; control signal; cyber attack; distributed control system architecture; hierarchical communication network; high speed Ethernet based network switch; network layer attack; packet scheduling abnormality; packet scheduling attack; physical attack; remote and local control monitoring; security vulnerability; sensor measurement; shipboard networked control system; temporal characteristics; time varying delay; Delays; IP networks; Network topology; Networked control systems; Security; Topology (ID#: 16-11045)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287421&isnumber=7287407
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Cyber Physical Systems and Metrics 2015
The research work cited here looks at the Science of Security hard problem of measurement in the context of cyber physical systems. The work was presented in 2015.
C. Lv, J. Zhang, P. Nuzzo, A. Sangiovanni-Vincentelli, Y. Li and Y. Yuan, “Design Optimization of the Control System for the Powertrain of an Electric Vehicle: A Cyber-Physical System Approach,” Mechatronics and Automation (ICMA), 2015 IEEE International Conference on, Beijing, 2015, pp. 814-819. doi: 10.1109/ICMA.2015.7237590
Abstract: By leveraging the interaction between the physical and the computation worlds, cyber-physical systems provide the capability of augmenting the available design space in several application domains, possibly improving the quality of the final design. In this paper, we propose a new, optimization-based methodology for the co-design of the gear ratio and the active damping controller of the powertrain system in an electric vehicle. Our goal is to explore the trade-off between vehicle acceleration performance and drivability. Using a platform-based approach, we first define the system architecture, the requirements, and quality metrics of interest. Then, we formulate the design problem for the powertrain control system as an optimization problem, and propose a procedure to derive an optimal system sizing, by relying on the simulation of the vehicle performance for a set of driving scenarios. Optimization results show that the driveline control performance can be substantially improved with respect to conventional solutions, using the proposed methodology. This further highlights the relevance and effectiveness of a cyber-physical system approach to system design across the boundary between plant architecture and control law.
Keywords: control system synthesis; damping; electric vehicles; gears; optimisation; power transmission (mechanical); active damping controller co-design; application domains; control law; control system design optimization; cyber-physical system approach; driveline control performance; electric vehicle powertrain; gear ratio co-design; optimal system sizing; optimization problem; optimization-based methodology; plant architecture; platform-based approach; powertrain control system; quality metrics; system architecture; vehicle acceleration performance; vehicle drivability; vehicle performance simulation; Acceleration; Damping; Gears; Mechanical power transmission; Optimization; Torque; Vehicles; Design optimization; cyber-physical system; electric vehicle; platform-based design; powertrain control system (ID#: 16-11046)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237590&isnumber=7237445
Y. Kwon, H. K. Kim, Y. H. Lim and J. I. Lim, “A Behavior-Based Intrusion Detection Technique for Smart Grid Infrastructure,” PowerTech, 2015 IEEE Eindhoven, Eindhoven, 2015, pp. 1-6. doi: 10.1109/PTC.2015.7232339
Abstract: A smart grid is a fully automated electricity network, which monitors and controls all the physical environments of the electricity infrastructure and is able to supply energy in an efficient and reliable way. As the importance of cyber-physical system (CPS) security is growing, various intrusion detection algorithms to protect SCADA systems and the generation sector have been suggested, whereas less consideration has been given to the distribution sector. Thus, this paper first highlights the significance of CPS security, especially availability as the most important factor in the smart grid environment. Then this paper classifies various modern intrusion detection system (IDS) techniques for securing smart grid networks. In our approach, we propose a novel behavior-based IDS for the IEC 61850 protocol using both statistical analysis of traditional network features and specification-based metrics. Finally, we present attack scenarios and detection methods applicable to IEC 61850-based digital substations in the Korean environment.
Keywords: IEC standards; SCADA systems; power engineering computing; power system security; security of data; smart power grids; statistical analysis; substation protection; CPS security; IEC 61850 protocol; Korean environment; SCADA system protection; behavior-based IDS; behavior-based intrusion detection technique; cyber physical system security; digital substation; electricity infrastructure physical environment; fully automated electricity network reliability; smart grid infrastructure; statistical analysis; Clustering algorithms; Indexes; Inductors; Measurement; Security; Cyber-physical system; IEC 61850; anomaly detection; intrusion detection; smart grid (ID#: 16-11047)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232339&isnumber=7232233
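A statistical, behavior-based check of the kind this abstract mentions can be as simple as a z-score test against a benign baseline. The network feature and numbers below are illustrative, not the paper's IEC 61850 features.

```python
import statistics

def train_baseline(samples):
    """Learn mean/stdev of a benign network feature, e.g. packets per
    second between a substation client and server (feature is assumed)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from
    the benign mean -- a common statistical IDS building block."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

benign = [98, 101, 100, 99, 102, 100, 97, 103, 101, 99]
baseline = train_baseline(benign)
```

Specification-based metrics (e.g. protocol state checks) would complement this purely statistical test, as the paper proposes.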
S. Grüner and U. Epple, “Real-Time Properties for Design of Heterogeneous Industrial Automation Systems,” Industrial Electronics Society, IECON 2015 - 41st Annual Conference of the IEEE, Yokohama, 2015, pp. 4500-4505. doi: 10.1109/IECON.2015.7392801
Abstract: Scenarios for future industrial production, e.g., cyber-physical production systems, Industrie 4.0 or cloud manufacturing rely on the integration of heterogeneous physical production equipment, automation and IT systems. Moreover, a dynamic reconfiguration of these systems shall be possible without or with minimal work effort and system downtime. These requirements entail the consideration of timing aspects throughout all involved systems, which can ideally be employed by algorithms that support the dynamic reconfiguration. For this purpose, the paper introduces a generic terminology and metrics for the design of heterogeneous real-time systems.
Keywords: cyber-physical systems; production engineering computing; production equipment; dynamic reconfiguration; heterogeneous industrial automation system design; heterogeneous physical production equipment; industrial production; Automation; Context; Delays; Real-time systems; Terminology; Time factors (ID#: 16-11048)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7392801&isnumber=7392066
M. Rodriguez, K. A. Kwiat and C. A. Kamhoua, “Modeling Fault Tolerant Architectures with Design Diversity for Secure Systems,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 1254-1263. doi: 10.1109/MILCOM.2015.7357618
Abstract: Modern critical systems are facing an increasing number of new security risks. Nowadays, the extensive use of third-party components and tools during design, and the massive outsourcing overseas of the implementation and integration of system parts, augment the chances for the introduction of malicious system alterations along the development lifecycle. In addition, the growing dominance of monocultures in the cyberspace, comprising collections of identical interconnected computer platforms, leads to systems that are subject to the same vulnerabilities and attacks. This is especially important for cyber-physical systems, which interconnect cyberspace with computing resources and physical processes. The application of concepts and principles from design diversity to the development and operation of critical systems can help palliate these emerging security challenges. This paper defines and analyzes models of fault tolerant architectures for secure systems that rely on the use of design diversity. The models are built using minimal extensions to classical architectures according to a set of defined failure classes for secure services. A number of metrics are provided to quantify fault tolerance and performance as a function of design diversity. The architectures are analyzed with respect to the design diversity, and compared based on the undetected failure probability, the number of tolerated and detected failures, and the performance delay.
Keywords: diversity reception; telecommunication security; computer platforms; cyber-physical systems; cyberspace; design diversity; fault tolerance; fault tolerant architectures; secure systems; security risks; third-party components; Circuit faults; Computer architecture; Fault tolerance; Fault tolerant systems; Nuclear magnetic resonance; Security; Software; dependability; design diversity; modeling; security (ID#: 16-11049)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357618&isnumber=7357245
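One way to see why design diversity matters for fault-tolerant architectures is a toy reliability model (the mixture form below is an assumed simplification, not the paper's model): with majority-voted triple modular redundancy, independent (diverse) replicas fail far less often than a monoculture whose replicas share every fault.

```python
def tmr_failure(p, correlation):
    """Failure probability of triple modular redundancy with majority
    voting, interpolating between fully diverse replicas (correlation=0,
    independent failures) and a monoculture (correlation=1, where one
    fault takes down all replicas at once)."""
    independent = 3 * p * p * (1 - p) + p ** 3  # >= 2 of 3 fail independently
    common_mode = p                              # monoculture: shared fault
    return (1 - correlation) * independent + correlation * common_mode
```

For p = 0.01, diverse TMR fails with probability about 3e-4, while a monoculture gains nothing from replication; intermediate correlations land in between.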
O. Arouk, A. Ksentini and T. Taleb, “Performance Analysis of RACH Procedure with Beta Traffic-Activated Machine-Type-Communication,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417095
Abstract: Machine-Type-Communication (MTC) is a key enabler for a variety of novel smart systems, such as smart grid, eHealth, Intelligent Transport System (ITS), and smart city, opening the area of the cyber physical systems. These systems may require the use of a huge number of MTC devices, which will put a great pressure on the whole network, i.e. Radio Access Network (RAN) and Core Network (CN) parts, resulting in the shape of congestion and system overload. Aiming at better evaluating the network performance under the existence of MTC traffic and also the effectiveness of the congestion control methods, the 3rd Generation Partnership Project (3GPP) group has proposed two traffic models: Uniform Distribution (over 60 s) and Beta Distribution (over 10 s). In this paper, a recursive operation-based analytical model, namely General Recursive Estimation (GRE), for modeling the performance of RACH procedure in the existence of MTC with Beta traffic is proposed. In order to show the effectiveness of our analytical model GRE, many metrics have been considered, such as the total number of MTC devices in each Random Access (RA) slot, the number of success MTC devices in each RA slot, and the Cumulative Distribution Function (CDF) of preamble transmission. Numerical results demonstrate the accuracy of GRE. Moreover, our model GRE could be used to model the performance of RACH procedure with any type of traffic.
Keywords: 3G mobile communication; recursive estimation; statistical distributions; telecommunication congestion control; telecommunication traffic; wireless channels; 3GPP; 3rd Generation Partnership Project; CDF; GRE; ITS; MTC devices; RA slot; RACH procedure; RAN; beta distribution traffic model; beta traffic-activated machine-type-communication; congestion control methods; core network; cumulative distribution function; cyber physical systems; eHealth; general recursive estimation; intelligent transport system; performance analysis; radio access network; random access channel; random access slot; recursive operation-based analytical model; smart city; smart grid; smart systems; system overload; uniform distribution traffic model; Analytical models; Long Term Evolution; Performance evaluation; Radio access networks; Recursive estimation; Synchronization; Uplink (ID#: 16-11050)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417095&isnumber=7416057
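The flavor of the recursive estimation can be sketched as a simplified slot-by-slot mean-value recursion (the parameters below are assumed and the paper's GRE model is more detailed): new arrivals follow the 3GPP Beta(3,4) activation profile over 10 s, each contender picks one of 54 preambles, and only preambles chosen by exactly one device succeed; collided devices retry in the next slot.

```python
def beta34_pdf(t, T=10.0):
    """3GPP traffic model 2: Beta(3,4) activation density over T seconds.
    Beta(3,4) density is x^2 (1-x)^3 / B(3,4) with B(3,4) = 1/60."""
    x = t / T
    return 60 * x ** 2 * (1 - x) ** 3 / T

def rach_success(n_devices=1000, n_preambles=54, slot=0.005, T=10.0):
    """Expected fraction of devices whose preamble eventually succeeds,
    via a slot-by-slot recursion on the expected contender count."""
    backlog, done = 0.0, 0.0
    steps = int(T / slot)
    for i in range(steps):
        new = n_devices * beta34_pdf((i + 0.5) * slot, T) * slot
        m = backlog + new                  # expected contenders this slot
        # expected number of singleton (successful) preambles
        succ = m * (1 - 1 / n_preambles) ** (m - 1) if m > 0 else 0.0
        succ = min(succ, m)
        done += succ
        backlog = m - succ                 # collided devices retry
    return done / n_devices
```

With 1,000 devices the channel is lightly loaded and nearly everyone succeeds; a massive-MTC load saturates the per-slot preamble capacity and the success fraction drops, which is the congestion regime the paper analyzes.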
F. Hämmerle et al., “Evaluation of Context Management Architectures: The Case of Context Framework and Context Broker,” Industrial Technology (ICIT), 2015 IEEE International Conference on, Seville, 2015, pp. 1870-1875. doi: 10.1109/ICIT.2015.7125369
Abstract: In recent years Context Management Systems have been increasingly adopted for processing sensory information in different fields of application, such as surveillance of construction areas, coordination of emergencies and production monitoring. While system architectures have been designed for specific domains, the literature offers little guidance on evaluation criteria for Context Management architectures. Nevertheless it is crucial to use the right criteria when the system has to face the requirements of industrial applications. While one system may be designed for reusable contextualisation mechanisms, others provide real-time information for Manufacturing Execution Systems (MES). In this paper we develop suitable scenarios and metrics for architecture evaluation and apply these to two exemplary real-world architectures. The development of scenarios was based on domain experts and researchers in a project concerning Cyber Physical Production Systems (CPPS), funded by the German Federal Ministry for Economic Affairs and Energy. Our results show that early architectural decisions have a strong influence on a system's performance and therefore on its applicability. The developed criteria also provide knowledge for system engineers in industrial practice.
Keywords: expert systems; production engineering computing; software architecture; software performance evaluation; CPPS; European industrial production; context broker; context framework; context management architecture evaluation; cyber physical production system; domain expert; system performance evaluation; systems applicability; Computer architecture; Context; Measurement; Production; Scalability; Sensors; Time factors; Architecture; Context Management; Industrial Internet; Industry 4.0; System Evaluation (ID#: 16-11051)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125369&isnumber=7125066
F. Zhang, Y. Wang and F. Kuang, “A Cognitive Network for Reliability Maintenance of Double-Path Routing in IEEE 802.15.4 Networks,” Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, Shenyang, 2015, pp. 82-87. doi: 10.1109/CYBER.2015.7287914
Abstract: In order to build reliable traffic handling that can cope with ever-changing scenarios and different QoS requirements in WSN-based M2M applications, a cognitive technology using a causality tree is proposed for the reliability maintenance of wireless sensor networks. Routing maintenance metrics are defined as basic initiatives for device maintenance. The metrics make it possible to guarantee quality of service for machine-to-machine communications. The causality tree provides a comprehensive view of the metrics, and possible events are analyzed to identify routing design issues concerning link states, link reliability, and correlated dynamics transitions in terms of RSSI and PRR. Device maintenance metrics categorize the proposed semantic metrics into six event classes. A global profile of the causality tree is finally put forward for each intelligent node. Physical experiments are used to evaluate the reliable performance of wireless nodes in scenarios of channel and electromagnetic interference.
Keywords: cognitive radio; quality of service; telecommunication network reliability; telecommunication network routing; wireless sensor networks; PRR; QoS requirements; RSSI; WSN-based M2M applications; causality tree; channel interference; cognitive technology; correlated dynamics transitions; device maintenance metrics; double-path routing; electromagnetic interference; link reliability; link states; machine-to-machine communications; quality of services; reliability maintenance; routing design issues; routing maintenance metrics; semantic metrics; wireless nodes; wireless sensor networks; Data transfer; Logic gates; Maintenance engineering; Measurement; Reliability; Routing; Wireless sensor networks; M2M; QoS; WSN; reliability; routing (ID#: 16-11052)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287914&isnumber=7287893
Cyber Physical Systems and Privacy 2015
The research work cited here looks at the Science of Security problem of Privacy in the context of cyber physical systems. The work was presented in 2015.
C. Konstantinou, M. Maniatakos, F. Saqib, S. Hu, J. Plusquellic and Y. Jin, “Cyber-Physical Systems: A Security Perspective,” Test Symposium (ETS), 2015 20th IEEE European, Cluj-Napoca, 2015, pp. 1-8. doi: 10.1109/ETS.2015.7138763
Abstract: A cyber-physical system (CPS) is a composition of independently interacting components, including computational elements, communications and control systems. Applications of CPS arise at different levels of integration, ranging from nation-wide power grids, to medium scale, such as the smart home, and small scale, e.g. ubiquitous health care systems including implantable medical devices. Cyber-physical systems are transforming how we interact with the physical world, with each system requiring a different level of security based on the sensitivity of the control system and the information it carries. Considering the remarkable progress in CPS technologies during recent years, advancement in security and trust measures is much needed to counter security violations and privacy leakage of integration elements. This paper focuses on security and privacy concerns at different levels of the composition and presents system level solutions for ensuring the security and trust of modern cyber-physical systems.
Keywords: data privacy; security of data; CPS; communications; computational elements; control systems; cyber-physical system; integration elements; nation-wide power grids; privacy leakage; security measure; security violations; smart home; trust measure; ubiquitous health care systems; Computer crime; Guidelines; Medical services; Smart grids; Smart homes (ID#: 16-11003)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7138763&isnumber=7138715
L. Vegh and L. Miclea, “A Simple Scheme for Security and Access Control in Cyber-Physical Systems,” Control Systems and Computer Science (CSCS), 2015 20th International Conference on, Bucharest, 2015, pp. 294-299. doi: 10.1109/CSCS.2015.13
Abstract: In a time when technology changes continuously, and the things you need today to run a certain system might not be needed tomorrow, security is a constant requirement. No matter what systems we have or how we structure them, no matter what means of digital communication we use, we are always interested in aspects like security, safety, and privacy. Cyber-physical systems are an example of this ever-advancing technology. We propose a complex security architecture that integrates several established methods such as cryptography, steganography and digital signatures. This architecture is designed not only to ensure security of communication by transforming data into secret code; it is also designed to control access to the system and to detect and prevent cyber attacks.
Keywords: authorisation; cryptography; digital signatures; steganography; access control; cyber attacks; cyber-physical system; security architecture; security requirement; system security; Computer architecture; Digital signatures; Encryption; Public key; cyber-physical systems; multi-agent systems (ID#: 16-11004)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168445&isnumber=7168393
L. Feng and B. McMillin, “Information Flow Quantification Framework for Cyber Physical System with Constrained Resources,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 50-59. doi: 10.1109/COMPSAC.2015.92
Abstract: In Cyber Physical Systems (CPSs), traditional security mechanisms such as cryptography and access control are not enough to ensure the security of the system. In a CPS, security is violated through complex interactions between the cyber and physical worlds, and, most insidiously, unintended information leakage through observable physical actions. Information flow analysis, which aims at controlling the way information flows among different entities, is better suited for CPSs. Information theory is widely used to quantify information leakage received by a program that produces a public output. Quantifying information leakage in CPSs can, however, be challenging due to implicit information flow between the cyber portion, the physical portion, and the outside world. This paper focuses on statistical methods to quantify information leakage in CPSs, especially, CPSs that allocate constrained resources. With aggregated physical observations, unintended information about the constrained resource might be leaked. The framework proposed is based on the advice tape concept of algorithmically quantifying information leakage and statistical analysis. An electric smart grid has been used as an example to develop confidence intervals of information leakage within a real CPS. The impact of this work is that it can be used in algorithmic design to allocate electric power to nodes while maximizing the uncertainty of the information flow to an attacker.
Keywords: data privacy; security of data; statistical analysis; CPSs; access control; aggregated physical observations; confidence intervals; constrained resources; cryptography; cyber physical system; electric power allocation; electric smart grid; information flow quantification framework; information leakage; information theory; security mechanisms; statistical analysis; unintended information; Algorithm design and analysis; Complexity theory; Entropy; Security; Smart grids; Standards; Uncertainty; advice tape; confidence interval; information flow; quantify (ID#: 16-11005)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273599&isnumber=7273573
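The information-theoretic idea behind leakage quantification in the Feng and McMillin paper can be illustrated with a toy sketch. This is not the paper's advice-tape framework; the function names and sample data below are invented for illustration. Empirical mutual information between a secret allocation and an attacker's aggregated observations estimates how many bits the observations reveal:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy, in bits, of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def leakage(secrets, observations):
    """Empirical mutual information I(S; O) = H(S) + H(O) - H(S, O):
    how many bits the observable actions reveal about the secret."""
    return (entropy(secrets) + entropy(observations)
            - entropy(list(zip(secrets, observations))))

# Toy data: a one-bit secret allocation and an imperfectly
# correlated physical observation (both invented for illustration).
secrets      = [0, 0, 1, 1, 0, 1, 0, 1]
observations = [0, 0, 1, 1, 0, 1, 0, 0]
print(round(leakage(secrets, observations), 3))  # ~0.549 bits leaked
```

A perfectly correlated observation would leak the full H(S) = 1 bit; an independent one would leak roughly zero, which is what a resource-allocation algorithm maximizing attacker uncertainty aims for.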
Y. Zhou, S. Chen, Z. Mo and Q. Xiao, “Point-to-Point Traffic Volume Measurement through Variable-Length Bit Array Masking in Vehicular Cyber-Physical Systems,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 51-60. doi: 10.1109/ICDCS.2015.14
Abstract: In this paper, we consider an important problem of privacy-preserving point-to-point traffic volume measurement in vehicular cyber physical systems (VCPS), whose focus is utilizing VCPS to enable automatic traffic data collection, and measuring point-to-point traffic volume while preserving the location privacy of all participating vehicles. The novel scheme that we propose tackles the efficiency, privacy, and accuracy problems encountered by previous solutions. Its applicability is demonstrated through both mathematical and numerical analysis. The simulation results also show its superior performance.
Keywords: data privacy; traffic engineering computing; vehicles; VCPS; automatic traffic data collection; location privacy; mathematical analysis; numerical analysis; privacy-preserving point-to-point traffic volume measurement; variable-length bit array masking; vehicular cyber-physical systems; Accuracy; Arrays; Privacy; Servers; Vehicles; Volume measurement (ID#: 16-11006)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164892&isnumber=7164877
I. Halcu, D. Nunes, V. Sgârciu and J. S. Silva, “New Mechanisms for Privacy in Human-in-the-Loop Cyber-Physical Systems,” Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2015 IEEE 8th International Conference on, Warsaw, 2015, pp. 418-423. doi: 10.1109/IDAACS.2015.7340770
Abstract: Nowadays, we witness a tremendous increase in systems that sense various facets of humans and their surrounding environments. In particular, the detection of human emotions can lead to emotionally-aware applications that use this information to help improve people's daily lives and offer new business opportunities. We address the issue of maintaining privacy for this type of applications. In this paper, we propose a general model that focuses on privacy-preserving mechanisms for a Human-in-the-loop emotionally-aware Cyber-Physical System (HiTLCPS). As a proof-of-concept, we also present an emotionally aware application that attempts to positively impact students' lives without compromising their privacy.
Keywords: data privacy; emotion recognition; HiTLCPS; business opportunities; emotionally-aware applications; human emotions; human-in-the-loop emotionally-aware cyber-physical systems; privacy-preserving mechanisms; Context; Data privacy; Privacy; Security; Sensors; Smart phones; Social network services; Anonymity; Emotion detection; Human-in-the-loop; Privacy (ID#: 16-11007)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7340770&isnumber=7340677
S. Götz, I. Gerostathopoulos, F. Krikava, A. Shahzada and R. Spalazzese, “Adaptive Exchange of Distributed Partial Models@run.time for Highly Dynamic Systems,” Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2015 IEEE/ACM 10th International Symposium on, Florence, 2015, pp. 64-70. doi: 10.1109/SEAMS.2015.25
Abstract: Future software systems will be highly dynamic. We are already experiencing, for example, a world where Cyber-Physical Systems (CPSs) play a more and more crucial role. CPSs integrate computational, physical, and networking elements; they comprise a number of subsystems, or entities, that are connected and work together. The open and highly distributed nature of the resulting system gives rise to unanticipated runtime management issues such as the organization of subsystems and resource optimization. In this paper, we focus on the problem of knowledge sharing among cooperating entities of a highly distributed and self-adaptive CPS. Specifically, the research question we address is how to minimize the knowledge that needs to be shared among the entities of a CPS. If all entities share all their knowledge with each other, the performance, energy and memory consumption as well as privacy are unnecessarily negatively impacted. To reduce the amount of knowledge to share between CPS entities, we envision a role-based adaptive knowledge exchange technique working on partial runtime models, i.e., models reflecting only part of the state of the CPS. Our approach supports two adaptation dimensions: the runtime type of knowledge and conditions over the knowledge. We illustrate the feasibility of our technique by discussing its realization based on two state-of-the-art approaches.
Keywords: distributed processing; knowledge management; optimisation; CPS; adaptive exchange; computational elements; cooperating entities; cyber-physical systems; distributed partial models@run.time; highly dynamic systems; knowledge sharing; networking elements; partial runtime models; physical elements; resource optimization; role-based adaptive knowledge exchange technique; software systems; Adaptation models; Cleaning; Collaboration; Object oriented modeling; Robot kinematics; Runtime; Cyber-Physical Systems; Model synchronization; Models@run.time (ID#: 16-11008)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194658&isnumber=7194643
S. Unger and D. Timmermann, “DPWSec: Devices Profile for Web Services Security,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/ISSNIP.2015.7106961
Abstract: As cyber-physical systems (CPS) build a foundation for visions such as the Internet of Things (IoT) or Ambient Assisted Living (AAL), their communication security is crucial so they cannot be abused for invading our privacy and endangering our safety. In the past years many communication technologies have been introduced for critically resource-constrained devices such as simple sensors and actuators as found in CPS. However, many do not consider security at all or in a way that is not suitable for CPS. Also, the proposed solutions are not interoperable although this is considered a key factor for market acceptance. Instead of proposing yet another security scheme, we looked for an existing, time-proven solution that is widely accepted in a closely related domain as an interoperable security framework for resource-constrained devices. The candidate of our choice is the Web Services Security specification suite. We analysed its core concepts and isolated the parts suitable and necessary for embedded systems. In this paper we describe the methodology we developed and applied to derive the Devices Profile for Web Services Security (DPWSec). We discuss our findings by presenting the resulting architecture for message level security, authentication and authorization and the profile we developed as a subset of the original specifications. We demonstrate the feasibility of our results by discussing the proof-of-concept implementation of the developed profile and the security architecture.
Keywords: Internet; Internet of Things; Web services; ambient intelligence; assisted living; security of data; AAL; CPS; DPWSec; IoT; ambient assisted living; communication security; cyber-physical system; devices profile for Web services security; interoperable security framework; message level security; resource-constrained devices; Authentication; Authorization; Cryptography; Interoperability; Web services; Applied Cryptography; Cyber-Physical Systems (CPS); DPWS; Intelligent Environments; Internet of Things (IoT); Usability (ID#: 16-11009)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106961&isnumber=7106892
X. Li and T. Yang, “Signal Processing Oriented Approach for Big Data Privacy,” High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, Daytona Beach Shores, FL, 2015, pp. 275-276. doi: 10.1109/HASE.2015.23
Abstract: This paper addresses the challenge of big data security by exploiting signal processing theories. We propose a new big data privacy protocol that scrambles data via artificial noise and secret transform matrices. The utility of the scrambled data is maintained, as demonstrated by a cyber-physical system application. We further outline the proof of the proposed protocol's privacy by considering the limitations of blind source separation and compressive sensing.
Keywords: Big Data; compressed sensing; data privacy; matrix algebra; security of data; Big Data privacy; Big Data security; artificial noise; blind source separation; compressive sensing; secret transform matrix; signal processing; Big data; Data privacy; Noise; Power demand; Protocols; Vectors; big data; cyber-physical systems; privacy (ID#: 16-11010)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027443&isnumber=7027398
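The scrambling idea Li and Yang describe (artificial noise plus a secret transform matrix) can be sketched in a few lines. This is only a toy under invented parameters, not the paper's protocol, whose privacy argument rests on the hardness of blind source separation and compressive sensing:

```python
import math
import random

def matvec(M, x):
    """Multiply a small matrix (given as a list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def scramble(x, M, noise_scale):
    """Add artificial noise, then apply a secret transform matrix M."""
    noisy = [v + random.gauss(0.0, noise_scale) for v in x]
    return matvec(M, noisy)

# A secret 2x2 rotation serves as the transform here; being orthogonal,
# its inverse is its transpose, so only the key holder can unscramble.
theta = 1.1  # the secret "key" in this toy setup
M     = [[math.cos(theta), -math.sin(theta)],
         [math.sin(theta),  math.cos(theta)]]
M_inv = [[ math.cos(theta), math.sin(theta)],
         [-math.sin(theta), math.cos(theta)]]

y = scramble([3.0, 4.0], M, noise_scale=0.0)        # noiseless for clarity
print([round(v, 6) for v in matvec(M_inv, y)])      # → [3.0, 4.0]
```

With a nonzero `noise_scale` the holder of `M_inv` recovers the data only approximately, which is the utility/privacy trade-off such schemes tune.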
K. M. Alam, A. Sopena and A. E. Saddik, “Design and Development of a Cloud Based Cyber-Physical Architecture for the Internet-of-Things,” 2015 IEEE International Symposium on Multimedia (ISM), Miami, FL, USA, 2015, pp. 459-464. doi: 10.1109/ISM.2015.96
Abstract: Internet-of-Things (IoT) is considered as the next big disruptive technology field whose main goal is to achieve social good by enabling collaboration among physical things or sensors. We present a cloud based cyber-physical architecture to leverage the Sensing as-a-Service (SenAS) model, where every physical thing is complemented by a cloud based twin cyber process. In this model, things can communicate using direct physical connections or through the cyber layer using peer-to-peer inter process communications. The proposed model offers simultaneous communication channels among groups of things by uniquely tagging each group with a relationship ID. An intelligent service layer ensures custom privacy and access rights management for the sensor owners. We also present the implementation details of an IoT platform and demonstrate its practicality by developing case study applications for the Internet-of-Vehicles (IoV) and the connected smart home.
Keywords: Ad hoc networks; Cloud computing; Intelligent sensors; Peer-to-peer computing; Vehicles; Wireless sensor networks; Connected Smart Home; Cyber-Physical Systems; Emulator; Internet-of-Things; Sensing-as-a-Service; Vehicular Ad-hoc Networks (ID#: 16-11011)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7442379&isnumber=7442255
B. McMillin, “Distributed Intelligence in the Electric Smart Grid,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 48-48. doi: 10.1109/COMPSAC.2015.338
Abstract: Cyber-physical systems are physical infrastructures with deeply embedded computation. As these systems move beyond traditional control and response into more sophisticated planning and interaction, particularly with human elements, the need for intelligent cyber-physical systems emerges. This talk uses advanced electric smart grid technology as an example of distributed intelligence in a cyber-physical system from the point of view of energy management, human interfaces, and security and privacy.
Keywords: power engineering computing; smart power grids; cyberphysical system; distributed intelligence; electric smart grid; energy management; human interface; Artificial intelligence; Conferences; Cyber-physical systems; Energy management; Privacy; Security; Smart grids; Human-Machine Interface; Security; Usability; physical (ID#: 16-11012)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273597&isnumber=7273573
X. Liu, A. Doboli and F. Ye, “Optimized Local Control Strategy for Voice-Based Interaction-Tracking Badges for Social Applications,” Computer Design (ICCD), 2015 33rd IEEE International Conference on, New York, NY, 2015, pp. 688-695. doi: 10.1109/ICCD.2015.7357182
Abstract: This paper presents a method to design optimized local control strategies for Cyber-Physical Systems that produce reliable data models for social applications. Data models have different semantics and abstraction levels. The local control strategies manage ad-hoc nano-clouds of embedded computing and communication nodes (CCNs) used for data collection, modeling, and communication. Control strategies consider tradeoffs defined by the resource constraints of embedded CCNs (e.g., computing power, communication bandwidth, and energy), assurance requirements (e.g., robustness) of the models, and privacy of users. Experiments evaluate and demonstrate the effectiveness of the control strategies for nano-clouds composed of smart voice-based interaction-tracking badges.
Keywords: cyber-physical systems; data privacy; embedded systems; human computer interaction; social sciences computing; ubiquitous computing; CCN; ad-hoc nano-clouds; assurance requirements; embedded computing and communication nodes; optimized local control strategy; reliable data models; smart voice-based interaction-tracking badges; social applications; user privacy; Bandwidth; Computational modeling; Data acquisition; Data models; Data privacy; Privacy; Reliability (ID#: 16-11013)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357182&isnumber=7357071
J. Sliwa, “Statistical Challenges for Quality Assessment of Smart Medical Devices,” 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, 2015, pp. 380-385. doi: 10.1109/3PGCIC.2015.96
Abstract: Connected medicine, using smart (software based and networked) medical devices, is frequently presented as the major disruptive trend in health care. Such devices will however be broadly used only if they are “prescribed” by the hospitals as a part of a therapy and are reimbursed by the insurances. For this we need the proof of their safety, medical efficacy and economic efficiency. Aside from obligatory clinical trials we need an extensive system of post-market surveillance, because: a medical device is a part of a complex cyber-physical system, with humans in the loop / the environment cannot be sufficiently defined / humans react differently to the therapy, and they also behave differently / after every software upgrade the device is not the same as before. In their operation such devices generate huge amounts of data that can be reused for such analysis. Technically oriented people believe it can be done using a Big Data Analytics system without a deeper understanding of the underlying processes. It is doubtful whether such an approach can deliver useful results. The main problems seem to be: unbalanced cohort / various patient groups with various preferences / multiple quality parameters (basic algorithm, signal propagation, battery, security & privacy, obtrusiveness, etc.) / multiple variants (operating modes, device settings) / variability of the device and of the environment. When we transform data into “actionable knowledge”, especially if the generated decisions influence human health, utmost care has to be applied. The goal of this paper is to present the complexity of the problem, warn against hasty, purely technical solutions, raise interest among specialists in health statistics and ignite an interdisciplinary cooperation to solve it.
Keywords: Big Data; computational complexity; cyber-physical systems; data analysis; medical computing; socio-economic effects; statistical analysis; actionable knowledge; big data analytics system; complex cyber-physical system; connected medicine; device variability; disruptive trend; economic efficiency; health care; health statistics; hospitals; human health; insurances; medical efficacy; multiple quality parameters; obligatory clinical trials; patient groups; post-market surveillance; problem complexity; quality assessment; smart medical devices; software upgrade; statistical challenges; technically oriented people; unbalanced cohort; Batteries; Biomedical imaging; Biomedical monitoring; Hospitals; Monitoring; Smart phones; Software; Evidence Based Medicine; Medical Statistics; Smart Medical Devices (ID#: 16-11014)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424592&isnumber=7424499
X. Yin and S. Lafortune, “A New Approach for Synthesizing Opacity-Enforcing Supervisors for Partially-Observed Discrete-Event Systems,” 2015 American Control Conference (ACC), Chicago, IL, 2015, pp. 377-383. doi: 10.1109/ACC.2015.7170765
Abstract: Opacity is a confidentiality property for partially-observed discrete-event systems relevant to the analysis of security and privacy in cyber and cyber-physical systems. It captures the plausible deniability of the system's “secret” in the presence of an outside observer that is potentially malicious. In this paper, we consider the enforcement of opacity on systems modeled by finite-state automata. We assume that the given system is not opaque and the objective is to restrict its behavior by supervisory control in order to enforce opacity of its secret. We consider the general setting of supervisory control under partial observations where the controllable events need not all be observable. Our approach for the synthesis of an opacity enforcing supervisor is based on the construction of a new transition system that we call the “All Inclusive Controller for Opacity” (or AIC-O). The AIC-O is a finite bipartite transition system that embeds in its transition structure all valid opacity enforcing supervisors. We present an algorithm for the construction of the AIC-O and discuss its properties. We then develop a synthesis algorithm, based on the AIC-O, that constructs a “maximally permissive” opacity-enforcing supervisor. Our approach generalizes previous approaches in the literature for opacity enforcement by supervisory control.
Keywords: control system synthesis; discrete event systems; finite state machines; observers; AIC-O; all inclusive controller for opacity transition system; confidentiality property; cyber-physical systems; finite bipartite transition system; finite-state automata; opacity-enforcing supervisor synthesis algorithm; outside observer; partially-observed discrete-event systems; privacy analysis; security analysis; supervisory control; transition structure; Automata; Discrete-event systems; Games; Observers; Security; Supervisory control (ID#: 16-11015)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170765&isnumber=7170700
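Yin and Lafortune's synthesis is built on the intruder's state estimates. As a minimal illustration of the underlying opacity check only (not the AIC-O construction; the automaton encoding and helper names below are invented), current-state opacity can be verified with a standard observer / subset construction: the secret is opaque if no reachable state estimate lies entirely inside the secret set.

```python
def observer_states(transitions, initial, observable):
    """Subset construction over the observable projection of an NFA.
    transitions: dict mapping (state, event) -> set of successor states."""
    def uo_closure(states):
        # Grow a state estimate along unobservable events to a fixpoint.
        closure, changed = set(states), True
        while changed:
            changed = False
            for (s, e), targets in transitions.items():
                if s in closure and e not in observable and not targets <= closure:
                    closure |= targets
                    changed = True
        return frozenset(closure)

    reached, frontier = set(), [uo_closure({initial})]
    while frontier:
        estimate = frontier.pop()
        if estimate in reached:
            continue
        reached.add(estimate)
        for e in observable:
            successors = set()
            for s in estimate:
                successors |= transitions.get((s, e), set())
            if successors:
                frontier.append(uo_closure(successors))
    return reached

def is_current_state_opaque(transitions, initial, observable, secret):
    """Opaque iff no reachable observer estimate lies wholly inside the
    secret set, i.e. the intruder is never certain the state is secret."""
    return all(not estimate <= secret
               for estimate in observer_states(transitions, initial, observable))

# Toy automaton: after observable event 'a' the system may be in state 1
# (secret) or state 2, so the observer cannot pin down the secret.
T = {(0, 'a'): {1, 2}}
print(is_current_state_opaque(T, 0, observable={'a'}, secret={1}))  # → True
```

Supervisory control as in the paper then restricts controllable events so that every estimate reachable in the closed loop keeps this property, maximally permissively.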
P. Carmona, D. Nunes, D. Raposo, D. Silva, J. S. Silva and C. Herrera, “Happy Hour - Improving Mood with an Emotionally Aware Application,” Innovations for Community Services (I4CS), 2015 15th International Conference on, Nuremberg, 2015, pp. 1-7. doi: 10.1109/I4CS.2015.7294480
Abstract: Mobile sensing in Cyber-Physical Systems has been evolving proportionally with smartphones. In fact, we are witnessing a tremendous increase in systems that sense various facets of human beings and their surrounding environments. In particular, the detection of human emotions can lead to emotionally-aware applications that use this information to benefit people's daily lives. This work presents the implementation of a Human-in-the-loop emotionally-aware Cyber-Physical System that attempts to positively impact its user's mood through moderate walking exercise. Data from smartphone sensors, a smartshirt's electrocardiogram and weather information from a web API are processed through a machine learning algorithm to infer emotional states. When negative emotions are detected, the application timely suggests walking exercises, while providing real-time information regarding nearby points of interest. This information includes events, background music, attendance, agitation and general mood. In addition, the system also dynamically adapts privacy and networking configurations based on emotions. The sharing of the user's location on social networks and the device's networking interfaces are configured according to user-defined rules in order to reduce frustration and provide a better Quality of Experience.
Keywords: Internet; data privacy; electrocardiography; emotion recognition; learning (artificial intelligence); mobile computing; quality of experience; social networking (online); Web API; emotionally aware application; human emotion detection; human-in-the-loop emotionally-aware cyber-physical system; machine learning algorithm; mobile sensing; networking configurations; privacy configurations; quality of experience; smartphone sensors; smartphones; smartshirt electrocardiogram; social networks; weather information; Accuracy; Androids; Humanoid robots; Mood; Privacy; Sensors; Smart phones; Emotion Inference; Human-in-the-loop; Network Management; Smartphones (ID#: 16-11016)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294480&isnumber=7294473
C. W. Axelrod, “Enforcing Security, Safety and Privacy for the Internet of Things,” Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, Farmingdale, NY, 2015, pp. 1-6. doi: 10.1109/LISAT.2015.7160214
Abstract: The connecting of physical units, such as thermostats, medical devices and self-driving vehicles, to the Internet is happening very quickly and will most likely continue to increase exponentially for some time to come. Valid concerns about security, safety and privacy do not appear to be hampering this rapid growth of the so-called Internet of Things (IoT). There have been many popular and technical publications by those in software engineering, cyber security and systems safety describing issues and proposing various “fixes.” In simple terms, they address the “why” and the “what” of IoT security, safety and privacy, but not the “how.” There are many cultural and economic reasons why security and privacy concerns are relegated to lower priorities. Also, when many systems are interconnected, the overall security, safety and privacy of the resulting systems of systems generally have not been fully considered and addressed. In order to arrive at an effective enforcement regime, we will examine the costs of implementing suitable security, safety and privacy and the economic consequences of failing to do so. We evaluated current business, professional and government structures and practices for achieving better IoT security, safety and privacy, and found them lacking. Consequently, we proposed a structure for ensuring that appropriate security, safety and privacy are built into systems from the outset. Within such a structure, enforcement can be achieved by incentives on one hand and penalties on the other. Determining the structures and rules necessary to optimize the mix of penalties and incentives is a major goal of this paper.
Keywords: Internet of Things; data privacy; security of data; IoT privacy; IoT safety; IoT security; cyber security; software engineering; Government; Privacy; Safety; Security; Software; Standards; Internet of Things (IoT); privacy; safety; security; software liability; system development lifecycle (SDLC); time to value; value hills; vulnerability marketplace (ID#: 16-11017)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160214&isnumber=7160171
“Internet and Collaboration: Challenges and Research Directions,” 2015 IEEE Conference on Collaboration and Internet Computing (CIC), Hangzhou, China, 2015, pp. xxiv-xxiv. doi: 10.1109/CIC.2015.50
Abstract: Summary form only given, as follows. Internet has become the ubiquitous fabric that enabled the growth of infrastructures, applications, and technologies that significantly enhance global interactions and collaborations with significant and increasing impact on society. Unprecedented cyber-social and cyber-physical infrastructures, systems, and applications that span geographic boundaries are becoming reality. Technology has evolved from standalone tools to open systems supporting collaboration in multi-organizational settings, and from general purpose tools to specialized collaboration platforms. Increasingly, individuals and organizations have relied on Internet-enabled collaboration between distributed teams of humans, computer applications, or autonomous robots to achieve higher productivity and produce collaboratively developed products that would have been infeasible just a few years ago. This panel will explore and debate on the challenges and research directions related to Collaboration and Internet computing areas. Some key issues that will be discussed in this panel are, but not limited to: (1) What are new key challenges in systems, applications and networking areas related to CIC? Are there specific limitations in these areas that need a fundamental redesign? (2) How are the global safety, security and privacy issues reshaping within the context of the CIC area? (3) What are potential transformative, killer applications that CIC can enable and what are the challenges towards achieving them? A record of the panel discussion was not made available for publication as part of the conference proceedings.
Keywords: Collaboration; Computer applications; Internet; Open systems (ID#: 16-11018)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7423058&isnumber=7423045
S. Subhani, M. Gibescu and W. L. Kling, “Autonomous Control of Distributed Energy Resources via Wireless Machine-to-Machine Communication; a Survey of Big Data Challenges,” Environment and Electrical Engineering (EEEIC), 2015 IEEE 15th International Conference on, Rome, 2015, pp. 1437-1442. doi: 10.1109/EEEIC.2015.7165381
Abstract: It is anticipated that the growing number of distributed energy resources and other cyber physical components of smart grids will make the management of the distribution grid more complex. In this survey paper, four discernible challenges related to big data and the enablement of autonomous grid operation are investigated: (1) the technical readiness level of cloud computing services, (2) limitations of wireless telecommunication technology, (3) smart meter related privacy issues and (4) the intrinsic uncertainty in data analytics. The investigated challenges indicate that the current performance of cloud computing and wireless telecommunication technology do not readily enable autonomous decentralized secondary control of power systems. Moreover, technical and legislative solutions have to be developed to ensure consumer privacy, prior to applying data analytics on smart meter data.
Keywords: Big Data; cloud computing; control engineering computing; data analysis; decentralised control; power system control; power system management; radio networks; smart meters; smart power grids; Big Data; cloud computing service; cyber physical component; data analytics; distributed energy resource autonomous control; distribution grid management; power system autonomous decentralized secondary control; smart grid; smart meter; wireless machine-to-machine communication; Big data; Cloud computing; Smart grids; Smart meters; Wireless communication; Wireless sensor networks; big data analytic; distributed resource; privacy (ID#: 16-11019)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165381&isnumber=7165173
Y. Park, “Connected Smart Buildings, a New Way to Interact with Buildings,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 5-5. doi: 10.1109/IC2E.2015.57
Abstract: Summary form only given. Devices, people, information and software applications rarely live in isolation in modern building management. For example, networked sensors that monitor the performance of a chiller are common, and collected data are delivered to building automation systems to optimize energy use. Detected possible failures are also handed to facility management staff for repairs. Physical and cyber security services have to be incorporated to prevent improper access not only to HVAC (Heating, Ventilation, Air Conditioning) equipment but also to control devices. Harmonizing these connected sensors, control devices, equipment and people is key to providing more comfortable, safe and sustainable buildings. Nowadays, devices with embedded intelligence and communication capabilities can interact with people directly. Traditionally, a few selected people (e.g., facility managers in the building industry) have access and program the device with a fixed operating schedule, while a device has very limited connectivity to an operating environment and context. Modern connected devices will learn and interact with users and other connected things. This would be a fundamental shift in communication from unidirectional to bi-directional. A manufacturer will learn how their products and features are being accessed and utilized. An end user, or a device on behalf of a user, can interact and communicate with a service provider or a manufacturer without going through a distributor, on an almost real-time basis. This will require different business strategies and product development behaviors to serve connected customers' demands. Connected things produce enormous amounts of data that raise many questions and technical challenges in data management, analysis and associated services. In this talk, we brief some of the challenges that we have encountered in developing connected building solutions and services. More specifically, (1) semantic interoperability requirements among smart sensors, actuators, lighting, security and control and business applications, (2) engineering challenges in managing massively large time-sensitive multi-media data in a cloud at global scale, and (3) security and privacy concerns are presented.
Keywords: HVAC; building management systems; intelligent sensors; actuators; building automation systems; building management; business strategy; chiller performance; connected smart buildings; control devices; cyber security services; data management; facility management staffs; heating-ventilation-air conditioning equipment; lighting; networked sensors; product development behaviors; service provider; smart sensors; time sensitive multimedia data; Building automation; Business; Conferences; Intelligent sensors; Security; Building Management; Cloud; Internet of Things (ID#: 16-11020)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092892&isnumber=7092808
P. Volf, “NAS-Wide Simulation of Air Traffic with ATC Behavior Model,” 2015 Integrated Communication, Navigation, and Surveillance Conference (ICNS), Herndon, VA, USA, 2015, pp. 1-13. doi: 10.1109/ICNSURV.2015.7121322
Abstract: Overview of the presenting group and the AgentFly system (from presentation slides). The Agent Technology Center comprises roughly 35 researchers, PhD/MSc students and CTU faculty members, with the objective of fundamental and applied research, empirical evaluation and technology transfer. Core competences: multiagent modeling and simulation; multiagent planning and coordination; multiagent data analysis; adversarial reasoning and game theory. Application domains: air traffic and ground transportation; cyber security, privacy and steganalysis; UAV and ground robotics; maritime physical security. AgentFly is a complex multi-agent system developed through multiple research activities since 2006, funded by the US Air Force, FAA (NextGen), US Army, ONR and the Czech Government, with university cooperation (Drexel, US; Bradley, US; Linkoping, Sweden; TU Dresden, Germany) and industrial cooperation (NASA, US; BAE Systems, UK; SAAB, Sweden; CS SOFT, Czech; DSTO, DoD Australia).
Keywords: (not provided) (ID#: 16-11021)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7121322&isnumber=7121207
S. Moses, J. Mercado, A. Larson and D. Rowe, “Touch Interface and Keylogging Malware,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. 86-91. doi: 10.1109/INNOVATIONS.2015.7381520
Abstract: Software keyloggers have been used to spy on computer users to track activity or gather sensitive information for decades. Their primary focus has been to capture keystroke data from physical keyboards. However, since the release of Microsoft Windows 8 in 2012, touchscreen personal computers have become much more prevalent, introducing the use of on-screen keyboards which afford users an alternative keystroke input method. Smart cities are designed to enhance and improve the quality of life of city populations while reducing cost and resource consumption. As new technology is developed to create safe, renewable, and sustainable environments, we introduce additional risk that mission critical data and access credentials may be stolen via malicious keyloggers. In turn, cyber-attacks targeting critical infrastructure using this data could result in widespread catastrophic systems failure. In order to protect society in the age of smart cities it is vital that security implications are considered as this technology is implemented. In this paper we investigate the capabilities of keyloggers to capture keystrokes from an on-screen (virtual) keyboard and demonstrate that different keyloggers respond very differently to on-screen keyboard input. We suggest a number of future studies that could be performed to further understand the security implications presented by on-screen keyboards to smart cities as they relate to keyloggers.
Keywords: invasive software; keyboards; touch sensitive screens; user interfaces; Microsoft Windows; cyber attacks; keylogging malware; keystroke input method; on-screen keyboards; software keyloggers; touch interface malware; virtual keyboard; Computers; Hardware; Keyboards; Malware; Operating systems; Malware; Privacy; Software Security (ID#: 16-11022)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381520&isnumber=7381480
A. Ouaddah, I. Bouij-Pasquier, A. Abou Elkalam and A. Ait Ouahman, “Security Analysis and Proposal of New Access Control Model in the Internet of Thing,” Electrical and Information Technologies (ICEIT), 2015 International Conference on, Marrakech, 2015, pp. 30-35. doi: 10.1109/EITech.2015.7162936
Abstract: The Internet of Things (IoT) represents a concept where the barriers between the real world and the cyber-world are progressively annihilated through the inclusion of everyday physical objects combined with an ability to provide smart services. These services are creating more opportunities but at the same time bringing new challenges, in particular security and privacy concerns. To address this issue, an access control management system must be implemented. This work introduces a new access control framework for the IoT environment, precisely the Web of Things (WoT) approach, called “SmartOrBAC,” based on the OrBAC model. SmartOrBAC puts the context-aware concern first and deals with the complexity of constrained-resource environments. To achieve these goals, a list of detailed IoT security requirements and needs is drawn up in order to establish the guidelines of SmartOrBAC. Then, the OrBAC model is analyzed and extended with regard to these requirements to specify local as well as collaborative access control rules; these security policies are enforced by applying web services mechanisms, mainly the RESTful approach. Finally, the most important works that emphasize access control in the IoT environment are discussed.
Keywords: Internet of Things; Web services; authorisation; ubiquitous computing; Internet of Thing; RESTFUL approach; SmartOrBAC; Web of Things; Web services; collaboration access control rules; context aware concern; cyber-world; new access control model; security analysis; Access control; Biomedical monitoring; Monitoring; Organizations; Scalability; Usability; OrBAC; access control model; internet of things; privacy; security policy; web of things (ID#: 16-11023)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162936&isnumber=7162923
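The SmartOrBAC framework above grants abstract permissions to (organization, role, activity, view) tuples, activated only when a context holds. As a rough illustration of that OrBAC-style check, the sketch below uses entirely hypothetical names and a flat in-memory policy, not the authors' implementation:

```python
from dataclasses import dataclass

# Minimal OrBAC-style permission check.  All names (organizations, roles,
# views, contexts) and the flat set-based policy are hypothetical.
@dataclass(frozen=True)
class Permission:
    org: str
    role: str
    activity: str
    view: str
    context: str

POLICY = {
    Permission("clinic", "nurse", "read", "patient-vitals", "on-shift"),
    Permission("clinic", "doctor", "write", "patient-vitals", "on-shift"),
}

def is_permitted(org, role, activity, view, active_contexts):
    """Grant access only if some abstract rule matches the request and
    its context currently holds (the context-aware concern SmartOrBAC
    puts first)."""
    return any(p.context in active_contexts
               for p in POLICY
               if (p.org, p.role, p.activity, p.view)
               == (org, role, activity, view))
```

A request is therefore denied either because no rule covers it at all or because the rule's context (e.g., the shift) is not currently active.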
T. Veugen and Z. Erkin, “Content-Based Recommendations with Approximate Integer Division,” Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, South Brisbane, QLD, 2015, pp. 1802-1806. doi: 10.1109/ICASSP.2015.7178281
Abstract: Recommender systems have become a vital part of e-commerce and online media applications, since they increase profit by generating personalized recommendations for customers. As one of the techniques to generate recommendations, content-based algorithms offer items or products that are most similar to those previously purchased or consumed. These algorithms rely on user-generated content to compute accurate recommendations. Collecting and storing such data, which is considered to be privacy-sensitive, creates serious privacy risks for the customers. Among the threats: service providers could process the collected rating data for other purposes, sell it to third parties, or fail to provide adequate physical security. In this paper, we propose a cryptographic approach to protect the privacy of individuals in a recommender system. Our proposal is founded on homomorphic encryption, which is used to obscure the private rating information of the customers from the service provider. Our proposal explores basic and efficient cryptographic techniques to generate private recommendations using a server-client model, which neither relies on (trusted) third parties, nor requires interaction with peer users. The main strength of our contribution lies in providing a highly efficient division protocol which enables us to hide commercially sensitive similarity values, which was not the case in previous works.
Keywords: approximation theory; cryptography; electronic commerce; integer programming; recommender systems; approximate integer division; content based algorithms; content based recommendations; cryptographic approach; cryptographic techniques; e-commerce; homomorphic encryption; online media applications; personalized recommendations; recommender systems; serious privacy risks; server-client model; service providers; user generated content; Computational modeling; Protocols; Recommender systems; homomorphic encryption; privacy; secure division; secure multi-party computation (ID#: 16-11024)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178281&isnumber=7177909
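The protocol above builds on additively homomorphic encryption, which lets a server combine ratings it cannot read. A minimal Paillier-style sketch follows; the toy 14-bit primes are for illustration only and this is not the authors' division protocol, just the homomorphic-addition property it rests on:

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen():
    # Fixed toy primes keep the demo fast; a real deployment needs
    # ~2048-bit primes.  Public key: n.  Private key: (n, lam, mu).
    p, q = 10007, 10009
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because the generator g is n + 1
    return n, (n, lam, mu)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With g = n + 1, g^m mod n^2 simplifies to 1 + m*n.
    return (1 + m * n) % n2 * pow(r, n, n2) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

# Multiplying ciphertexts adds plaintexts: the server can aggregate
# ratings without ever decrypting an individual one.
```

For example, `decrypt(priv, encrypt(n, 4) * encrypt(n, 5) % (n * n))` recovers 9 without either ciphertext ever being opened individually.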
R. Dong, W. Krichene, A. M. Bayen and S. S. Sastry, “Differential Privacy of Populations in Routing Games,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 2798-2803. doi: 10.1109/CDC.2015.7402640
Abstract: As our ground transportation infrastructure modernizes, the large amount of data being measured, transmitted, and stored motivates an analysis of the privacy aspect of these emerging cyber-physical technologies. In this paper, we consider privacy in the routing game, where the origins and destinations of drivers are considered private. This is motivated by the fact that this spatiotemporal information can easily be used as the basis for inferences about a person's activities. More specifically, we consider the differential privacy of the mapping from the amount of flow for each origin-destination pair to the traffic flow measurements on each link of a traffic network. We use a stochastic online learning framework for the population dynamics, which is known to converge to the Nash equilibrium of the routing game. We analyze the sensitivity of this process and provide theoretical guarantees on the convergence rates as well as differential privacy values for these models. We confirm these with simulations on a small example.
Keywords: convergence; data privacy; game theory; learning (artificial intelligence); security of data; stochastic processes; traffic information systems; transportation; Nash equilibrium; convergence rates; cyber-physical technology; differential privacy; driver destination; driver origin; ground transportation infrastructure modernization; person activity; population dynamics; privacy analysis; routing game; spatiotemporal information; stochastic online learning framework; traffic flow measurement; traffic network; Games; Privacy; Routing; Sociology; Statistics; Vehicles; Yttrium (ID#: 16-11025)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7402640&isnumber=7402066
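Differential-privacy guarantees like those analyzed above are most commonly instantiated with the Laplace mechanism. The generic sketch below (not the paper's routing-game-specific analysis) releases a link flow count with noise scaled to sensitivity/epsilon, assuming unit sensitivity per driver:

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5            # uniform on [-0.5, 0.5)
    if u == -0.5:                     # avoid log(0) on the boundary
        u = 0.0
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    """epsilon-differentially-private release of a count: one driver can
    change a link flow by at most `sensitivity`, so Laplace noise with
    scale sensitivity/epsilon masks any individual's contribution."""
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means stronger privacy and larger noise; averaged over many releases the noisy counts remain unbiased estimates of the true flows.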
P. Wang, A. Ali and W. Kelly, “Data Security and Threat Modeling for Smart City Infrastructure,” Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, Shanghai, 2015, pp. 1-6. doi: 10.1109/SSIC.2015.7245322
Abstract: Smart city opens up data with a wealth of information that brings innovation and connects government, industry and citizens. Cyber insecurity, on the other hand, has raised concerns about data privacy and threats to smart city systems. In this paper, we look into security issues in smart city infrastructure from both technical and business operation perspectives and propose an approach to analyze threats and to improve data security of smart city systems. The assessment process takes hundreds of features into account. Data collected during the assessment stage are then imported into an algorithm that calculates the threat factor. Mitigation strategies are provided to help reduce the risk of smart city systems being hacked into and to protect data from being misused, stolen or made identifiable. The study shows that the threat factor can be reduced significantly by following this approach. Experiments show that this comprehensive approach can reduce the risks of cyber intrusions to smart city systems. It can also deal with privacy concerns in this big data arena.
Keywords: Big Data; data protection; security of data; smart cities; big data; cyber insecurity; cyber intrusions; data privacy; data security; smart city infrastructure; threat modeling; Business; Encryption; Firewalls (computing); Malware; cyber physical; smart city (ID#: 16-11026)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245322&isnumber=7245317
![]() |
Decomposition and Security 2015 |
Mathematical decomposition is often used to address network flows. For the Science of Security community, decomposition is a useful method of dealing with cyber physical systems issues, metrics, and compositionality. The work cited here was presented in 2015.
R. M. Savola, P. Savolainen, A. Evesti, H. Abie and M. Sihvonen, “Risk-Driven Security Metrics Development for an E-Health IoT Application,” Information Security for South Africa (ISSA), 2015, Johannesburg, 2015, pp. 1-6. doi: 10.1109/ISSA.2015.7335061
Abstract: Security and privacy for e-health Internet-of-Things applications is a challenge arising due to the novelty and openness of the solutions. We analyze the security risks of an envisioned e-health application for elderly persons' day-to-day support and chronic disease self-care, from the perspectives of the service provider and end-user. In addition, we propose initial heuristics for security objective decomposition aimed at security metrics definition. Systematically defined and managed security metrics enable higher effectiveness of security controls, enabling informed risk-driven security decision-making.
Keywords: Internet of Things; data privacy; decision making; diseases; geriatrics; health care; risk management; security of data; chronic disease self-care; e-health Internet-of-Things applications; e-health IoT application; elderly person day-to-day support; privacy; risk-driven security decision-making; risk-driven security metrics development; security controls; security objective decomposition; Artificial intelligence; Android; risk analysis; security effectiveness; security metrics (ID#: 16-10929)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335061&isnumber=7335039
O. Jane, H. G. İlk and E. Elbaşı, “A Comparative Study on Chaotic Map Approaches for Transform Domain Watermarking Algorithm,” Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, Prague, 2015, pp. 1-5. doi: 10.1109/TSP.2015.7296451
Abstract: Watermarking is identified as a major technology to achieve copyright protection and multimedia security. In this study, a combination of Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) via Lower-and-Upper (LU) Decomposition is used as a transform domain watermarking algorithm, with a comparative study on chaotic map approaches: Logistic Map (LM), Asymmetric Tent Map (ATM), and Arnold's Cat Map (ACM). As quality metrics, Similarity Ratio (SR) values for ACM are greater by approximately 20% than those of LM and ATM, despite nearly identical Peak Signal-to-Noise Ratios (PSNRs) after attacks. This study expands the application areas of watermarking with an algorithm combining DWT, SVD, and LU decomposition with chaotic maps.
Keywords: discrete wavelet transforms; image watermarking; singular value decomposition; ACM; ATM; Arnold's cat map; DWT; LM; LU decomposition; PSNR; SR values; SVD; asymmetric tent map; chaotic map approach; copyright protection; discrete wavelet transform; logistic map; lower-and-upper decomposition; multimedia security; peak signal-to-noise ratios; quality metrics; similarity ratio values; transform domain watermarking algorithm; Discrete wavelet transforms; Logistics; Matrix decomposition; Robustness; Watermarking (ID#: 16-10930)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296451&isnumber=7296206
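The chaotic maps compared above are typically used to derive a key-dependent scrambling order before embedding a watermark. A minimal sketch of that idea with the logistic map follows; the seed and parameter values are illustrative, not the paper's:

```python
def logistic_sequence(x0, n, r=3.99):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_permutation(n, x0=0.3141, r=3.99):
    """Key-dependent scrambling order: rank the n positions by their
    chaotic value.  x0 (and r) act as the secret key; a tiny change in
    the key yields an unrelated permutation."""
    seq = logistic_sequence(x0, n, r)
    return sorted(range(n), key=seq.__getitem__)
```

The embedder and extractor share (x0, r) and thus regenerate the same permutation, while an attacker without the key cannot locate the watermark coefficients.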
S. Edward Jero, P. Ramu and S. Ramakrishnan, “Steganography in Arrhythmic Electrocardiogram Signal,” Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, Milan, 2015, pp. 1409-1412. doi: 10.1109/EMBC.2015.7318633
Abstract: Security and privacy of patient data is a vital requirement during exchange/storage of medical information over communication network. Steganography method hides patient data into a cover signal to prevent unauthenticated accesses during data transfer. This study evaluates the performance of ECG steganography to ensure secured transmission of patient data where an abnormal ECG signal is used as cover signal. The novelty of this work is to hide patient data into two dimensional matrix of an abnormal ECG signal using Discrete Wavelet Transform and Singular Value Decomposition based steganography method. A 2D ECG is constructed according to Tompkins QRS detection algorithm. The missed R peaks are computed using RR interval during 2D conversion. The abnormal ECG signals are obtained from the MIT-BIH arrhythmia database. Metrics such as Peak Signal to Noise Ratio, Percentage Residual Difference, Kullback-Leibler distance and Bit Error Rate are used to evaluate the performance of the proposed approach.
Keywords: data privacy; discrete wavelet transforms; diseases; electrocardiography; medical signal processing; security of data; singular value decomposition; steganography; 2D abnormal ECG signal matrix; ECG steganography; Kullback-Leibler distance; MIT-BIH arrhythmia database; Tompkins QRS detection algorithm; arrhythmic electrocardiogram signal; bit error rate; cover signal; data security; data transfer; discrete wavelet transform; medical information; percentage residual difference; steganography method; Bit error rate; Discrete wavelet transforms; Electrocardiography; Matrix decomposition; Measurement; Watermarking (ID#: 16-10931)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7318633&isnumber=7318236
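Two of the evaluation metrics listed above have direct closed forms. The sketch below computes PSNR and Bit Error Rate for 1-D sequences using their generic definitions, not anything specific to the paper's 2-D ECG construction:

```python
import math

def psnr(original, modified, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length signals:
    10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)
    return float('inf') if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def bit_error_rate(sent_bits, recovered_bits):
    """Fraction of hidden-payload bits flipped between embedding and
    extraction."""
    errors = sum(a != b for a, b in zip(sent_bits, recovered_bits))
    return errors / len(sent_bits)
```

Higher PSNR means the stego cover is less distorted; a BER of 0 means the patient data survived embedding and extraction intact.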
X. Li, W. Wang, A. Razi and T. Li, “Nonconvex Low-Rank Sparse Factorization for Image Segmentation,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 227-230. doi: 10.1109/CIS.2015.63
Abstract: In this paper, we present a new color image segmentation model based on nonconvex low-rank and nonconvex sparse (NLRSR) factorization of the feature matrix. The main difference between our model and the recently developed methods like the sparse subspace clustering (SSC) and low-rank representation (LRR) based subspace clustering is that they use the data matrix as the dictionary while we learn a dictionary. In order to better cater to the low-rankness of the dictionary and the sparsity of the representation coefficients, we use nonconvex penalty functions rather than convex ones. The variable splitting technique and the alternating minimization method are applied for solving the proposed NLRSR model. The sparse representation coefficient matrix is utilized to construct an affinity matrix, and then the normalized cut (Ncut) is applied to obtain the segmentation result. Experimental results show our method can achieve visually better segmentation results than the SSC and LRR methods. Objective metrics further confirm this.
Keywords: concave programming; image colour analysis; image representation; image segmentation; matrix decomposition; minimisation; pattern clustering; sparse matrices; LRR based subspace clustering method; NLRSR factorization; Ncut; SSC method; affinity matrix; color image segmentation model; feature matrix; low-rank representation based subspace clustering; nonconvex low-rank sparse factorization; nonconvex penalty functions; normalized cut; sparse representation coefficient matrix; sparse subspace clustering; variable splitting technique; Computational modeling; Dictionaries; Feature extraction; Image segmentation; Measurement; Sparse matrices; Yttrium; Image segmentation; data clustering; low-rank representation; sparse representation (ID#: 16-10932)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396293&isnumber=7396229
Y. Jarraya, A. Shameli-Sendi, M. Pourzandi and M. Cheriet, “Multistage OCDO: Scalable Security Provisioning Optimization in SDN-Based Cloud,” Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 572-579. doi: 10.1109/CLOUD.2015.82
Abstract: Cloud computing is increasingly changing the landscape of computing; however, one of the main issues deterring potential customers from adopting the cloud is security. Network functions virtualization together with software-defined networking can be used to efficiently coordinate different network security functionality in the network. To squeeze the best out of network capabilities, there is a need for algorithms for optimal placement of the security functionality in the cloud infrastructure. However, due to the large number of flows to be considered and the complexity of interactions in these networks, the classical placement algorithms are not scalable. To address this issue, we elaborate an optimization framework, namely OCDO, that provides adequate and scalable network security provisioning and deployment in the cloud. Our approach is based on an innovative multistage approach that combines decomposition and segmentation techniques for the security-function placement problem while coping with the complexity and scalability of such an optimization problem. We present the results of multiple scenarios to assess the efficiency and adequacy of our framework. We also describe our prototype implementation of the framework integrated into an open source cloud framework, i.e., OpenStack.
Keywords: cloud computing; optimisation; security of data; SDN-based cloud; innovative multistage approach; multistage OCDO; scalable network security provisioning; scalable security provisioning optimization framework; security functionality; software-defined networking; Bandwidth; Complexity theory; Network topology; Optimization; Security; Servers; Topology; Cloud; Decomposition; OpenStack; SDN; Security Provisioning; Segmentation (ID#: 16-10933)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214092&isnumber=7212169
I. Jeon, E. E. Papalexakis, U. Kang and C. Faloutsos, “HaTen2: Billion-Scale Tensor Decompositions,” Data Engineering (ICDE), 2015 IEEE 31st International Conference on, Seoul, 2015, pp. 1047-1058. doi: 10.1109/ICDE.2015.7113355
Abstract: How can we find useful patterns and anomalies in large scale real-world data with multiple attributes? For example, network intrusion logs, with (source-ip, target-ip, port-number, timestamp)? Tensors are suitable for modeling these multi-dimensional data, and are widely used for the analysis of social networks, web data, network traffic, and in many other settings. However, current tensor decomposition methods do not scale for tensors with millions and billions of rows, columns and 'fibers' that often appear in real datasets. In this paper, we propose HaTen2, a scalable distributed suite of tensor decomposition algorithms running on the MapReduce platform. By carefully reordering the operations, and exploiting the sparsity of real world tensors, HaTen2 dramatically reduces the intermediate data, and the number of jobs. As a result, using HaTen2, we analyze big real-world tensors that cannot be handled by the current state of the art, and discover hidden concepts.
Keywords: Internet; security of data; tensors; HaTen2; MapReduce platform; Web data; billion-scale tensor decompositions; columns; fibers; large scale real-world data; modeling; multidimensional data; network intrusion logs; network traffic; rows; scalable distributed suite; social networks; tensor decomposition algorithms; tensor decomposition methods; Algorithm design and analysis; Computer science; Data models; Matrix converters; Matrix decomposition; Scalability; Tensile stress (ID#: 16-10934)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113355&isnumber=7113253
A. Haidar, A. YarKhan, C. Cao, P. Luszczek, S. Tomov and J. Dongarra, “Flexible Linear Algebra Development and Scheduling with Cholesky Factorization,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 861-864. doi: 10.1109/HPCC-CSS-ICESS.2015.285
Abstract: Modern high performance computing environments are composed of networks of compute nodes that often contain a variety of heterogeneous compute resources, such as multicore CPUs and GPUs. One challenge faced by domain scientists is how to efficiently use all these distributed, heterogeneous resources. In order to use the GPUs effectively, the workload parallelism needs to be much greater than the parallelism for a multicore-CPU. Additionally, effectively using distributed memory nodes brings out another level of complexity where the work load must be carefully partitioned over the nodes. In this work we are using a lightweight runtime environment to handle many of the complexities in such distributed, heterogeneous systems. The runtime environment uses task-superscalar concepts to enable the developer to write serial code while providing parallel execution. The task-programming model allows the developer to write resource-specialization code, so that each resource gets the appropriate sized workload-grain. Our task-programming abstraction enables the developer to write a single algorithm that will execute efficiently across the distributed heterogeneous machine. We demonstrate the effectiveness of our approach with performance results for dense linear algebra applications, specifically the Cholesky factorization.
Keywords: distributed memory systems; graphics processing units; mathematics computing; matrix decomposition; parallel processing; resource allocation; scheduling; Cholesky factorization; GPU; compute nodes; distributed heterogeneous machine; distributed memory nodes; distributed resources; flexible linear algebra development; flexible linear algebra scheduling; heterogeneous compute resources; high performance computing environments; multicore-CPU; parallel execution; resource-specialization code; serial code; task-programming abstraction; task-programming model; task-superscalar concept; workload parallelism; Graphics processing units; Hardware; Linear algebra; Multicore processing; Parallel processing; Runtime; Scalability; accelerator-based distributed memory computers; heterogeneous HPC computing; superscalar dataflow scheduling (ID#: 16-10935)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336271&isnumber=7336120
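The task-based Cholesky the paper describes decomposes the factorization into per-tile operations (POTRF on the diagonal tile, TRSM on the panel, SYRK/GEMM on the trailing submatrix) that a dataflow runtime can schedule across heterogeneous resources. A sequential NumPy sketch of that tiling follows, assuming the tile size divides the matrix order and showing no runtime scheduling:

```python
import numpy as np

def tiled_cholesky(A, ts):
    """Blocked right-looking Cholesky on ts x ts tiles; returns lower-
    triangular L with L @ L.T == A.  Each tile operation below would be
    an independent task for a superscalar-dataflow scheduler; here they
    run in order.  Assumes A is symmetric positive definite and its
    order is a multiple of ts."""
    n = A.shape[0]
    W = np.array(A, dtype=float)
    for k in range(0, n, ts):
        ke = k + ts
        # POTRF: factor the diagonal tile in place.
        W[k:ke, k:ke] = np.linalg.cholesky(W[k:ke, k:ke])
        Lkk = W[k:ke, k:ke]
        # TRSM: panel solve, A_mk <- A_mk @ inv(Lkk).T
        for m in range(ke, n, ts):
            me = m + ts
            W[m:me, k:ke] = np.linalg.solve(Lkk, W[m:me, k:ke].T).T
        # SYRK/GEMM: update the lower trailing submatrix.
        for m in range(ke, n, ts):
            me = m + ts
            for j in range(ke, me, ts):
                je = j + ts
                W[m:me, j:je] -= W[m:me, k:ke] @ W[j:je, k:ke].T
    return np.tril(W)
```

The per-tile dependencies (a TRSM needs its POTRF; a GEMM needs two TRSMs) are exactly what a task-superscalar runtime tracks to extract parallelism.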
L. Kuang, L. Yang, J. Feng and M. Dong, “Secure Tensor Decomposition Using Fully Homomorphic Encryption Scheme,” in IEEE Transactions on Cloud Computing, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TCC.2015.2511769
Abstract: As the rapidly growing volume of data is beyond the capabilities of many computing infrastructures, securely processing it on the cloud has become a preferred solution, which can both utilize the powerful capabilities provided by the cloud and protect data privacy. This paper presents an approach to securely decompose a tensor, a mathematical model widely used in data-intensive applications, into a core tensor multiplied by a certain number of truncated orthogonal bases. The unstructured, semi-structured, and structured data are represented as low-order sub-tensors which are then encrypted using the fully homomorphic encryption scheme. A unified high-order cipher tensor model is constructed by collecting all the cipher sub-tensors and embedding them in a base tensor space. The cipher tensor is decomposed through a proposed secure algorithm, in which the square root operations are eliminated during the Lanczos procedure. Theoretical analyses of the algorithm in terms of time complexity, memory usage, decomposition accuracy, and data security are provided. Experimental results demonstrate that the approach can securely decompose a tensor. With the advancement of fully homomorphic encryption schemes, it can be expected that the secure tensor decomposition approach has the potential to be applied on the cloud for privacy-preserving data processing.
Keywords: Ciphers; Cloud computing; Encryption; Matrix decomposition; Symmetric matrices; Tensile stress; Cloud; Fully Homomorphic Encryption; Lanczos Method; Tensor Decomposition (ID#: 16-10936)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7364235&isnumber=6562694
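The Lanczos procedure mentioned above builds a tridiagonal projection of a symmetric matrix from a three-term recurrence; the square root in the normalization step is exactly what the paper's encrypted variant must eliminate, since homomorphic schemes do not support it directly. A plaintext sketch (not the paper's ciphertext algorithm):

```python
import numpy as np

def lanczos(A, k, seed=0):
    """k-step Lanczos tridiagonalization of symmetric A.  Returns the
    diagonals (alpha, beta) of the tridiagonal T and the Krylov basis Q,
    with Q.T @ A @ Q ~ T.  beta[j] = ||w|| is the square-root step the
    encrypted variant avoids."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]           # three-term recurrence
        if j < k - 1:
            beta[j] = np.linalg.norm(w)                  # the square root
            Q[:, j + 1] = w / beta[j]
    return alpha, beta, Q
```

For small k the eigenvalues of T already approximate the extremal eigenvalues of A, which is why Lanczos is the workhorse inside truncated decompositions.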
M. Nick, O. Alizadeh-Mousavi, R. Cherkaoui and M. Paolone, “Security Constrained Unit Commitment with Dynamic Thermal Line Rating,” in IEEE Transactions on Power Systems, vol. 31, no. 3, pp. 2014-2025, May 2016. doi: 10.1109/TPWRS.2015.2445826
Abstract: The integration of the dynamic line rating (DLR) of overhead transmission lines (OTLs) in power systems security constrained unit commitment (SCUC) potentially enhances the overall system security as well as its technical/economic performances. This paper proposes a scalable and computationally efficient approach aimed at integrating the DLR in SCUC problem. The paper analyzes the case of the SCUC with AC load flow constraints. The AC-optimal power flow (AC-OPF) is linearized and incorporated into the problem. The proposed multi-period formulation takes into account a realistic model to represent the different terms appearing in the Heat-Balance Equation (HBE) of the OTL conductors. In order to include the HBE in the OPF, a relaxation is proposed for the heat gain associated to resistive losses while the inclusion of linear approximations are investigated for both convection and radiation heat losses. A decomposition process relying on the Benders decomposition is used in order to breakdown the problem and incorporate a set of contingencies representing both generators and line outages. The effects of different linearization, as well as time step discretization of HBE, are investigated. The scalability of the proposed method is verified using IEEE 118-bus test system.
Keywords: Conductors; Heating; Mathematical model; Reactive power; Security; Wind speed; AC optimal power flow; Benders decomposition; Heat Balance Equation (HBE); convex formulation (ID#: 16-10937)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160786&isnumber=4374138
H. Ye and Z. Li, “Robust Security-Constrained Unit Commitment and Dispatch with Recourse Cost Requirement,” in IEEE Transactions on Power Systems, vol. 31, no. 5, pp. 3527-3536, Sept. 2016. doi: 10.1109/TPWRS.2015.2493162
Abstract: With increasing renewable energy resources, price-sensitive loads, and electric-vehicle charging stations in the power grid, uncertainties on both power generation and consumption sides become critical factors in the Security-Constrained Unit Commitment (SCUC) problem. Recently, worst scenario based robust optimization approaches are employed to consider uncertainties. This paper proposes a non-conservative robust SCUC model and an effective solution approach. The contributions of this paper are three-fold. First, the commitment and dispatch solution obtained in this paper can be directly used in day-ahead market as it overcomes two issues, conservativeness and absence of robust dispatch, which are the two largest obstacles to applying robust SCUC in real markets. Secondly, a new concept recourse cost requirement, similar to reserve requirement, is proposed to define the upper bound of re-dispatch cost when uncertainties are revealed. Thirdly, a novel decomposition approach is proposed to effectively address the well-known computational challenge in robust approaches. Simulation results on the IEEE 118-bus system validate the effectiveness of the proposed novel model and solution approach.
Keywords: Computational modeling; Load modeling; Optimization; Renewable energy sources; Robustness; Stochastic processes; Uncertainty; Recourse cost; redispatch; renewable energy; robust optimization; robust security-constrained unit commitment; uncertainty (ID#: 16-10938)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7331341&isnumber=4374138
Y. Wang; H. Zhong; Q. Xia; D. S. Kirschen; C. Kang, “An Approach for Integrated Generation and Transmission Maintenance Scheduling Considering N-1 Contingencies,” in IEEE Transactions on Power Systems, vol. 31, no. 3, pp. 2225-2233, May 2016. doi: 10.1109/TPWRS.2015.2453115
Abstract: This paper presents an integrated generation and transmission maintenance scheduling (IMS) model that takes N-1 contingencies into consideration. The objective is to maximize the maintenance preference of facility owners while satisfying N-1 security and other constraints. To achieve this goal, Benders decomposition is employed to decompose the problem into a master problem and several sub-problems. A Relaxation Induced (RI) algorithm is proposed to efficiently solve the large mixed-integer programming (MIP) master problem. This algorithm is based on the solution of the linearly relaxed problem. It is demonstrated that the proposed algorithm can efficiently reach a near-optimal solution that is usually satisfactory. If this near-optimal solution is not acceptable, it is used as the initial solution to fast-start the solution of the original IMS problem. The performance of the proposed method is demonstrated using a modified version of the IEEE 30-bus system and a model of the power system of a Chinese province. Case studies show that the proposed algorithm can improve the computational efficiency by more than an order of magnitude.
Keywords: Computational modeling; Indexes; Linear programming; Maintenance engineering; Power transmission lines; Schedules; Security; Benders decomposition; N-1 security; generation maintenance scheduling; mixed integer programming; relaxation induced; transmission maintenance scheduling (ID#: 16-10939)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177142&isnumber=4374138
S. Teimourzadeh; F. Aminifar, “MILP Formulation for Transmission Expansion Planning with Short-Circuit Level Constraints,” in IEEE Transactions on Power Systems, vol. 31, no. 4, pp. 3109-3118, July 2016. doi: 10.1109/TPWRS.2015.2473663
Abstract: This paper deals with the short-circuit-level-constrained transmission expansion planning (TEP) problem through a mixed-integer linear programming (MILP) approach. The proposed framework is outlined by a master problem and three subproblems based on the Benders decomposition technique. The master problem incorporates the optimal investment planning model. System security and short-circuit level constraints are examined by subproblems I and II, respectively. In case of any violation, infeasibility cuts are derived to reflect the appropriate modification in the master problem solution. The short-circuit study is inherently a nonlinear analysis and is hard to tackle concurrently in power system studies. To overcome this difficulty, a linear approximation is developed for the short-circuit analysis, which not only mitigates the computational burden of the problem but is also well suited to decomposed schemes. Subproblem III examines the optimality of the investment solution from the operation point of view and, through optimality cuts, steers the master problem toward the optimal solution. The proposed model is tested on the IEEE 24-bus reliability test system and its effectiveness is assured by comprehensive simulation studies.
Keywords: Impedance; Indexes; Investment; Linear programming; Planning; Power transmission lines; Security; Benders decomposition method; mixed-integer linear programming (MILP); transmission expansion planning (TEP) (ID#: 16-10940)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272775&isnumber=4374138
P. Henneaux; P. E. Labeau; J. C. Maun; L. Haarla, “A Two-Level Probabilistic Risk Assessment of Cascading Outages,” in IEEE Transactions on Power Systems, vol. 31, no. 3, pp. 2393-2403, 2015. doi: 10.1109/TPWRS.2015.2439214
Abstract: Cascading outages in power systems can lead to major power disruptions and blackouts and involve a large number of different mechanisms. The typical development of a cascading outage can be split into two phases with different dominant cascading mechanisms. As a power system is usually operated in N-1 security, an initiating contingency cannot entail a fast collapse of the grid. However, it can trigger a thermal transient, significantly increasing the likelihood of additional contingencies, in a “slow cascade.” The loss of additional elements can then trigger an electrical instability. This is the origin of the subsequent “fast cascade,” where a rapid succession of events can lead to a major power disruption. Several probabilistic simulation models exist, but they tend to focus either on the slow cascade or on the fast cascade, depending on the mechanisms considered, and rarely on both. We propose in this paper a decomposition of the analysis into two levels, able to combine probabilistic simulations for the slow and the fast cascades. These two levels correspond to the two typical phases of a cascading outage. Models are developed for each of these phases. A simplification of the overall methodology is applied to two test systems to illustrate the concept.
Keywords: Computational modeling; Load modeling; Power system dynamics; Power system stability; Probabilistic logic; Steady-state; Transient analysis; Blackout; Monte Carlo methods; cascading failure; power system reliability; power system security; risk analysis (ID#: 16-10941)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127060&isnumber=4374138
H. Zhang; H. Xing; J. Cheng; A. Nallanathan; V. Leung, “Secure Resource Allocation for OFDMA Two-Way Relay Wireless Sensor Networks Without and with Cooperative Jamming,” in IEEE Transactions on Industrial Informatics, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TII.2015.2489610
Abstract: We consider secure resource allocations for orthogonal frequency division multiple access two-way relay wireless sensor networks. The joint problem of subcarrier assignment, subcarrier pairing, and power allocation is formulated under scenarios of using and not using cooperative jamming to maximize the secrecy sum rate subject to a limited power budget at the relay station and orthogonal subcarrier allocation policies. The optimization problems are shown to be mixed-integer and non-convex. For the scenario without cooperative jamming, we propose an asymptotically optimal algorithm based on the dual decomposition method, and a suboptimal algorithm with lower complexity. For the scenario with cooperative jamming, the resulting optimization problem is non-convex, and we propose a heuristic algorithm based on alternating optimization. Finally, the proposed schemes are evaluated by simulations and compared to the existing schemes.
Keywords: Communication system security; Jamming; Relays; Resource management; Sensors; Wireless communication; Wireless sensor networks; Cooperative jamming; OFDMA; physical layer security; secure resource allocation; wireless sensor network (ID#: 16-10942)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296635&isnumber=4389054
C. J. Neill; R. S. Sangwan; N. H. Kilicay-Ergin, “A Prescriptive Approach to Quality-Focused System Architecture,” in IEEE Systems Journal, vol. PP, no. 99, pp. 1-12, 2015. doi: 10.1109/JSYST.2015.2423259
Abstract: The most critical requirements for the lifetime value of a system are its nonfunctional requirements (NFRs) such as reliability, security, maintainability, changeability, etc. These are collectively known as the “ilities,” and they are typically not addressed in system design until the functional architecture has been completed. In this paper, we propose the use of quality-based design that modifies this standard process so that those NFRs, which actually reflect the true business needs, are addressed first. This is accomplished through a combination of quality attribute workshops, to elicit and refine quality-based mission objectives, and attribute-driven design, where design heuristics, termed tactics, can be employed in the decomposition of the system. This ensures that the final system better reflects and embodies those architecturally significant requirements rather than having them addressed secondarily. This is an important change since the “ilities” are systemic properties (properties of the system as a whole) rather than properties of individual components or subsystems. Consequently, they are difficult to address in an architecture that has already been decomposed with respect to required functionality. To illustrate the proposed approach, we provide an example based upon the Department of Defense Pre-positioned Expeditionary Assistance Kit.
Keywords: Computer architecture; Conferences; Reliability; Security; Software; Systems engineering and theory; Unified modeling language; Attribute-driven design (ADD); disaster assistance system; quality attribute workshops; system architecting (ID#: 16-10943)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7105359&isnumber=4357939
Y. Han; P. Shen; X. Zhao; J. M. Guerrero, “Control Strategies for Islanded Microgrid Using Enhanced Hierarchical Control Structure with Multiple Current-Loop Damping Schemes,” in IEEE Transactions on Smart Grid, vol. PP, no. 99, pp. 1-1, 2015. doi: 10.1109/TSG.2015.2477698
Abstract: In this paper, the modeling, controller design, and stability analysis of an islanded microgrid (MG) using an enhanced hierarchical control structure with multiple current-loop damping schemes are presented. The islanded MG consists of parallel-connected voltage source inverters using inductor-capacitor-inductor (LCL) output filters, and the proposed control structure includes the primary control with an additional phase-shift loop, the secondary control for voltage amplitude and frequency restoration, the virtual impedance loops, which contain virtual positive- and negative-sequence impedance loops at the fundamental frequency and a virtual variable harmonic impedance loop at harmonic frequencies, and the inner voltage and current loop controllers. A small-signal model for the primary and secondary controls with the additional phase-shift loop is presented, which shows an over-damped feature from eigenvalue analysis of the state matrix. A moving-average-filter-based sequence decomposition method is proposed to extract the fundamental positive and negative sequences and harmonic components. The multiple inner current loop damping scheme is presented, including the virtual positive-, negative-, and variable harmonic sequence impedance loops for reactive and harmonic power sharing purposes, and the proposed active damping scheme using the capacitor current feedback loop of the LCL filter, which shows enhanced damping characteristics and improved inner-loop stability. Finally, experimental results are provided to validate the feasibility of the proposed approach.
Keywords: Damping; Frequency control; Harmonic analysis; Impedance; Inverters; Power system harmonics; Voltage control; Active damping (AD); droop control; microgrid (MG); phase-shift control; power sharing; secondary control; small-signal model; virtual impedance; voltage control (ID#: 16-10944)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7283639&isnumber=5446437
V. Kekatos; G. B. Giannakis; R. Baldick, “Online Energy Price Matrix Factorization for Power Grid Topology Tracking,” in IEEE Transactions on Smart Grid, vol. 7, no. 3, pp. 1239-1248, May 2016. doi: 10.1109/TSG.2015.2469098
Abstract: Grid security and open markets are two major smart grid goals. Transparency of market data facilitates a competitive and efficient energy environment, but it may also reveal critical physical system information. Recovering the grid topology based solely on publicly available market data is explored here. Real-time energy prices are typically calculated as the Lagrange multipliers of network-constrained economic dispatch; that is, via a linear program (LP) typically solved every 5 minutes. Since the grid Laplacian matrix is a parameter of this LP, someone other than the system operator could try inferring this topology-related matrix upon observing successive LP dual outcomes. It is first shown that the matrix of spatio-temporal prices can be factored as the product of the inverse Laplacian times a sparse matrix. Leveraging results from sparse matrix decompositions, topology recovery schemes with complementary strengths are subsequently formulated. Solvers scalable to high-dimensional and streaming market data are devised. Numerical validation using synthetic and real-load data on the IEEE 30-bus grid provides useful input for current and future market designs.
Keywords: Laplace equations; Network topology; Power grids; Real-time systems; Sparse matrices; Topology; Transmission line matrix methods; Alternating direction method of multipliers (ADMM); compressive sensing; economic dispatch; graph Laplacian; locational marginal prices (LMPs); online convex optimization (ID#: 16-10945)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226869&isnumber=5446437
X. Gonzalez, J. M. Ramirez and G. Caicedo, “An Alternative Method for Multiarea State Estimation Based on OCD,” Power & Energy Society General Meeting, 2015 IEEE, Denver, CO, 2015, pp. 1-5, 2015. doi: 10.1109/PESGM.2015.7286194
Abstract: State estimation (SE) constitutes the core of online security analysis, so the development of suitable strategies to improve state estimators is one of the main aims in the transition toward smart control centers and transmission systems. This paper focuses on an alternative method for multiarea state estimation (MASE) based on Optimality Condition Decomposition (OCD). The state estimation problem is addressed through a decentralized optimization scheme with minimum information exchange among subsystems. The proposed method is applied to a 190-bus equivalent of the Mexican power grid, which has been split into two and three subsystems. Results indicate that the proposed strategy is a reliable alternative.
Keywords: power grids; power system security; power system state estimation; power transmission control; Mexican power grid; OCD; alternative method; decentralized optimization scheme; multiarea state estimation; online security analysis; optimality condition decomposition; smart control centers; subsystem information exchange; transmission systems; Area measurement; Art; Power measurement; Power system reliability; Reliability; State estimation; Decentralized scheme; Decomposition methods; Lagrangian relaxation; Multiarea state estimation; Non-linear optimization; Optimality Condition Decomposition (ID#: 16-10946)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286194&isnumber=7285590
M. H. Amini, R. Jaddivada, S. Mishra and O. Karabasoglu, “Distributed Security Constrained Economic Dispatch,” Innovative Smart Grid Technologies - Asia (ISGT ASIA), 2015 IEEE, Bangkok, 2015, pp. 1-6. doi: 10.1109/ISGT-Asia.2015.7387167
Abstract: In this paper, we investigate the convergence rates of two decomposition methods used to solve security-constrained economic dispatch (SCED): 1) Lagrangian Relaxation (LR), and 2) Augmented Lagrangian Relaxation (ALR). First, the centralized SCED problem is posed for a 6-bus test network, and then it is decomposed into subproblems using both methods. A novel method is proposed to model the tie-line between the decomposed areas of the test network. The advantages and drawbacks of each method are discussed in terms of accuracy and information privacy. We show that there is a tradeoff between information privacy and the convergence rate: ALR converges faster than LR due to the larger amount of shared data.
Keywords: load dispatching; power system economics; power system security; 6-bus test network; ALR; Augmented lagrangian relaxation; LR; Lagrangian relaxation; centralized SCED problem; convergence rate; decomposition methods; distributed security constrained economic dispatch; information privacy; tie-line modelling; Economics; Generators; Linear programming; Load flow; Optimization; Security; DC power flow; Decomposition theory; Distributed optimization; Lagrangian Relaxation; Security constrained economic dispatch (ID#: 16-10947)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387167&isnumber=7386954
N. A. Daher, I. Mougharbel, M. Saad, H. Y. Kanaan and D. Asber, “Pilot Buses Selection Based on Reduced Jacobian Matrix,” Smart Energy Grid Engineering (SEGE), 2015 IEEE International Conference on, Oshawa, ON, 2015, pp. 1-7. doi: 10.1109/SEGE.2015.7324611
Abstract: The non-supervised insertion of renewable energy sources into electric power networks causes fluctuations that may lead to voltage instability. Simple and coordinated secondary voltage control systems are used to avoid this instability. To obtain maximum regulation performance with an optimized number of controllers, an appropriate selection of pilot buses is required. In this paper, a new algorithm is proposed to select optimal pilot buses. This method is based on the singular decomposition of the reduced Jacobian matrix together with the voltage security margin index. To evaluate the efficiency of this algorithm, a comparison with the Bifurcation, Clustering with Node-Partitioning Around Medoids, and Hybrid algorithms is presented. The simulation results show that the proposed algorithm selects optimal pilot buses according to the selection criteria (explained later in the paper).
Keywords: optimal control; optimisation; power system stability; voltage control; bifurcation algorithms; clustering algorithms; hybrid algorithms; maximum regulation performance; nodepartitioning around medoids; nonsupervised renewable energy source insertion electric power network; optimal pilot bus selection; reduced Jacobian matrix; singular decomposition; voltage security margin index; Algorithm design and analysis; Bifurcation; Clustering algorithms; Controllability; Generators; Robustness; Voltage control; Pilot buses selection; Power Network Stability; Renewable energy; Secondary Voltage Control (ID#: 16-10948)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324611&isnumber=7324562
Deterrence 2015
Finding ways both technical and behavioral to provide disincentives to threats is a promising area of research. Since most cybersecurity is “bolt on” rather than embedded, and since detection, response, and forensics are expensive, time-consuming processes, discouraging attacks can be a cost-effective cybersecurity approach. The research works cited here were presented and published in 2015.
M. K. Awad, B. Zogheib and H. M. K. Alazemi, “Optimal Penalties for Misbehavior Deterrence in Communication Networks,” Communications, Computers and Signal Processing (PACRIM), 2015 IEEE Pacific Rim Conference on, Victoria, BC, 2015, pp. 381-384. doi: 10.1109/PACRIM.2015.7334866
Abstract: The communication among entities in any network is administered by a set of rules and technical specifications detailed in the communication protocol. All communicating entities adhere to the same protocol to successfully exchange data. Most of the rules are expressed in an algorithmic format that computes a decision based on a set of inputs provided by communicating entities or collected by a central controller. Due to the increasing number of communicating entities and the large bandwidth required to exchange the set of inputs generated at each entity, distributed implementations have been favored to reduce the control overhead. In such implementations, each entity self-computes crucial protocol decisions and can therefore alter these decisions to gain an unfair share of the resources managed by the protocol. Misbehaving users degrade the performance of the whole network in addition to starving well-behaving users. In this work, we develop a framework to derive the optimal penalty strategy for penalizing misbehaving users. The proposed framework considers users' learning of the detection mechanism and the detection mechanism's tracking of users' behavior and history of protocol offenses. Analysis indicates that escalating penalties are optimal for deterring repeat protocol offenses.
Keywords: protocols; communication networks; control overhead; crucial protocol decisions; detection mechanism techniques; misbehavior deterrence; optimal penalty strategy; Communication networks; Decision making; Mathematical model; Monitoring; Protocols; Quality of service; Throughput; Computer Networks Security; Penalty Scheme; Resource Allocation (ID#: 16-11053)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334866&isnumber=7334793
J. A. Ambrose, R. G. Ragel, D. Jayasinghe, T. Li and S. Parameswaran, “Side Channel Attacks in Embedded Systems: A Tale of Hostilities and Deterrence,” Quality Electronic Design (ISQED), 2015 16th International Symposium on, Santa Clara, CA, 2015, pp. 452-459. doi: 10.1109/ISQED.2015.7085468
Abstract: Security of embedded computing systems is becoming paramount as these devices become more ubiquitous, contain personal information, and are increasingly used for financial transactions. Side channel attacks, in particular, have been effective in obtaining the secret keys that protect information. In this paper we classify side channel attacks and demonstrate a selection of them. We further classify the popular countermeasures to side channel attacks. The paper paints an overall picture for a researcher or practitioner who seeks to understand or begin to work in the area of side channel attacks in embedded systems.
Keywords: embedded systems; security of data; embedded computing system; embedded system; financial transaction; personal information; security; side channel attack; Algorithm design and analysis; Correlation; Embedded systems; Encryption; Power demand; Timing (ID#: 16-11054)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085468&isnumber=7085355
A. L. Russell, “Strategic Anti-Access/Area Denial in Cyberspace,” Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, Tallinn, 2015, pp. 153-168. doi: 10.1109/CYCON.2015.7158475
Abstract: This paper investigates how anti-access and area denial (A2/AD) operations can be conducted to deny actors access to cyberspace. It examines multiple facets of cyberspace to identify the potential vulnerabilities within the system that could be exploited. This project will also touch upon the policy implications of strategic cyber A2/AD for national security, particularly as they relate to deterrence strategy, coercion, and interstate conflict. The question of deterrence is particularly important. Given the extensive reliance of modern states and societies on cyberspace, the ability to deny access to cyberspace would threaten the economy, security, and stability of a state. A credible threat of this nature may be sufficient to deter armed conflict or compel a more favorable course of action. Thus, strategic A2/AD in cyberspace may create new options and tools for international relations. This paper will address strategic A2/AD with regards to the physical aspects of cyberspace (i.e., cables, satellites). It will assess the strengths and potential vulnerabilities of the physical attributes (the architecture) of cyberspace, as they relate to potential A2/AD operations. It will also address the relevant policy and strategy implications of strategic cyber A2/AD for states, including how this may affect the development of cyber security strategy, critical infrastructure protection, and private sector cooperation. The paper will offer conclusions and recommendations to policymakers and scholars.
Keywords: national security; security of data; critical infrastructure protection; cybersecurity strategy; cyberspace; national security; private sector cooperation; strategic anti-access-area denial operation; strategic cyber A2-AD operation; Communication cables; Cyberspace; Internet; Optical fiber cables; Optical fibers; Physical layer; Satellites; A2/AD; anti-access/area denial; conflict; deterrence; infrastructure; strategy (ID#: 16-11055)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158475&isnumber=7158456
J. Kallberg, “A Right to Cybercounter Strikes: The Risks of Legalizing Hack Backs,” in IT Professional, vol. 17, no. 1, pp. 30-35, Jan.-Feb. 2015. doi: 10.1109/MITP.2015.1
Abstract: The idea to legalize hacking back has gained traction in the last few years and has received several influential corporate and political proponents in the US and Europe. The growing frustration with repeated cyberattacks and a lack of effective law enforcement pushes for alternative ways to prevent future exploits. Countercyberattacks are currently illegal in most nations, because they constitute a cybercrime independent of the initial attack. Considering the legalization of cyber counterattacks raises a set of questions, including those linked to the underlying assumptions supporting the proposal to legalize countercyberattacks. Another line of questions deals with the embedded challenges to the role of the nation state. Privatized countercyberattacks could jeopardize the authority and legitimacy of the state. The combined questions raised by hacking back undermine the viability of the action itself, so hacking back is likely to be ineffective and to have a negative impact on the development of Internet governance and norms. This article is part of a special issue on IT security.
Keywords: law; security of data; IT security; Internet governance; cyber counterattack legalization; cybercounter strikes; cybercrime; effective law enforcement; hack back legalization; privatized countercyberattacks; Computer crime; Computer hacking; Computer security; Information technology; Intellectual property; Internet; Law; cyber defense; cyber deterrence; cyber ethics; cyber theft; hack back; information technology; intellectual property; retaliation; security (ID#: 16-11056)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030161&isnumber=7030137
J. Rivera, “Achieving Cyberdeterrence and the Ability of Small States to Hold Large States at Risk,” Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, Tallinn, 2015, pp. 7-24. doi: 10.1109/CYCON.2015.7158465
Abstract: Achieving cyberdeterrence is a seemingly elusive goal in the international cyberdefense community. The consensus among experts is that cyberdeterrence is difficult at best and perhaps impossible, due to difficulties in holding aggressors at risk, the technical challenges of attribution, and legal restrictions such as the UN Charter's prohibition against the use of force. Consequently, cyberspace defenders have prioritized increasing the size and strength of the metaphorical “walls” in cyberspace over facilitating deterrent measures.
Keywords: security of data; UN Charter prohibition; cyberdeterrence; cyberspace defenders; Cyberspace; Force; Internet; Lenses; National security; Power measurement; attribution; deterrence; use of force (ID#: 16-11057)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158465&isnumber=7158456
E. Takamura, K. Mangum, F. Wasiak and C. Gomez-Rosa, “Information Security Considerations for Protecting NASA Mission Operations Centers (MOCs),” Aerospace Conference, 2015 IEEE, Big Sky, MT, 2015, pp. 1-14. doi: 10.1109/AERO.2015.7119207
Abstract: In NASA space flight missions, the Mission Operations Center (MOC) is often considered “the center of the (ground segment) universe,” at least by those involved with ground system operations. It is at and through the MOC that the spacecraft is commanded and controlled and science data are acquired. This critical element of the ground system must be protected to ensure the confidentiality, integrity, and availability of the information and information systems supporting mission operations. This paper identifies and highlights key information security aspects affecting MOCs that should be taken into consideration when reviewing and/or implementing protective measures in and around MOCs. It stresses the need for compliance with information security regulations and mandates, and the need for the reduction of IT security risks that can potentially have a negative impact on the mission if not addressed. This compilation of key security aspects was derived from numerous observations, findings, and issues discovered by IT security audits the authors have conducted on NASA mission operations centers in the past few years. It is not a recipe for securing MOCs, but rather an insight into key areas that must be secured to strengthen the MOC and enable mission assurance. Most concepts and recommendations in the paper can be applied to non-NASA organizations as well. Finally, the paper emphasizes the importance of integrating information security into the MOC development life cycle as configuration, risk, and other management processes are tailored to support the delicate environment in which mission operations take place.
Keywords: aerospace computing; command and control systems; data integrity; information systems; risk management; security of data; space vehicles; IT security audits; IT security risk reduction; MOC development life cycle; NASA MOC protection; NASA mission operation center protection; NASA space flight missions; ground system operations; information availability; information confidentiality; information integrity; information security considerations; information security regulation; information systems; nonNASA organizations; spacecraft command and control; Access control; Information security; Monitoring; NASA; Software; IT security metrics; NASA; access control; asset protection; automation; change control; connection protection; continuous diagnostics and mitigation; continuous monitoring; ground segment ground system; incident handling; information assurance; information security; information security leadership; information technology leadership; infrastructure protection; least privilege; logical security; mission assurance; mission operations; mission operations center; network security; personnel screening; physical security; policies and procedures; risk management; scheduling restrictions; security controls; security hardening; software updates; system cloning and software licenses; system security; system security life cycle; unauthorized change detection; unauthorized change deterrence; unauthorized change prevention (ID#: 16-11058)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119207&isnumber=7118873
L. Watkins, K. Silberberg, J. A. Morales and W. H. Robinson, “Using Inherent Command and Control Vulnerabilities to Halt DDoS Attacks,” 2015 10th International Conference on Malicious and Unwanted Software (MALWARE), Fajardo, 2015, pp. 3-10. doi: 10.1109/MALWARE.2015.7413679
Abstract: Dirt Jumper is a powerful family of distributed denial of service (DDoS) toolkits (including various versions of Drive, Dirt Jumper, and Pandora) sold in online black markets. The buyers are typically individuals who seek to infect computers globally and incite them to collectively emit crippling unsolicited network traffic at unsuspecting targets, often for criminal purposes. The Dirt Jumper Family (DJF) of botnets is not new; however, new variants have made the family more destructive and more relevant. The DJF has caused millions of dollars of damage across several different business sectors. Notably, in 2014, a European media company was attacked with a 10-hour, 200-gigabit-per-second DDoS campaign with an estimated impact of $20M. Traditional defensive measures, like firewalls, intrusion prevention systems, and defense-in-depth, are not always effective. The threat may hasten the emergence of active defenses to protect Internet-based revenue streams or intellectual property. In practice, some companies have either found legal loopholes that provide immunity, or have decided to leverage the budding relationship between the government and the private sector to Hack Back with implied immunity. Either way, tools are currently being used to defend against hacking. This paper provides: (1) an overview of the present threat posed by the Dirt Jumper family of DDoS toolkits, (2) an overview of the Hacking Back debate and clear examples of the use of legal loopholes or implied immunity, and (3) novel offensive campaigns that could be used to stop active DDoS attacks by exploiting vulnerabilities in the botnet's command and control (C&C). Our work could be the first steps toward a cyber-deterrence strategy for hacking and cyber espionage, which is a national security imperative.
Keywords: Internet; computer network security; industrial property; invasive software; national security; telecommunication traffic; DDoS attacks; DDoS campaign; DDoS toolkits; DJF; Dirt Jumper Family; Internet-based revenue streams; National Security; active defenses; botnet C and C; botnet command and control; crippling unsolicited network traffic; cyber espionage; cyber-deterrence strategy; distributed denial of service family; hacking; intellectual property; Companies; Computer crime; Computers; Government; Law; Malware (ID#: 16-11059)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7413679&isnumber=7413673
Y. Fujii, N. Yoshiura, N. Ohta and A. Takita, “Abuse Prevention of Street Camera Network by Browsing-History Disclosure,” 2015 4th International Conference on Instrumentation, Communications, Information Technology, and Biomedical Engineering (ICICI-BME), Bandung, 2015, pp. 17-17. doi: 10.1109/ICICI-BME.2015.7401306
Abstract: Summary form only given. A street camera network, in which many street cameras are installed at high density throughout the nation, much like street lights, will have a stronger positive effect on suspect tracking and crime deterrence in the near future. On the other hand, it also carries a stronger risk of violating the privacy of ordinary citizens. For such a surveillance camera system, which forcibly captures images of passers-by in the public interest, to be accepted by society as an essential social infrastructure, only usage in the public interest, accepted by society beforehand, must be possible, and ordinary citizens must be able to trust that this is so. To realize this, a new concept is proposed in which abuse of the street camera network is deterred by browsing-history disclosure. In the lecture, the concept, the prototype and the experiment will be reported, and the future prospect will also be presented.
Keywords: target tracking; video cameras; video surveillance; abuse prevention; browsing-history disclosure; crime deterrence; ordinary citizen; privacy violation; social infrastructure; street camera network; surveillance camera system; suspect tracking; Biomedical engineering; Cameras; Information technology; Instruments; Privacy; Prototypes; Surveillance (ID#: 16-11060)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7401306&isnumber=7401298
A. Basuchoudhary, M. Eltoweissy, M. Azab, L. Razzolini and S. Mohamed, “Cyberdefense When Attackers Mimic Legitimate Users: A Bayesian Approach,” Information Reuse and Integration (IRI), 2015 IEEE International Conference on, San Francisco, CA, 2015, pp. 502-509. doi: 10.1109/IRI.2015.83
Abstract: Cyber defenders cannot clearly identify attackers from other legitimate users on a computer network. The network administration can protect the network using an active or a passive defense. Attackers can mount attacks like denial of service attacks or try to gain entry into secure systems. We model cyber defense as a signaling game. We find Bayesian Nash equilibria for both the attacker and the defender and characterize how these equilibria respond to changes in underlying parameters. We explore the question, is there an optimal deterrence policy that utilizes passive and/or active defenses given that both attacks and defenses impose costs on legitimate users? Comparative static results show how exogenous changes in the context and the nature of the attack change optimal strategies for both the attacker and the defender. These results suggest that sensors should look for certain kinds of information and not others as well as technologies that can automatically calibrate a response. Results also suggest when attackers are more likely to break into secure systems relative to mounting DDoS attacks. We use simulation to verify the analytical results.
Keywords: Bayes methods; computer crime; computer network security; game theory; Bayesian Nash equilibria; Bayesian approach; DDoS attacks; active defense; attackers; computer network; cyber defenders; cyber defense; denial of service attacks; legitimate users; network administration; network protection; optimal deterrence policy; optimal strategies; passive defense; secure systems; signaling game; Bayes methods; Computational modeling; Computer crime; Computer networks; Economics; Games; Military computing; Bayesian; Cyber Defense; Nash Equilibrium (ID#: 16-11061)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301019&isnumber=7300933
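The signaling-game analysis above rests on a basic Bayesian step: update the belief that a user is an attacker from an observed signal, then choose the defense with the lower expected cost. As a rough sketch (all probabilities and costs below are illustrative assumptions, not values from the paper):

```python
def posterior_attacker(prior, p_signal_attacker, p_signal_legit):
    # Bayes' rule: P(attacker | signal), given how likely each user
    # type is to emit the observed signal.
    num = prior * p_signal_attacker
    return num / (num + (1 - prior) * p_signal_legit)

def best_defense(p_attacker, cost_active_on_legit, loss_passive_on_attacker):
    # An active defense imposes a cost on legitimate users; a passive
    # defense risks a loss if the user is an attacker. Pick whichever
    # has the lower expected cost.
    expected_active = (1 - p_attacker) * cost_active_on_legit
    expected_passive = p_attacker * loss_passive_on_attacker
    return "active" if expected_active < expected_passive else "passive"

p = posterior_attacker(0.1, 0.8, 0.2)   # a suspicious signal was observed
choice = best_defense(p, cost_active_on_legit=1.0, loss_passive_on_attacker=5.0)
```

With these invented numbers the posterior rises from 0.1 to roughly 0.31, which is enough to make the active defense the cheaper option in expectation; with a much smaller posterior the passive defense wins.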
K. Mehmood, M. Afzal, M. Mukaram Khan and M. M. WaseemIqbal, “A Practical Approach to Impede Key Recovery and Piracy in Digital Rights Management System (DRM),” Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, Islamabad, 2015, pp. 349-353. doi: 10.1109/IBCAST.2015.7058528
Abstract: With the advent of high-speed Internet and content digitization, large-scale content sharing has become exceptionally easy. This ease adds fuel to the fire of piracy, which causes a gigantic loss to content providers. Copyright laws provide only deterrence. Hence a technological solution was required to protect the rights of digital content owners. This inevitably gave birth to the Digital Rights Management System (DRM). But DRM could not fulfill this obligation and is broken from time to time. Two major types of attacks faced by DRM are key recovery and unencrypted content capturing. In this paper a DRM model is proposed which employs the elliptic curve integrated encryption system (ECIES) and a secure one-way hash function to generate a dynamic one-time content encryption/decryption key. A portion of the key is stored in the license. With the proposed technique, knowledge of a portion of the key reveals no information about the key itself. The key is never reused and never stored on the end-user device. The proposed solution raises the difficulty of key recovery and piracy. If any effort is made to distribute the contents illegally, the contents are locked cryptographically for both legal and illegal consumers. The proposed technique also provides protection against attacks wherein an attacker succeeds in extracting the content decryption key and publishes it in a public website database. With the help of a well-known technique such as remote attestation, the proposed solution also allows checking the integrity of the DRM client software executed in a malicious host environment.
Keywords: Internet; computer crime; copyright; digital rights management; public key cryptography; DRM model; ECIES; attacks; content digitization; content providers; copyright laws; digital content owners; digital rights management system; dynamic one time content encryption/decryption key; elliptic curve integrated encryption system; end user device; high speed Internet; illegal consumers; key recovery; large scale content sharing; license; malicious host environment; piracy; public Website database; rights protection; secure one-way hash function; unencrypted content capturing; Computational modeling; Computer architecture; Encryption
(ID#: 16-11062)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058528&isnumber=7058466
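The paper's actual scheme is built on ECIES; as a rough illustration of the key-handling idea alone (a one-time key derived from a shared secret via a one-way hash, then split so that the license portion reveals nothing by itself), one might write the following sketch. All names here are invented, and SHA-256 plus XOR secret sharing merely stand in for the paper's construction:

```python
import hashlib
import secrets

def derive_content_key(shared_secret: bytes, nonce: bytes) -> bytes:
    # One-time content key: hashing an ECIES-style shared secret with a
    # fresh nonce means the key is never reused and never stored whole
    # on the end-user device.
    return hashlib.sha256(shared_secret + nonce).digest()

def split_key(key: bytes):
    # XOR secret sharing: the portion stored in the license is a random
    # mask, so knowledge of that portion alone reveals nothing about the key.
    license_part = secrets.token_bytes(len(key))
    delivered_part = bytes(a ^ b for a, b in zip(key, license_part))
    return license_part, delivered_part

def recombine(license_part: bytes, delivered_part: bytes) -> bytes:
    # Only a party holding both portions can reconstruct the key.
    return bytes(a ^ b for a, b in zip(license_part, delivered_part))

shared_secret = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
key = derive_content_key(shared_secret, nonce)
lic, dlv = split_key(key)
```

Because the license portion is uniformly random on its own, leaking the license does not leak the content key, which matches the property the abstract claims.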
Z. Ghasem, I. Frommholz and C. Maple, “A Hybrid Approach to Combat Email-Based Cyberstalking,” Future Generation Communication Technology (FGCT), 2015 Fourth International Conference on, Luton, 2015, pp. 1-6. doi: 10.1109/FGCT.2015.7300257
Abstract: Email is one of the most popular Internet applications which enables individuals and organisations alike to communicate and work effectively. However, email has also been used by criminals as a means to commit cybercrimes such as phishing, spamming, cyberbullying and cyberstalking. Cyberstalking is a relatively new surfacing cybercrime, which recently has been recognised as a serious social and worldwide problem. Combating email-based cyberstalking is a challenging task that involves two crucial steps: a robust method for filtering and detecting cyberstalking emails and documenting evidence for identifying cyberstalkers as a prevention and deterrence measure. In this paper, we discuss a hybrid approach that applies machine learning to detect, filter and file evidence. To this end we present a new robust feature selection approach to select informative features, aiming to improve the performance of machine learning within this task.
Keywords: Internet; computer crime; feature selection; learning (artificial intelligence); unsolicited e-mail; Internet applications; cyberbullying; cybercrimes; cyberstalking; email-based cyberstalking; machine learning; phishing; robust feature selection approach; spamming; Computer crime; Computers; Electronic mail; Feature extraction; Law enforcement; Robustness (ID#: 16-11063)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300257&isnumber=7300234
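The paper's feature-selection method is not reproduced here; as a toy illustration of the general idea, scoring tokens by how unevenly they occur across the two email classes, consider the following sketch (the corpus, tokens and scores are all invented):

```python
from collections import Counter

# Toy corpus: (token set, label), where label 1 marks a cyberstalking email.
corpus = [
    ({"watch", "follow", "you"}, 1),
    ({"find", "you", "tonight"}, 1),
    ({"meeting", "agenda", "monday"}, 0),
    ({"invoice", "attached", "thanks"}, 0),
]

def score_features(corpus):
    # Score each token by the absolute difference of its per-class
    # document frequency; informative tokens appear mostly in one class.
    pos, neg = Counter(), Counter()
    n_pos = sum(1 for _, label in corpus if label == 1)
    n_neg = len(corpus) - n_pos
    for tokens, label in corpus:
        (pos if label == 1 else neg).update(tokens)
    vocab = set(pos) | set(neg)
    return {t: abs(pos[t] / n_pos - neg[t] / n_neg) for t in vocab}

scores = score_features(corpus)
top = max(scores, key=scores.get)
```

A real system would feed the top-scoring features into a classifier; here the point is only that feature selection ranks tokens before any learning happens.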
V. P. Sonawane and P. Irabashetti, “Method for Preventing Direct and Indirect Discrimination in Data Mining,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, Pune, 2015, pp. 353-357. doi: 10.1109/ICCUBEA.2015.74
Abstract: Data mining technology is used to extract the useful knowledge concealed in very large collections of data. Some negative perceptions have arisen about data mining technology, among them potential privacy invasion and potential discrimination. The latter consists of unfairly treating individuals on the basis of their belonging to a specific group. Data mining and automated data collection methods such as classification have paved the way for automated decisions, like granting or denying a loan, on the basis of race, creed, etc. If the training data sets are biased with respect to discriminatory attributes like gender, race, creed, etc., discriminatory decisions may ensue. For this reason, data mining technology has introduced anti-discrimination methods, including discrimination discovery and avoidance. Discrimination can be direct or indirect. Direct discrimination occurs when decisions are made on the basis of sensitive attributes, while indirect discrimination occurs when decisions are made on the basis of non-sensitive attributes that are strongly correlated with sensitive ones. In this paper, we address discrimination avoidance in data mining and propose a novel method for discrimination prevention with a post-processing approach. We apply the Classification based on Predictive Association Rules (CPAR) algorithm, a kind of associative classification method that combines the advantages of both associative classification and traditional rule-based classification, to prevent discrimination in post-processing. We evaluate the utility of the proposed approach and compare it with existing approaches. The experimental assessment shows that the proposed method effectively removes direct or indirect discriminatory biases from the original data set while maintaining data quality.
Keywords: data analysis; data mining; database management systems; CPAR algorithm; antidiscrimination methods; automated data collection methods; automated judgment; classification based on predictive association rules; creed; data mining technology; data sets; database; discrimination avoidance; discrimination discovery; discriminatory attributes; masculine category; nonsensitive attributes; post processing approach; race; Accuracy; Computers; Data mining; Data models; Databases; Prediction algorithms; Training data; Direct discrimination prevention; Indirect discrimination prevention; antidiscrimination; post-processing; privacy; rule generalization; rule protection (ID#: 16-11064)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155866&isnumber=7155781
G. G. Thoreson, E. A. Schneider, H. Armstrong and C. A. van der Hoeven, “The Application of Neutron Transport Green’s Functions to Threat Scenario Simulation,” in IEEE Transactions on Nuclear Science, vol. 62, no. 1, pp. 236-249, Feb. 2015. doi: 10.1109/TNS.2015.2389769
Abstract: Radiation detectors provide deterrence and defense against nuclear smuggling attempts by scanning vehicles, ships, and pedestrians for radioactive material. Understanding detector performance is crucial to developing novel technologies, architectures, and alarm algorithms. Detection can be modeled through radiation transport simulations; however, modeling a spanning set of threat scenarios over the full transport phase-space is computationally challenging. Previous research has demonstrated Green's functions can simulate photon detector signals by decomposing the scenario space into independently simulated submodels. This paper presents decomposition methods for neutron and time-dependent transport. As a result, neutron detector signals produced from full forward transport simulations can be efficiently reconstructed by sequential application of submodel response functions.
Keywords: Green's function methods; neutron detection; neutron transport theory; nuclear materials transportation; particle detectors; photodetectors; alarm algorithm; full forward transport simulation; full transport phase-space; neutron detector signals; neutron transport Green functions; nuclear smuggling; photon detector signals; radiation detectors; radiation transport simulation; radioactive material; scanning vehicles; submodel response function; threat scenario simulation; time-dependent transport; Computational modeling; Detectors; Geometry; Materials; Neutrons; Photonics; Interdiction; nuclear material; photon transport; radiation portal monitor; smuggle (ID#: 16-11065)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024189&isnumber=7033069
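The decomposition idea, assembling a full detector response by superposing independently computed submodel responses rather than re-running the full transport simulation, can be caricatured in a few lines. The response values and source positions below are entirely made up; the real work involves full neutron transport physics:

```python
# Hypothetical precomputed submodel responses: detector counts per unit
# source strength for a source at each cargo position (invented numbers).
response = {"front": 0.8, "middle": 0.3, "rear": 0.1}

def detector_signal(sources, response):
    # Superposition: the full forward-transport signal is reconstructed
    # as the sum of each source's strength times its precomputed
    # Green's-function response.
    return sum(strength * response[pos] for pos, strength in sources)

sources = [("front", 2.0), ("rear", 10.0)]
signal = detector_signal(sources, response)   # 2.0*0.8 + 10.0*0.1 = 2.6
```

The computational win is that the expensive transport runs happen once per submodel, after which any combination of sources is a cheap weighted sum.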
A. Le Compte, D. Elizondo and T. Watson, “A Renewed Approach to Serious Games for Cyber Security,” Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, Tallinn, 2015, pp. 203-216. doi: 10.1109/CYCON.2015.7158478
Abstract: We are living in a world which is continually evolving and where modern conflicts have moved to the cyber domain. In its 2010 Strategic Concept, NATO affirmed its engagement to reinforce the defence and deterrence of its state members. In this light, it has been suggested that the gamification of training and education for cyber security will be beneficial. Although serious games have demonstrated pedagogic effectiveness in this field, they have only been used in a limited number of contexts, revealing some limitations. Thus, it is argued that serious games could be used in informal contexts while achieving similar pedagogic results. It is also argued that the use of such a serious game could potentially reach a larger audience than existing serious games, while complying with national cyber strategies. To this end, a framework for designing serious games which are aimed at raising an awareness of cyber security to those with little or no knowledge of the subject is presented. The framework, based upon existing frameworks and methodologies, is also accompanied with a set of cyber security skills, itself based upon content extracted from government sponsored awareness campaigns, and a method of integrating these skills into the framework. Finally, future research will be conducted to refine the framework and to improve the set of cyber security related skills in order to suit a larger range of players. A proof of concept will also be designed in order to collect empirical data and to validate the effectiveness of the framework.
Keywords: security of data; serious games (computing); cyber domain; cyber security; empirical data collection; national cyber strategy; renewed approach; serious games; Business; Computer security; Context; Games; Industries; Training; cyber security; framework; serious games (ID#: 16-11066)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158478&isnumber=7158456
H. T. Liu and Y. M. Tang, “The Causal Model Analysis of Resident's Evaluation of Closed Circuit TV and Security Perception,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 61-66. doi: 10.1109/CCST.2015.7389658
Abstract: Closed Circuit TV (CCTV) is considered to be an effective technology for crime deterrence in Taiwan. But evidence of CCTV effectiveness has always come from police administration, and there have been few studies exploring the relationships among community surveillance, public space environment, situational crime prevention, security perception and Closed Circuit TV evaluation from the residents' viewpoint. This study developed an integrative model to examine their causal relationships quantitatively. We collected 830 community resident samples around Taiwan and used structural equation modeling (SEM) to verify the hypotheses we explored. The results showed that residents' perceptions of community surveillance, public space environment and situational crime prevention all positively influenced security perception and Closed Circuit TV evaluation.
Keywords: closed circuit television; CCTV resident evaluation causal model analysis; Taiwan; closed circuit TV security perception; community surveillance; police administration; public space environment; situational crime prevention; structural equation model; Aerospace electronics; Green design; Mathematical model; Reliability; Security; Surveillance; TV; Closed Circuit TV evaluation; Community surveillance; public space environment; security perception; situational crime prevention (ID#: 16-11067)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389658&isnumber=7389647
“IEEE Draft Standard for Identification of Contact Wire Used in Overhead Contact Systems,” in IEEE P1896/D2.2, June 2015, pp. 1-15, July 23 2015. doi: (not provided)
Abstract: This standard defines the parameters to be used in the identification of contact wires used in transit systems and electric railways and railroads. This standard is intended for use in identifying contact wire by metallurgical content, electrical conductivity and agency ownership. This standard is not intended to replace or supersede existing identification standards using grooving but rather to simplify identification methods.
Keywords: IEEE Standards; Light rail systems; Overhead cable systems; Power transmission lines; Public transportation; Rail transportation; OCS design; OCS styles; Overhead Contact System or OCS; contact wire; electric trolley buses; light rail system; streetcars; transit systems; trolley; trolley wire (ID#: 16-11068)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165575&isnumber=7165574
![]() |
Dynamic Network Services and Security 2015 |
Since the Bell System introduced “dynamic routing” several decades ago using the SS-7 signaling system, dynamic network services have been an important tool for network management and intelligence. For the Science of Security community, dynamic methods are useful toward solving the hard problems of resiliency, metrics, and composability. The work cited here was presented in 2015.
M. K. Sharma, R. S. Bali and A. Kaur, “Dynamic Key Based Authentication Scheme for Vehicular Cloud Computing,” Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, Noida, 2015, pp. 1059-1064. doi: 10.1109/ICGCIoT.2015.7380620
Abstract: In recent years, Vehicular Cloud Computing (VCC) has emerged as a new technology to provide uninterrupted information to vehicles from anywhere, at any time. VCC provides two types of services to users: safety-related messages and non-safety-related messages. Vehicles have limited computational power, storage, etc., so they collect information and send it to the local or vehicular cloud for computation or storage purposes. But due to the dynamic nature of the network, rapid topology changes and the open communication medium, information can be altered, leading to misguided users, wrong information sharing, etc. In the proposed scheme, Elliptic Curve Cryptography is used for secure communication in the network, which also ensures security requirements such as confidentiality, integrity and privacy. The proposed scheme ensures mutual authentication of both the sender and the receiver that want to communicate. The scheme uses additional operations such as a one-way hash function and concatenation to secure the network against various attacks, i.e. spoofing attacks, man-in-the-middle attacks, replay attacks, etc. The effectiveness of the proposed scheme is evaluated using metrics such as packet delivery ratio, throughput and end-to-end delay, and it is found to perform better than when the scheme is not applied.
Keywords: automobiles; cloud computing; intelligent transportation systems; public key cryptography; vehicular ad hoc networks; VCC; dynamic key-based authentication scheme; elliptic curve cryptography; mutual authentication; open communication medium; vehicular cloud computing; Authentication; Cloud computing; Elliptic curve cryptography; Elliptic curves; Receivers; Vehicles; Intelligent Transportation System; Key Authentication; Key Generation; VANET's; Vehicular Cloud Computing (ID#: 16-10976)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380620&isnumber=7380415
C. J. Chung, T. Xing, D. Huang, D. Medhi and K. Trivedi, “SeReNe: On Establishing Secure and Resilient Networking Services for an SDN-Based Multi-Tenant Datacenter Environment,” Dependable Systems and Networks Workshops (DSN-W), 2015 IEEE International Conference on, Rio de Janeiro, 2015, pp. 4-11. doi: 10.1109/DSN-W.2015.25
Abstract: In the current enterprise datacenter networking environment, a major hurdle in the development of network security is the lack of an orchestrated and resilient defensive mechanism that uses well-established quantifiable metrics, models, and evaluation methods. In this position paper, we describe an emerging Secure and Resilient Networking (SeReNe) service model to establish a programmable and dynamic defensive mechanism that can adjust the system's networking resources, such as topology, bandwidth allocation, and traffic/flow forwarding policies, according to the network security situation. We posit that this requires addressing two interdependent technical areas: (a) a Moving Target Defense (MTD) framework at both the networking and software levels, and (b) an Adaptive Security-enabled Traffic Engineering (ASeTE) approach to select optimal countermeasures by considering the effectiveness of countermeasures and network bandwidth allocations while minimizing the intrusiveness to the applications and the cost of deploying the countermeasures. We believe that our position can greatly benefit virtual networking systems established in datacenter or enterprise environments that have adopted the latest OpenFlow technologies.
Keywords: bandwidth allocation; cloud computing; computer centres; computer network security; software defined networking; virtual machines; ASeTE; MTD framework; OpenFlow technologies; SDN-based multitenant datacenter environment; SeReNe service model; VM; VN; adaptive security-enabled traffic engineering; cloud virtual networking system; dynamic defensive mechanism; enterprise virtual networking systems; moving target defense; network bandwidth allocations; network security; programmable defensive mechanism; secure and resilient networking services; software defined networking; virtual machines; Bridges; Cloud computing; Computational modeling; Computer bugs; Home appliances; Security; multi-tenant datacenter; security and resilience (ID#: 16-10977)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272544&isnumber=7272533
M. Ennahbaoui, H. Idrissi and S. E. Hajji, “Secure and Flexible Grid Computing Based Intrusion Detection System Using Mobile Agents and Cryptographic Traces,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. 314-319. doi: 10.1109/INNOVATIONS.2015.7381560
Abstract: Grid Computing is one of the new and innovative information technologies that attempt to make resource sharing global and easier. Integrated into networked areas, the resources and services in a grid are dynamic, heterogeneous, and belong to multiple spaced domains, which effectively enables large-scale collection, sharing and diffusion of data. However, grid computing is still a new paradigm that raises many security issues and conflicts in the computing infrastructures where it is integrated. In this paper, we propose an intrusion detection system (IDS) based on the autonomy, intelligence and independence of mobile agents to record the behaviors and actions on the grid resource nodes to detect malicious intruders. This is achieved through the use of cryptographic traces associated with a chaining mechanism to elaborate hashed black statements of the executed agent code, which are then compared to detect intrusions. We have conducted experiments based on three metrics: network load, response time and detection ability, to evaluate the effectiveness of our proposed IDS.
Keywords: cryptography; grid computing; mobile agents; IDS; chaining mechanism; cryptographic traces; data collection; data diffusion; data sharing; detection ability metric; intrusion detection system; network load metric; resources sharing; response time metric; security issues; Computer architecture; Cryptography; Grid computing; Intrusion detection; Mobile agents; Monitoring (ID#: 16-10978)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381560&isnumber=7381480
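The chaining mechanism the authors describe can be sketched as a generic hash chain over the agent's recorded actions (this is an illustration of the idea, not the paper's exact construction; action strings and the seed are invented):

```python
import hashlib

def chain_traces(actions, seed=b"grid-node-01"):
    # Chaining mechanism: each recorded trace is a one-way hash of the
    # previous link and the current action, so tampering with any
    # recorded action invalidates every later link in the chain.
    link = hashlib.sha256(seed).digest()
    chain = []
    for action in actions:
        link = hashlib.sha256(link + action.encode()).digest()
        chain.append(link)
    return chain

def verify_traces(actions, chain, seed=b"grid-node-01"):
    # Recompute the chain from the claimed actions and compare.
    return chain == chain_traces(actions, seed)

actions = ["read:/data/input", "exec:job-42", "write:/data/output"]
trace = chain_traces(actions)
tampered = ["read:/data/input", "exec:rm-all", "write:/data/output"]
```

Because each link commits to all earlier ones, an intruder who alters a recorded action cannot produce a consistent trace without recomputing every subsequent hash.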
A. Bouchami, E. Goettelmann, O. Perrin and C. Godart, “Enhancing Access-Control with Risk-Metrics for Collaboration on Social Cloud-Platforms,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 864-871. doi: 10.1109/Trustcom.2015.458
Abstract: Cloud computing promotes the exchange of information, resources and tasks between different organizations by facilitating the deployment and adoption of centralized collaboration platforms: Professional Social Networking (PSN). However, issues concerning security management are preventing their widespread use, as organizations still need to protect some of their sensitive data. Traditional access control policies, defined over the triplet (User, Action, Resource) are difficult to put in place in such highly dynamic environments. In this paper, we introduce risk metrics in existing access control systems to combine the fine-grained policies defined at the user level, with a global risk-policy defined at the organization's level. Experiments show the impact of our approach when deployed on traditional systems.
Keywords: authorisation; cloud computing; data protection; groupware; organisational aspects; resource allocation; risk management; social networking (online); PSN; access-control; action; centralized collaboration platform; fine-grained policies; global risk-policy; organization level; professional social networking; resource; risk-metrics; security management; sensitive data protection; social cloud-platform; user; Access control; Collaboration; Companies; Context; Social network services; Access-Control; Professional Social Networking; Risk; Security (ID#: 16-10979)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345366&isnumber=7345233
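One simple way to combine a fine-grained (User, Action, Resource) policy with an organisation-level risk policy, in the spirit of (though not identical to) the paper's proposal, is a shared risk budget that every granted action draws down. All rules, risk scores and the budget below are invented:

```python
class RiskAwareAccessControl:
    """Fine-grained (user, action, resource) rules combined with a
    global, organisation-level risk budget (illustrative sketch)."""

    def __init__(self, rules, risk_of, budget):
        self.rules = set(rules)    # allowed (user, action, resource) triplets
        self.risk_of = risk_of     # per-action risk score
        self.budget = budget       # organisation-wide cumulative risk budget
        self.spent = 0.0

    def request(self, user, action, resource):
        if (user, action, resource) not in self.rules:
            return False           # denied by the user-level policy
        risk = self.risk_of.get(action, 1.0)
        if self.spent + risk > self.budget:
            return False           # denied by the organisation's risk policy
        self.spent += risk
        return True

ac = RiskAwareAccessControl(
    rules={("alice", "share", "doc1")},
    risk_of={"share": 0.6},
    budget=1.0,
)
```

A request must pass both checks: the local rule can grant an action that the global risk policy still refuses once the organisation's accumulated risk is too high.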
N. Soule, B. Simidchieva, F. Yaman et al., “Quantifying & Minimizing Attack Surfaces Containing Moving Target Defenses,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287449
Abstract: The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
Keywords: security of data; attack surface minimization; cyber friendly fire; cyber security exposure; dynamic proactive defenses; moving target defenses; resilient systems; timeliness violations; Analytical models; Computational modeling; IP networks; Measurement; Minimization; Security; Surface treatment; cyber security analysis; modeling; threat assessment (ID#: 16-10980)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287449&isnumber=7287407
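The quantify-and-minimize loop can be illustrated with a toy model in which each exposed interaction point carries an exposure weight; the surface metric is the weighted sum, and minimization drops interactions the mission does not require. Interaction names and weights here are invented, not the paper's metrics:

```python
def surface_score(interactions, weight):
    # Attack surface metric: sum of exposure weights over enabled
    # interaction points (covering both system and defense aspects).
    return sum(weight[i] for i in interactions)

def minimize_surface(interactions, required):
    # Minimization opportunity: remove every interaction that is not
    # required for successful mission execution.
    return {i for i in interactions if i in required}

# Invented interaction points and exposure weights.
weight = {"ssh": 3, "admin-ui": 5, "https": 2, "telemetry": 1}
current = {"ssh", "admin-ui", "https", "telemetry"}
required = {"https", "telemetry"}

minimized = minimize_surface(current, required)
```

Comparing scores before and after makes the "cyber friendly fire" point concrete: an added defense with its own unsecured admin interface would raise the score rather than lower it.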
B. Bhargava, P. Angin, R. Ranchal and S. Lingayat, “A Distributed Monitoring and Reconfiguration Approach for Adaptive Network Computing,” Reliable Distributed Systems Workshop (SRDSW), 2015 IEEE 34th Symposium on, Montreal, QC, 2015, pp. 31-35. doi: 10.1109/SRDSW.2015.16
Abstract: The past decade has witnessed immense developments in the field of network computing thanks to the rise of the cloud computing paradigm, which enables shared access to a wealth of computing and storage resources without needing to own them. While cloud computing facilitates on-demand deployment, mobility and collaboration of services, mechanisms for enforcing security and performance constraints when accessing cloud services are still at an immature state. The highly dynamic nature of networks and clouds makes it difficult to guarantee any service level agreements. On the other hand, providing quality of service guarantees to users of mobile and cloud services that involve collaboration of multiple services is contingent on the existence of mechanisms that give accurate performance estimates and security features for each service involved in the composition. In this paper, we propose a distributed service monitoring and dynamic service composition model for network computing, which provides increased resiliency by adapting service configurations and service compositions to various types of changes in context. We also present a greedy dynamic service composition algorithm to reconfigure service orchestrations to meet user-specified performance and security requirements. Experiments with the proposed algorithm and the ease-of-deployment of the proposed model on standard cloud platforms show that it is a promising approach for agile and resilient network computing.
Keywords: cloud computing; quality of service; security of data; software fault tolerance; software prototyping; agile network computing; distributed service monitoring; dynamic service composition model; greedy dynamic service composition algorithm; quality of service; security requirement; service orchestration reconfiguration; Cloud computing; Context; Heuristic algorithms; Mobile communication; Monitoring; Quality of service; Security; adaptability; agile computing; monitoring; resilience; service-oriented computing (ID#: 16-10981)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371438&isnumber=7371403
D. Zhang and J. P. G. Sterbenz, “Measuring the Resilience of Mobile Ad Hoc Networks with Human Walk Patterns,” Reliable Networks Design and Modeling (RNDM), 2015 7th International Workshop on, Munich, 2015, pp. 161-168. doi: 10.1109/RNDM.2015.7325224
Abstract: MANET (mobile ad hoc network) technology has become increasingly attractive for real-world applications in the past decade. Dynamic and intermittent connectivity caused by node mobility poses a huge challenge to the operation of MANETs that require end-to-end paths for communication. Attacks against critical nodes can result in a more degraded network service. In this paper, we evaluate the network resilience of real-world human walking traces under different malicious attacks. We propose a new flexible attack strategy by selecting different centrality metrics to measure node significance according to network topological properties. We employ a resilience quantification approach to evaluate the node pair communication ability spanning a range of network operational states. Resilience of topological robustness is evaluated for different combinations of network parameters, and resilience of application layer service using different routing protocols is compared given a range of states of topological flow robustness. Our results show that flexible attacks impact overall network resilience more than attacks based on any single centrality metric with varying network connectivities.
Keywords: mobile ad hoc networks; routing protocols; telecommunication network topology; telecommunication security; MANET technology; application layer service; critical nodes; dynamic connectivity; end-to-end paths; flexible attack strategy; flexible attacks; intermittent connectivity; malicious attacks; mobile ad hoc network technology; network operational states; network resilience; network topological properties; node mobility; node pair communication ability; resilience quantification approach; single centrality metric; topological robustness resilience; varying network connectivities; Ad hoc networks; Measurement; Mobile computing; Network topology; Resilience; Robustness; Topology; MANET; mobile wireless topology challenge modeling; ns-3 simulation; resilient survivable disruption-tolerant network; time-varying weighted graph (ID#: 16-10982)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325224&isnumber=7324297
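The flexible attack strategy described above ranks nodes by a centrality metric and removes the most central ones, then measures how connectivity degrades. A minimal stand-in can illustrate the idea; the toy topology below is hypothetical, and degree centrality is used instead of the paper's full set of metrics:

```python
from collections import defaultdict, deque

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

def degree_attack(adj, k):
    """Remove the k highest-degree nodes and report the surviving giant component."""
    targets = sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:k]
    return largest_component(adj, targets)

# Toy topology: node 0 is a hub bridging two clusters.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5), (3, 5)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

print(largest_component(adj, set()))  # intact network: 6 nodes reachable
print(degree_attack(adj, 1))          # removing the hub splits the network
```

Swapping the sort key for betweenness or closeness centrality, or a combination of metrics as the paper proposes, changes which nodes are targeted and how fast the giant component collapses.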
D. Zhang and J. P. G. Sterbenz, “Robustness Analysis of Mobile Ad Hoc Networks using Human Mobility Traces,” Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, Kansas City, MO, 2015, pp. 125-132. doi: 10.1109/DRCN.2015.7149003
Abstract: With the rapid advancement of wireless technology and the exponential increase of wireless devices in the past decades, there are more consumer applications for MANETs (mobile ad hoc networks) in addition to the traditional military uses. A resilient and robust MANET is essential to high service quality for applications. The dynamically changing topologies of MANETs pose a huge challenge to normal network operations. Furthermore, malicious attacks against critical nodes in the network could result in the deterioration of the network. In this paper, we employ several real-world human mobility traces to analyze network robustness in the time domain. We apply attacks against important nodes of the human topology and compare the impact of attacks based on different centrality measures. Our results confirm that nodes with high betweenness in a well-connected large dynamic network play the most pivotal roles in the communication between all node pairs.
Keywords: mobile ad hoc networks; telecommunication security; MANET; human mobility traces; malicious attacks; robustness analysis; Ad hoc networks; Correlation; Measurement; Mobile computing; Network topology; Robustness; Topology; Dynamic networks; Graph centrality; Human mobility traces; MANETs; Resilience and survivability; Robustness (ID#: 16-10983)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149003&isnumber=7148972
C. J. Chung, T. Xing, D. Huang, D. Medhi and K. Trivedi, “SeReNe: On Establishing Secure and Resilient Networking Services for an SDN-based Multi-tenant Datacenter Environment,” Dependable Systems and Networks Workshops (DSN-W), 2015 IEEE International Conference on, Rio de Janeiro, 2015, pp. 4-11. doi: 10.1109/DSN-W.2015.25
Abstract: In the current enterprise datacenter networking environment, a major hurdle in the development of network security is the lack of an orchestrated and resilient defensive mechanism that uses well-established quantifiable metrics, models, and evaluation methods. In this position paper, we describe an emerging Secure and Resilient Networking (SeReNe) service model to establish a programmable and dynamic defensive mechanism that can adjust the system's networking resources, such as topology, bandwidth allocation, and traffic/flow forwarding policies, according to the network security situation. We posit that this requires addressing two interdependent technical areas: (a) a Moving Target Defense (MTD) framework at both the networking and software levels, and (b) an Adaptive Security-enabled Traffic Engineering (ASeTE) approach to select optimal countermeasures by considering the effectiveness of countermeasures and network bandwidth allocations while minimizing the intrusiveness to the applications and the cost of deploying the countermeasures. We believe that our position can greatly benefit virtual networking systems established in datacenter or enterprise environments that have adopted the latest OpenFlow technologies.
Keywords: bandwidth allocation; cloud computing; computer centres; computer network security; software defined networking; virtual machines; ASeTE; MTD framework; OpenFlow technologies; SDN-based multitenant datacenter environment; SeReNe service model; VM; VN; adaptive security-enabled traffic engineering; cloud virtual networking system; dynamic defensive mechanism; enterprise virtual networking systems; moving target defense; network bandwidth allocations; network security; programmable defensive mechanism; secure and resilient networking services; software defined networking; virtual machines; Bridges; Cloud computing; Computational modeling; Computer bugs; Home appliances; Security; multi-tenant datacenter; security and resilience (ID#: 16-10984)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272544&isnumber=7272533
D. Oliveira, P. Carvalho and S. R. Lima, “Towards Cloud Storage Services Characterization,” Computational Science and Engineering (CSE), 2015 IEEE 18th International Conference on, Porto, 2015, pp. 129-136. doi: 10.1109/CSE.2015.40
Abstract: Monitoring of Internet services shows that there is a global and growing trend in the use of Cloud Services. This paper aims to identify and quantify the use of Cloud Services taking the University of Minho (UMinho) network as a practical case study. Thus, this study focuses on characterizing Cloud Storage services, identifying the most accessed Cloud Storage Providers and the characteristics of the corresponding traffic. As a first step, this involves identifying appropriate techniques for traffic classification and defining a model for processing the collected traces. Cloud Storage services present several characteristics that render current classification methods insufficient or too complex to apply, namely the use of dynamic communication ports and security protocols that encrypt the traffic. This has motivated the use of a new classification approach based on the Tstat tool, which allows extracting server signatures during SSL handshaking. The obtained results provide global statistics regarding the most used services at UMinho, focusing subsequently on Cloud Storage services. For these, the top Cloud Storage Providers within user preferences are identified and the corresponding traffic characteristics discussed.
Keywords: cloud computing; pattern classification; storage management; telecommunication traffic; Internet services monitoring; SSL handshaking; Tstat tool; UMinho network; University of Minho networks; cloud storage providers; cloud storage services characterization; dynamic communication ports; encryption; global statistics; security protocols; signatures extraction; traffic characteristics; traffic classification; user preferences; Cloud computing; Cryptography; Payloads; Ports (Computers); Protocols; Servers; cloud services; cloud storage; traffic classification (ID#: 16-10985)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371365&isnumber=7371335
S. Jegadeeswari, P. Dinadayalan and N. Gnanambigai, “A Neural Data Security Model: Ensure High Confidentiality and Security in Cloud Datastorage Environment,” Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, Kochi, 2015, pp. 400-406. doi: 10.1109/ICACCI.2015.7275642
Abstract: Cloud computing is a computing paradigm which provides a dynamic environment for end users to guarantee Quality of Service (QoS) on data and confidentiality of the outsourced data. Confidentiality is about accessing a set of information from a cloud database with a high security level. This research proposes a new cloud data security model, a Neural Data Security Model, to ensure high confidentiality and security in a cloud data storage environment and achieve data confidentiality on the cloud database platform. This Neural Data Security Model comprises a Dynamic Hashing Fragmented Component and a Feedback Neural Data Security Component. The data security component handles data encryption for sensitive data using the RSA algorithm to increase the confidentiality level. The fragmented sensitive data is stored using dynamic hashing. The Feedback Neural Data Security Component is used to encrypt and decrypt the sensitive data by means of a Feedback Neural Network, which is deployed using the RSA security algorithm. This work is efficient and effective for all kinds of queries requested by the user. Its performance is better than that of conventional cloud data security models as it achieves a higher data confidentiality level.
Keywords: cloud computing; digital storage; neural nets; public key cryptography; quality of service; QoS; RSA algorithm; RSA security algorithm; cloud data security model; cloud data storage environment; cloud database platform; cloud neural data security model; data confidentiality; data encryption; dynamic hashing fragmented component; feedback neural data security component; feedback neural network; fragmented sensitive data; high confidentiality; sensitive data decryption; Data models; Encryption; Memory; Quality of service; Training; Cloud Computing; Confidentiality; Data security; Feedback Neural Network; Neural Network; RSA (ID#: 16-10986)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275642&isnumber=7275573
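The building blocks the abstract names (RSA encryption of sensitive fields plus hash-based placement of the fragments) can be sketched with textbook RSA and a hash bucket function. The parameters below are toy-sized and purely illustrative; this is not the paper's component design, and the bucket function is a hypothetical stand-in for its dynamic hashing:

```python
import hashlib

# Toy RSA parameters (insecure sizes, for illustration only).
p, q = 61, 53
n = p * q                  # modulus n = 3233
phi = (p - 1) * (q - 1)    # Euler's totient = 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)    # c = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)    # m = c^d mod n

def bucket(ciphertext: int, buckets: int = 8) -> int:
    """Hypothetical dynamic-hashing stand-in: place a fragment in a bucket."""
    return int(hashlib.sha256(str(ciphertext).encode()).hexdigest(), 16) % buckets

secret = 1234                      # a sensitive field; must be < n
c = encrypt(secret)
assert decrypt(c) == secret        # round trip recovers the plaintext
print(c, bucket(c))
```

Real deployments use RSA moduli of 2048 bits or more with proper padding (e.g. OAEP); the small numbers here only make the arithmetic visible.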
M. K. Sharma, R. S. Bali and A. Kaur, “Dynamic Key Based Authentication Scheme for Vehicular Cloud Computing,” Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, Noida, 2015, pp. 1059-1064. doi: 10.1109/ICGCIoT.2015.7380620
Abstract: In recent years, Vehicular Cloud Computing (VCC) has emerged as a new technology to provide uninterrupted information to vehicles anywhere, anytime. VCC provides two types of services to users: safety-related messages and non-safety-related messages. Vehicles have limited computational power, storage, etc., so they collect information and send it to the local or vehicular cloud for computation or storage purposes. But due to the dynamic nature, rapid topology changes and open communication medium, the information can be altered, which leads to misguided users, wrong information sharing, etc. In the proposed scheme, Elliptic Curve Cryptography is used for secure communication in the network, which also ensures security requirements such as confidentiality, integrity and privacy. The proposed scheme ensures mutual authentication of both the sender and the receiver that want to communicate. The scheme uses additional operations such as a one-way hash function and concatenation to secure the network against various attacks, i.e. spoofing, man-in-the-middle and replay attacks. The effectiveness of the proposed scheme is evaluated using metrics such as packet delivery ratio, throughput and end-to-end delay, and it is found to perform better than when the scheme is not applied.
Keywords: automobiles; cloud computing; intelligent transportation systems; public key cryptography; vehicular ad hoc networks; VCC; dynamic key-based authentication scheme; elliptic curve cryptography; mutual authentication; open communication medium; vehicular cloud computing; Authentication; Cloud computing; Elliptic curve cryptography; Elliptic curves; Receivers; Vehicles; Intelligent Transportation System; Key Authentication; Key Generation; VANET's; Vehicular Cloud Computing (ID#: 16-10987)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380620&isnumber=7380415
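The one-way-hash-plus-concatenation protection described above amounts to a keyed message tag combined with freshness checking. A hedged sketch follows; the key and message fields are hypothetical, HMAC-SHA256 stands in for the paper's exact construction, and a simple seen-timestamp set stands in for its replay defense:

```python
import hashlib
import hmac

SHARED_KEY = b"session-key-material"   # hypothetical key from the ECC exchange

def make_message(payload: bytes, timestamp: str):
    # Keyed hash over key || payload || timestamp guards integrity and origin.
    tag = hmac.new(SHARED_KEY, payload + timestamp.encode(),
                   hashlib.sha256).hexdigest()
    return payload, timestamp, tag

def verify(payload, timestamp, tag, seen_timestamps):
    if timestamp in seen_timestamps:       # replay attack: reject repeats
        return False
    expected = hmac.new(SHARED_KEY, payload + timestamp.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False                       # spoofed or tampered message
    seen_timestamps.add(timestamp)
    return True

seen = set()
msg = make_message(b"road hazard at km 12", "2015-06-01T10:00:00Z")
print(verify(*msg, seen))   # True: fresh, authentic message
print(verify(*msg, seen))   # False: replayed message is rejected
```

An attacker without `SHARED_KEY` cannot forge a valid tag, and resending a captured message fails the freshness check, matching the spoofing, man-in-the-middle and replay threats the abstract lists.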
M. A. Abdrabou, A. D. E. Elbayoumy and E. A. El-Wanis, “LTE Authentication Protocol (EPS-AKA) Weaknesses Solution,” 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, 2015, pp. 434-441. doi: 10.1109/IntelCIS.2015.7397256
Abstract: Extensible Authentication Protocol (EAP) is an authentication framework in Long Term Evolution (LTE) networks. EAP-AKA is one of the EAP methods; it uses the Authentication and Key Agreement (AKA) mechanism based on challenge-response. EAP-AKA was used in 3rd-generation mobile networks, then modified and inherited by 4th-generation mobile networks (LTE) as the Evolved Packet System Authentication and Key Agreement (EPS-AKA) mechanism, which is used when the user accesses the network through E-UTRAN. EPS-AKA vulnerabilities are disclosure of the user identity, man-in-the-middle attacks and Denial of Service (DoS) attacks, so a robust authentication mechanism must replace EPS-AKA to avoid such attacks. In this paper, a Modified Evolved Packet System Authentication and Key Agreement (MEPS-AKA) protocol based on Simple Password Exponential Key Exchange (SPEKE) and symmetric key cryptography is proposed to solve these problems by performing a pre-authentication procedure to generate a dynamic key every time the user accesses the network; each message sent or received is also confidentiality protected. The Scyther tool is used to verify the efficiency of the proposed protocol. EPS-AKA and MEPS-AKA are simulated using the C programming language to calculate the execution time of both algorithms. The proposed protocol is also simulated as a client-server application program written in C#.
Keywords: Long Term Evolution; protocols; telecommunication security; 3rd generation mobile networks; 4th generation mobile networks; AKA mechanism; C programming language; C# programming language; Denial of Services; DoS attacks; EPS-AKA mechanism; EPS-AKA weaknesses solution; LTE authentication protocol; LTE network; SPEKE; Scyther tool; authentication and key agreement; authentication framework; challenge response mechanisms; client-server application program; evolved packet system authentication and key agreement; extensible authentication protocol; long term evolution; preauthentication procedure; robust authentication mechanism; simple password exponential key exchange; symmetric key cryptography; Long Term Evolution; Protocols; Redundancy; AES; EAP-AKA; EPS-AKA; LTE; Scyther (ID#: 16-10988)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7397256&isnumber=7397173
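The SPEKE building block the proposal relies on is a Diffie-Hellman exchange whose generator is derived from a hash of the shared password, so each run yields a fresh session key. A toy sketch of that primitive (small modulus and a hypothetical credential; this is generic SPEKE, not the authors' MEPS-AKA message flow):

```python
import hashlib
from secrets import randbelow

P = 2 ** 127 - 1                     # a prime modulus (toy choice for the sketch)

def speke_generator(password: str) -> int:
    """SPEKE: derive the DH generator from the shared password, g = H(pw)^2 mod P."""
    h = int(hashlib.sha256(password.encode()).hexdigest(), 16)
    return pow(h, 2, P)

password = "shared-LTE-credential"   # hypothetical pre-shared secret
g = speke_generator(password)

a = randbelow(P - 2) + 1             # device's ephemeral exponent
b = randbelow(P - 2) + 1             # network's ephemeral exponent
A, B = pow(g, a, P), pow(g, b, P)    # values exchanged over the air

key_device  = pow(B, a, P)           # both sides derive the same dynamic key
key_network = pow(A, b, P)
assert key_device == key_network
```

Because the generator itself depends on the password, a man-in-the-middle who does not know the credential cannot complete the exchange, which is the property MEPS-AKA leans on for its pre-authentication step. Production use would require a large safe prime and key confirmation messages.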
N. W. Lo, M. C. Chiang and C. Y. Hsu, “Hash-Based Anonymous Secure Routing Protocol in Mobile Ad Hoc Networks,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 55-62. doi: 10.1109/AsiaJCIS.2015.27
Abstract: A mobile ad hoc network (MANET) is composed of multiple wireless mobile devices in which an infrastructure-less network with dynamic topology is built based on wireless communication technologies. Novel applications such as location-based services and personal communication Apps used by mobile users with handheld wireless devices utilize MANET environments. In consequence, communication anonymity and message security have become critical issues for MANET environments. In this study, a novel secure routing protocol with communication anonymity, named the Hash-based Anonymous Secure Routing (HASR) protocol, is proposed to support identity anonymity, location anonymity and route anonymity, and defend against major security threats such as replay attack, spoofing, route maintenance attack, and denial of service (DoS) attack. Security analyses show that HASR can achieve both communication anonymity and message security with efficient performance in MANET environments.
Keywords: cryptography; mobile ad hoc networks; mobile computing; mobility management (mobile radio); routing protocols; telecommunication network topology; telecommunication security; DoS attack; HASR protocol; Hash-based anonymous secure routing protocol; MANET; denial of service attack; dynamic network topology; handheld wireless devices; location-based services; message security; mobile users; personal communication Apps; route maintenance attack; wireless communication technologies; wireless mobile devices; Cryptography; Mobile ad hoc networks; Nickel; Routing; Routing protocols; communication anonymity; message security; mobile ad hoc network; routing protocol (ID#: 16-10989)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153936&isnumber=7153836
M. Ennahbaoui, H. Idrissi and S. E. Hajji, “Secure and Flexible Grid Computing Based Intrusion Detection System Using Mobile Agents and Cryptographic Traces,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. 314-319. doi: 10.1109/INNOVATIONS.2015.7381560
Abstract: Grid Computing is one of the new and innovative information technologies that attempt to make resource sharing global and easier. Integrated in networked areas, the resources and services in a grid are dynamic, heterogeneous and belong to multiple spaced domains, which effectively enables large-scale collection, sharing and diffusion of data. However, grid computing is still a new paradigm that raises many security issues and conflicts in the computing infrastructures where it is integrated. In this paper, we propose an intrusion detection system (IDS) based on the autonomy, intelligence and independence of mobile agents to record the behaviors and actions on the grid resource nodes to detect malicious intruders. This is achieved through the use of cryptographic traces associated with a chaining mechanism to elaborate hashed black statements of the executed agent code, which are then compared to detect intrusions. We have conducted experiments based on three metrics (network load, response time and detection ability) to evaluate the effectiveness of our proposed IDS.
Keywords: cryptography; grid computing; mobile agents; IDS; chaining mechanism; cryptographic traces; data collection; data diffusion; data sharing; detection ability metric; intrusion detection system; network load metric; resources sharing; response time metric; security issues; Computer architecture; Cryptography; Grid computing; Intrusion detection; Mobile agents; Monitoring (ID#: 16-10990)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381560&isnumber=7381480
M. Ahmadi, M. Gharib, F. Ghassemi and A. Movaghar, “Probabilistic Key Pre-Distribution for Heterogeneous Mobile Ad Hoc Networks Using Subjective Logic,” Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, Gwangiu, 2015, pp. 185-192. doi: 10.1109/AINA.2015.184
Abstract: A public key management scheme in mobile ad hoc networks (MANETs) is an inevitable solution to achieve different security services such as integrity, confidentiality, authentication and nonrepudiation. Probabilistic asymmetric key pre-distribution (PAKP) is a self-organized and fully distributed approach. It resolves most of MANETs' challenging concerns, such as storage constraints, limited physical security and dynamic topology. In such a model, a secure path between two nodes is composed of one or more random successive direct secure links where intermediate nodes can read, drop or modify packets. Thus, intelligent selection of intermediate nodes on a secure path is vital to ensure security and lower traffic volume. In this paper, subjective logic is used to improve the PAKP method with the aim of selecting the most trusted and robust path. Consequently, our approach results in better data traffic and also improves security. The proposed algorithm chooses the least number of nodes among the most trustworthy nodes able to act as intermediate stations. We exploit two subjective-logic-based models: one exploits the subjective nature of trust between nodes and the other considers path conditions. We then evaluate our approach using the network simulator ns-3. Simulation results confirm the effectiveness and superiority of the proposed protocol compared to the basic PAKP scheme.
Keywords: cryptographic protocols; mobile ad hoc networks; public key cryptography; radio links; telecommunication security; telecommunication traffic; MANET; PAKP; data traffic; heterogeneous mobile ad hoc network security; intermediate node intelligent selection; network simulator ns-3; probabilistic asymmetric key predistribution; protocol; public key management scheme; random successive direct secure link; subjective logic; Ad hoc networks; Mobile computing; Probabilistic logic; Public key; Uncertainty; Probabilistic asymmetric key pre-distribution; Subjective logic; Trust (ID#: 16-10991)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097969&isnumber=7097928
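Subjective logic represents trust as (belief, disbelief, uncertainty) opinions, and propagating trust along a multi-hop path uses the standard discounting operator. The sketch below shows that operator with made-up opinion values; it is generic subjective logic, not the paper's full path-selection algorithm:

```python
def discount(trust, opinion):
    """Subjective-logic discounting: propagate an opinion along a trust link.
    Each opinion is (belief, disbelief, uncertainty) summing to 1."""
    b1, d1, u1 = trust
    b2, d2, u2 = opinion
    return (b1 * b2,              # belief survives only through trusted advice
            b1 * d2,              # disbelief likewise scaled by trust
            d1 + u1 + b1 * u2)    # distrust and uncertainty inflate uncertainty

def expectation(opinion, base_rate=0.5):
    """Probability expectation of an opinion, used to rank candidate paths."""
    b, d, u = opinion
    return b + base_rate * u

# Hypothetical path A -> B -> C: A's trust in B, then B's opinion of C.
trust_ab   = (0.8, 0.1, 0.1)
opinion_bc = (0.7, 0.2, 0.1)
path = discount(trust_ab, opinion_bc)
print(path)                          # belief ~0.56, disbelief ~0.16, uncertainty ~0.28
print(round(expectation(path), 2))
```

Ranking candidate secure paths by the expectation of their discounted opinions, and preferring shorter chains of high-trust intermediaries, captures the selection criterion the abstract describes.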
S. R. M. Krishna, P. V. K. Prasad, M. N. S. Ramanath and B. M. Kumari, “Security in MANET Routing Tables with FMNK Cryptography Model,” Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, Visakhapatnam, 2015, pp. 1-7. doi: 10.1109/EESCO.2015.7254021
Abstract: A MANET (Mobile Ad hoc Network) is an assembly of mobile nodes that exchange information with each other over multi-hop wireless links. Because nodes move frequently, a MANET has no fixed infrastructure; as a result it is unsafe and prone to various attacks. Each node has a finite communication range and behaves as a router to forward packets to other nodes. DoS (Denial-of-Service) and flooding attacks are serious threats that endanger data safety in MANETs. The main problem in a MANET is that the routing tables, which each node maintains with its neighbour nodes' information for dynamic topology creation, are insecure. To overcome this drawback, an optimized FMNK (Fingerprint Minutiae point Non-invertible Key) algorithm using biometric image models is introduced, which can afford security and authentication. An optimized FMNK-SSL-AES-256 (Fingerprint non-invertible key Secure Socket Layer-Advanced Encryption Standard) encryption algorithm is introduced to encrypt the information with a key and thereby increase security in the MANET. Once the algorithm was developed, the routing tables were protected with the proposed model, and message communication among nodes was shown through the NS2 tool, demonstrating minimum-cost communication from the source node to the destination via dynamic calculation of a Euclidean distance model.
Keywords: cryptography; mobile ad hoc networks; telecommunication network routing; telecommunication network topology; telecommunication security; DOS; Euclidian distance model; FMNK cryptography model; MANET routing table security; NS2 tool; biometric image models; denial-of-service attack; dynamic topology creation; finger print minutiae point noninvertible key algorithm; finger print noninvertable key secure socket layer-advance encryption standard encryption algorithm; finite interaction range; flooding attack; message communication; mobile ad hoc network; movable nodes; multihop wireless associates; neighbour node information; optimized FMNK-SSL-AES-256; starting node; Ciphers; Encryption; Fingerprint recognition; Mobile ad hoc networks; Peer-to-peer computing; FMNK; MANET; SSL-AES-256; crossover operator; fingerprint; minutiae points (ID#: 16-10992)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7254021&isnumber=7253613
S. Sicari, A. Rizzardi, L. A. Grieco and A. Coen-Porisini, “GoNe: Dealing with Node Behavior,” Consumer Electronics - Berlin (ICCE-Berlin), 2015 IEEE 5th International Conference on, Berlin, 2015, pp. 358-362. doi: 10.1109/ICCE-Berlin.2015.7391280
Abstract: The detection of malicious nodes still represents a challenging task in wireless sensor networks. This issue is particularly relevant in data-sensitive services. In this work a novel scheme, namely GoNe, is proposed, able to enforce data security and privacy by leveraging a machine learning technique based on self-organizing maps. GoNe provides an assessment of node reputation scores on a dynamic basis and in the presence of multiple kinds of malicious attacks. Its performance has been extensively analyzed through simulations, which demonstrate its effectiveness in terms of node behavior classification, attack identification, data accuracy, energy efficiency and signaling overhead.
Keywords: data privacy; learning (artificial intelligence); self-organising feature maps; telecommunication computing; wireless sensor networks; GoNe; attack identification; data accuracy; data privacy; data security; energy efficiency; machine learning technique; malicious nodes; node behavior classification; node reputation scores; self organizing maps; wireless sensor networks; Cryptography; Data models; Data privacy; Engines; Neurons; Wireless sensor networks; Reputation; Security; Wireless Sensor Network (ID#: 16-10993)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7391280&isnumber=7391194
P. Shen, K. Guo, M. Xiao and Q. Xu, “Spy: A QoS-Aware Anonymous Multi-Cloud Storage System Supporting DSSE,” Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, Shenzhen, 2015, pp. 951-960. doi: 10.1109/CCGrid.2015.88
Abstract: Constructing an overlay storage system based on multiple personal cloud storages is a desirable technique and novel idea for cloud storage. Existing designs provide the basic functions with some customized features. Unfortunately, some important issues have been ignored, including privacy protection, QoS and cipher-text search. In this paper, we present Spy, our design for an anonymous storage overlay network on multiple personal cloud storages, supporting flexible QoS awareness and cipher-text search. We reform the original Tor protocol by extending the command set and adding a tail part to the Tor cell, which makes coordination among proxy servers possible while preserving anonymity. Based on this, we propose a flexible user-defined QoS policy and employ a Dynamic Searchable Symmetric Encryption (DSSE) scheme to support secure cipher-text search. Extensive security analysis proves that privacy is preserved, and experiments show how different QoS policies work according to different security requirements.
Keywords: cloud computing; cryptography; data privacy; information retrieval; quality of service; storage management; DSSE; QoS-aware anonymous multicloud storage system; Spy; Tor cell; Tor protocol; anonymous storage overlay network; cipher-text search; dynamic searchable symmetric encryption scheme; flexible QoS awareness; flexible user-defined QoS policy; multiple personal cloud storage; multiple personal cloud storages; overlay storage system; privacy protection; security requirements; Cloud computing; Encryption; Indexes; Quality of service; Servers; Cipher-text search; PCS; Privacy Preserving; QoS (ID#: 16-10994)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152581&isnumber=7152455
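The flavor of searchable symmetric encryption used in such systems: the client derives deterministic keyword tokens with a secret key, so the server can match queries against an encrypted inverted index without ever learning the keywords, and new documents can be added dynamically. The sketch below is a hypothetical minimal illustration; it omits payload encryption and is not the paper's DSSE scheme:

```python
import hashlib
import hmac

KEY = b"client-secret-key"          # hypothetical key held only by the client

def token(keyword: str) -> str:
    """Deterministic search token: the server never sees the plaintext keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

class Server:
    """Honest-but-curious server storing an encrypted inverted index."""
    def __init__(self):
        self.index = {}

    def add(self, tok, doc_id):      # dynamic update: the 'D' in DSSE
        self.index.setdefault(tok, set()).add(doc_id)

    def search(self, tok):
        return self.index.get(tok, set())

server = Server()
for doc_id, words in {"d1": ["cloud", "qos"], "d2": ["cloud"]}.items():
    for w in words:
        server.add(token(w), doc_id)

print(sorted(server.search(token("cloud"))))   # ['d1', 'd2']
print(sorted(server.search(token("tor"))))     # []
```

Real DSSE constructions additionally encrypt the document identifiers and hide update patterns; the point here is only the token mechanism that lets an untrusted store answer keyword queries over ciphertext.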
W. Sun, X. Liu, W. Lou, Y. T. Hou and H. Li, “Catch You If You Lie to Me: Efficient Verifiable Conjunctive Keyword Search over Large Dynamic Encrypted Cloud Data,” Computer Communications (INFOCOM), 2015 IEEE Conference on, Kowloon, 2015, pp. 2110-2118. doi: 10.1109/INFOCOM.2015.7218596
Abstract: Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.
Keywords: cloud computing; cryptography; trusted computing; computation outsourcing model; data users; dynamic encrypted cloud data; efficient verifiable conjunctive keyword search; encrypted data search scheme; file collection size; public trusted authority; result verification mechanism; semitrusted server; universally composable security; Conferences; Cryptography; Indexes; Keyword search; Polynomials; Servers (ID#: 16-10995)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218596&isnumber=7218353
M. Feiri, R. Pielage, J. Petit, N. Zannone and F. Kargl, “Pre-Distribution of Certificates for Pseudonymous Broadcast Authentication in VANET,” Vehicular Technology Conference (VTC Spring), 2015 IEEE 81st, Glasgow, 2015, pp. 1-5. doi: 10.1109/VTCSpring.2015.7146029
Abstract: In the context of vehicular networks, certificate management is challenging because of the dynamic topology and privacy requirements. In this paper we propose a technique that combines certificate omission and certificate pre-distribution in order to reduce communication overhead and to minimize cryptographic packet loss. Simulation results show that this technique is useful to improve awareness quality during pseudonym changes.
Keywords: cryptography; telecommunication security; vehicular ad hoc networks; VANET; certificate management; certificate pre-distribution; communication overhead; cryptographic packet loss; pseudonymous broadcast authentication; vehicular networks; Bandwidth; Cryptography; Privacy; Vehicles; Vehicular ad hoc networks; Wireless communication (ID#: 16-10996)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146029&isnumber=7145573
W. C. Hsieh, C. C. Wu and Y. W. Kao, “A Study of Android Malware Detection Technology Evolution,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 135-140. doi: 10.1109/CCST.2015.7389671
Abstract: According to a report from the International Data Corporation (IDC), Android dominated the worldwide smartphone operating system (OS) market with a 78% share in the first quarter of 2015; likewise, F-Secure reports that 99% of the new smartphone threats that emerged in the first quarter of 2014 were designed for Android. In recent years, many kinds of malware, such as botnets, backdoors, rootkits, and Trojans, have begun attacking smartphones to commit crimes such as fraud, service misuse, information stealing, and gaining root access. These threats share characteristics such as constantly scanning for Bluetooth (shortening the device's battery life), accessing GPS to send position information over the Internet, and jamming the communication between the device and the base station to paralyze the wireless network. Based on these characteristics, many detection methods have been proposed, such as behavior checking, permission-based analysis, and static analysis, and applied in malware detection and anti-virus software. However, advanced attackers can use techniques such as emulator detection, packers, and code obfuscation to prevent their attacks from being detected. This paper reviews the malware evolution that makes detection ever more difficult, as well as the development of the malware detection software that makes smartphones safer. Finally, our survey gives insight into the trend of malware evolution to help increase the detection rate of unknown malware.
Keywords: Android (operating system); invasive software; program diagnostics; smart phones; Android OS; Android malware detection technology evolution; Bluetooth; F-Secure; IDC; International Data Corporation; anti-virus software; backdoor; base station; behavior checking; botnet; code obfuscation; detection method; emulator detection; fraud; information stealing; malware detection software; malware evolution; operating system; packer; permission-based analysis; root access; rootkits; service misuse; smart phone threats; static analysis; trojans; wireless network; Cryptography; Mobile communication; Operating systems; Smart phones; Trojan horses; Android malware; Behavioral Analysis; Dynamic Analysis; Static Analysis; anti-virus (ID#: 16-10997)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389671&isnumber=7389647
M. Alajeely, A. Ahmad and R. Doss, “Malicious Node Traceback in Opportunistic Networks Using Merkle Trees,” 2015 IEEE International Conference on Data Science and Data Intensive Systems, Sydney, NSW, 2015, pp. 147-152. doi: 10.1109/DSDIS.2015.86
Abstract: Security is a major challenge in Opportunistic Networks (OppNets) because of their characteristics: open medium, dynamic topology, no centralized management, and no clear lines of defense. A packet dropping attack is one of the major security threats in OppNets, since neither source nodes nor destination nodes know where or when a packet will be dropped. In this paper, we present a malicious node detection mechanism against a special type of packet dropping attack in which the malicious node drops one or more packets and injects new fake packets in their place. Our detection and traceback mechanism is highly accurate: each node can detect and then trace back malicious nodes using the Merkle tree hashing technique. Our defense technique has two stages: the first detects the attack, and the second locates the malicious nodes. We have compared our approach with acknowledgement-based and network-coding-based mechanisms, which are well-known approaches in the literature. Simulation results show that this robust mechanism achieves a very high accuracy and detection rate.
Keywords: computer network security; cryptography; Merkle tree hashing technique; acknowledgement based mechanisms; destination nodes; malicious node traceback; malicious nodes detection mechanism; networks coding based mechanism; opportunistic networks; packet dropping attack; source nodes; Australia; Electronic mail; Information technology; Network coding; Routing; Security; Wireless communication; Denial-of-Service; Malicious Node Detection; OppNets; Opportunistic Networks; Packet Dropping Attacks; Security (ID#: 16-10998)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396496&isnumber=7396460
K. Fan, N. Huang, Y. Wang, H. Li and Y. Yang, “Secure and Efficient Personal Health Record Scheme Using Attribute-Based Encryption,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 111-114. doi: 10.1109/CSCloud.2015.40
Abstract: With the rapid development of cloud computing, the personal health record (PHR) has recently attracted great attention from researchers worldwide. However, a PHR, which is often outsourced to storage at a third party, raises many security and efficiency issues, so secure and efficient PHR schemes that protect users' privacy are of great significance. In this paper, we present a secure and efficient personal health record scheme called SE-PHR. In the SE-PHR scheme, we logically divide users into a personal domain (PSD) and a public domain (PUD). In the PSD, key-aggregate encryption (KAE) is employed. For users of the PUD, we use outsourceable multi-authority attribute-based encryption (MA-ABE) to largely eliminate the overhead for users and to support efficient attribute revocation without updating the user's private key. Our scheme also presents a new algorithm that enables dynamic modification of access policies. Function and performance testing results show the security and efficiency of the proposed SE-PHR.
Keywords: cloud computing; cryptography; data privacy; electronic health records; MA-ABE; SE-PHR; key-aggregate encryption; multiauthority attribute-based encryption; personal domain; personal health record; public domain; security issue; user privacy; Cloud computing; Encryption; Heuristic algorithms; Servers; Transforms; Cloud Computing; Data Sharing; Personal health record; Privacy Protection (ID#: 16-10999)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371468&isnumber=7371418
S. D. Taru and V. B. Maral, “Object Oriented Accountability Approach in Cloud for Data Sharing with Patchy Image Encryption,” Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, Kochi, 2015, pp. 1688-1693. doi: 10.1109/ICACCI.2015.7275856
Abstract: Cloud computing presents a new delivery and consumption model for Internet-based IT services, in which highly scalable, virtualized resources are provided on demand. It offers the flexibility to deploy applications at lower cost while increasing business agility. A key feature of cloud services, however, is that users' data are often processed on remote machines that users neither own nor operate, so users can lose control of their own confidential data. Despite all the advantages of the cloud, this remains a challenge and a barrier to its large-scale adoption. To address this problem, we present an object-oriented approach that performs automated logging so that any access to a user's data triggers authentication, using the decentralized information accountability framework called Cloud Information Accountability (CIA) [1]. We use the programmable capabilities of JAR (Java Archive) files to create a dynamic travelling object containing the user's data. To strengthen distributed data security, we apply a chaos-based image encryption technique specific to image files: a patchy encryption scheme based on pixel shuffling, in which the randomness of the chaos is used to scramble the positions of the image's pixels.
Keywords: Java; chaos; cloud computing; cryptography; image coding; message authentication; object-oriented programming; CIA; JAR; JAVA archive file; automated logging mechanism; chaos image encryption technique; cloud information accountability; data sharing; distributed data security; object oriented accountability approach; pixel shuffling; user authentication; Authentication; Chaos; Ciphers; Cloud computing; Encryption; Accountability; Chaos encryption; Cloud computing; Data sharing; Logging mechanism (ID#: 16-11000)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275856&isnumber=7275573
M. Portnoi and C. C. Shen, “Loc-Auth: Location-Enabled Authentication Through Attribute-Based Encryption,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 89-93. doi: 10.1109/ICCNC.2015.7069321
Abstract: Traditional user authentication involves entering a username and password into a system. Strong authentication security demands, among other requirements, long and frequently hard-to-remember passwords. Two-factor authentication aids security, even though, as a side effect, it might worsen the user experience. We depict a mobile sign-on scheme that benefits from the dynamic relationship between a user's attributes, the service the user wishes to utilize, and location (where the user is, and what services are available there) as an authentication factor. We demonstrate our scheme using Bluetooth Low Energy beacons for location awareness and the expressiveness of attribute-based encryption to capture and leverage the described relationship. Bluetooth Low Energy beacons broadcast encrypted messages with encoded access policies; within range of the beacons, a user with the appropriate attributes can decrypt the broadcast message and obtain parameters that allow a short or simplified login.
Keywords: Bluetooth; authorisation; cryptography; mobile computing; Bluetooth low energy beacons; Loc-Auth; attribute-based encryption; authentication security; encoded access policies; encrypted messages; hard-to-remember passwords; location awareness; location-enabled authentication; mobile sign-on scheme; two-factor authentication; user authentication; user experience; Conferences; Cryptography; Decision support systems; Handheld computers; Information security; Radio frequency; authentication; bluetooth low energy; security (ID#: 16-11001)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069321&isnumber=7069279
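Attribute-based encryption enforces access policies cryptographically; the sketch below shows only the plain-logic policy semantics, not the cryptography, and the attribute names are hypothetical. It illustrates the kind of conjunction-of-disjunctions policy a beacon's encrypted broadcast might carry:

```python
def satisfies(user_attrs, policy):
    """Policy is an AND of OR-clauses: every clause must contain
    at least one attribute the user holds."""
    return all(any(attr in user_attrs for attr in clause) for clause in policy)

# (staff OR faculty) AND location:lobby -- in an ABE scheme, only a key for
# attributes satisfying the policy could decrypt the beacon's broadcast and
# recover the short-login parameters.
policy = [{"staff", "faculty"}, {"location:lobby"}]
can_decrypt = satisfies({"faculty", "location:lobby"}, policy)
denied = satisfies({"staff"}, policy)   # not within the location clause
```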
J. Chen, Q. Yuan, G. Xue and R. Du, “Game-Theory-Based Batch Identification of Invalid Signatures in Wireless Mobile Networks,” Computer Communications (INFOCOM), 2015 IEEE Conference on, Kowloon, 2015, pp. 262-270. doi: 10.1109/INFOCOM.2015.7218390
Abstract: Digital signatures have been widely employed in wireless mobile networks to ensure the authenticity of messages and the identity of nodes. A paramount concern in signature verification is reducing the verification delay to ensure network QoS, and to address this issue researchers have proposed batch cryptography. However, most existing works focus on designing batch verification algorithms without sufficiently considering the impact of invalid signatures, under which the performance of batch verification can drop dramatically. In this paper, we propose a Game-theory-based Batch Identification Model (GBIM) for wireless mobile networks, enabling nodes to find invalid signatures with optimal delay under heterogeneous and dynamic attack scenarios. Specifically, we design an incomplete-information game model between a verifier and its attackers, and prove the existence of a Nash Equilibrium, to select the dominant algorithm for identifying invalid signatures. Moreover, we propose an auto-match protocol to optimize identification algorithm selection when the attack strategies can be estimated from history information. Comprehensive simulation results demonstrate that GBIM can identify invalid signatures more efficiently than existing algorithms.
Keywords: cryptography; digital signatures; game theory; mobile communication; quality of service; telecommunication security; GBIM; Nash Equilibrium; QoS network; batch cryptography technology; batch identification; batch verification; digital signature; dynamic attack; game theory based batch identification model; invalid signatures; message authentication; signature verification; wireless mobile networks; Algorithm design and analysis; Games; Heuristic algorithms; Magnetic resonance imaging; Mobile communication; Mobile computing; Testing; Batch identification (ID#: 16-11002)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218390&isnumber=7218353
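One classical family of identification algorithms a verifier might select among in such a model is generic divide-and-conquer over the batch. This is a toy sketch with a stubbed verifier, not the paper's GBIM; real batch verification would aggregate the cryptographic checks rather than loop:

```python
def batch_valid(sigs, verify):
    # Stand-in for a single aggregate cryptographic check over the batch.
    return all(verify(s) for s in sigs)

def find_invalid(sigs, verify):
    """Locate invalid signatures by recursively halving the batch."""
    if batch_valid(sigs, verify):
        return []
    if len(sigs) == 1:
        return list(sigs)
    mid = len(sigs) // 2
    return find_invalid(sigs[:mid], verify) + find_invalid(sigs[mid:], verify)

# Hypothetical batch: strings stand in for signatures; "bad*" ones are invalid.
batch = ["s1", "s2", "bad3", "s4", "bad5", "s6", "s7", "s8"]
invalid = find_invalid(batch, verify=lambda s: not s.startswith("bad"))
```

With few invalid signatures this costs far fewer aggregate checks than verifying each signature individually, which is why the choice of identification algorithm matters once attackers control how many invalid signatures appear.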
Game Theoretic Security 2015
Game theory has historically been the province of social sciences such as economics, political science, and psychology. It has since become an umbrella term for the logical side of science that includes both human and non-human actors such as computers. It has been used extensively in wireless networks research to develop an understanding of stable operating points for networks made up of autonomous or selfish nodes, where the nodes are treated as players and utility functions are often chosen to correspond to achieved connection rate or similar technical metrics. In security, the game framework is used to anticipate and analyze the concurrent interactions of intruders and administrators within a network. The research cited here was presented in 2015.
L. Tom, “Game-Theoretic Approach Towards Network Security: A Review,” Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, Nagercoil, 2015, pp. 1-4. doi: 10.1109/ICCPCT.2015.7159364
Abstract: Advances in information technology have increased the use of the Internet, and with its pervasiveness, network security has become a critical issue in every organization. Network attacks result in massive losses in terms of money, reputation, and data confidentiality. Reducing or eliminating the negative effects of any intrusion is a fundamental issue of network security. The network security problem can be represented as a game between the attacker (or intruder) and the network administrator, in which both players try to maximize their own outcomes: the administrator tries to defend against the attack, and the attacker tries to overcome the defense and compromise the system. Network security can thus be enforced using a game-theoretic approach. This paper presents a review of game-theoretic solutions developed for network security.
Keywords: Internet; game theory; information technology; security of data; ubiquitous computing; game-theoretic approach; network administration; network security; pervasiveness; Communication networks; Computational modeling; Games; Intrusion detection; Nash equilibrium; attack defence (ID#: 16-11069)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159364&isnumber=7159156
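The attacker-defender game described in the review above can be made concrete with the smallest possible instance: a 2x2 zero-sum game solved for its mixed-strategy Nash equilibrium. The payoff numbers are illustrative, and the closed form assumes a fully mixed (interior) equilibrium, i.e., no saddle point:

```python
def mixed_equilibrium_2x2(A):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game.
    A[i][j] is the defender's (row player's) payoff; the attacker
    (column player) receives the negation. Assumes no saddle point."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom              # P(defender guards asset 0)
    q = (d - b) / denom              # P(attacker strikes asset 0)
    value = (a * d - b * c) / denom  # defender's expected payoff
    return p, q, value

# Defender guards asset 0 or asset 1; a guarded attack is blocked (payoff 0),
# an unguarded attack costs the defender the struck asset's value (5 or 3).
A = [[0, -5], [-3, 0]]
p, q, value = mixed_equilibrium_2x2(A)
```

Each player mixes to make the opponent indifferent: here the defender guards the more valuable asset less often than naive intuition suggests (p = 3/8), precisely because the attacker also targets it more often (q = 5/8 on the cheaper asset).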
D. K. Tosh, S. Sengupta, S. Mukhopadhyay, C. A. Kamhoua and K. A. Kwiat, “Game Theoretic Modeling to Enforce Security Information Sharing Among Firms,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 7-12. doi: 10.1109/CSCloud.2015.81
Abstract: A robust CYBersecurity information EXchange (CYBEX) infrastructure is envisioned to protect firms from future cyber attacks via collaborative threat intelligence sharing, which might be difficult to achieve through individual effort alone. The executive order from the U.S. federal government clearly encourages firms to share their cybersecurity breach and patch-related information with other federal and private firms to strengthen both their own and the nation's security infrastructure. In this paper, we present a game-theoretic framework to investigate the economic benefits of cyber-threat information sharing and analyze the impacts and consequences of not participating in the game of information exchange. We model the information exchange framework as a distributed non-cooperative game among the firms and investigate the implications of information sharing and security investments. The proposed incentive model ensures, and self-enforces, that firms share their breach information truthfully to maximize their gross utility. Theoretical analysis of the incentive framework finds the conditions under which firms' net benefit from sharing security information and investment can be maximized. Numerical results verify that the proposed model promotes such sharing, which also helps reduce their total security technology investment.
Keywords: business data processing; electronic data interchange; game theory; security of data; breach information; cyber-threat information sharing; distributed noncooperative game; firms net benefit; game theoretic framework; gross utility; incentive model; information exchange framework; security information sharing; security investments; Computer security; Games; Information exchange; Information management; Investment; Numerical models; CYBEX; Cyber-threat intelligence; Game theory (ID#: 16-11070)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371431&isnumber=7371418
R. K. Abercrombie and F. T. Sheldon, “Security Analysis of Smart Grid Cyber Physical Infrastructures Using Game Theoretic Simulation,” Computational Intelligence, 2015 IEEE Symposium Series on, Cape Town, 2015, pp. 455-462. doi: 10.1109/SSCI.2015.74
Abstract: Cyber physical computing infrastructures typically consist of a number of interconnected sites including both cyber and physical components. In this analysis we studied the various types and frequency of attacks that may be levied on smart grid cyber physical systems. Our information security analysis utilized a dynamic Agent Based Game Theoretic (ABGT) simulation. Such simulations can be verified using a closed form game theory analytic approach to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our study on the electric sector failure scenarios from the NESCOR Working Group Study. We extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.
Keywords: cyber-physical systems; game theory; power engineering computing; power system security; security of data; smart power grids; ABGT simulation; agent based game theoretic simulation; closed form game theory analytic approach; electric sector failure; electric sector functional domain; information assets; information security analysis; rationalized game theoretic rules; security analysis; smart grid cyber physical computing infrastructures; Analytical models; Computer security; Control systems; Games; Government; Smart grids (ID#: 16-11071)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7376647&isnumber=7376572
L. Kwiat, C. A. Kamhoua, K. A. Kwiat, J. Tang and A. Martin, “Security-Aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach,” Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 556-563. doi: 10.1109/CLOUD.2015.80
Abstract: With the growth of cloud computing, many businesses, both small and large, are opting to use cloud services, compelled by a great potential for cost savings. This is especially true of public cloud computing, which allows for quick, dynamic scalability without much overhead or long-term commitment. However, one of the largest deterrents to using cloud services is the inherent and unknown danger of a shared platform such as the hypervisor: an attacker can attack a virtual machine (VM) and then go on to compromise the hypervisor, and if successful, all virtual machines on that hypervisor can become compromised. This is the problem of negative externalities, where the security of one player affects the security of another. This work shows that there are multiple Nash equilibria for the public cloud security game. It also demonstrates that we can make the players' Nash equilibrium profile independent of the probability that the hypervisor is compromised, reducing the role externality plays in calculating the equilibrium. Finally, by using our allocation method, the negative externality imposed on other players can be brought to a minimum compared to other common VM allocation methods.
Keywords: cloud computing; game theory; probability; security of data; virtual machines; cloud services; game theoretic approach; multiple Nash equilibria; negative externality; public cloud computing; public cloud security game; security-aware virtual machine allocation method; Cloud computing; Games; Nash equilibrium; Resource management; Security; Virtual machine monitors; Virtual machining; Cloud Computing; cyber security; externality; virtual machine allocation (ID#: 16-11072)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214090&isnumber=7212169
P. Aggarwal, Z. Maqbool, A. Grover, V. S. C. Pammi, S. Singh and V. Dutt, “Cyber Security: A Game-Theoretic Analysis of Defender and Attacker Strategies in Defacing-Website Games,” Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, London, 2015, pp. 1-8. doi: 10.1109/CyberSA.2015.7166127
Abstract: The rate at which cyber-attacks are increasing globally paints a terrifying picture. The main dynamics of such attacks can be studied in terms of the actions of attackers and defenders in a cyber-security game, yet little research has examined such interactions. In this paper, we use behavioral game theory to investigate the role of actions taken by attackers and defenders in a simulated cyber-attack scenario: defacing a website. We use a reinforcement learning (RL) model to represent a simulated attacker and defender in a 2×4 cyber-security game in which each of the 2 players can take up to 4 actions. Pairs of model participants were computationally simulated across 1000 simulations, each pair playing at most 30 rounds. The attacker's goal was to deface the website; the defender's goal was to prevent this. Our results show that the actions taken by both attackers and defenders are a function of the attention each role pays to its recently obtained outcomes: an attacker who pays more attention to recent outcomes is more likely to perform attack actions. We discuss the implications of our results for the evolution of the dynamics between attackers and defenders in cyber-security games.
Keywords: Web sites; computer crime; computer games; game theory; learning (artificial intelligence); RL model; attacker strategies; attacks dynamics; behavioral game theory; cyber-attacks; cyber-security game; defacing Website games; defender strategies; game-theoretic analysis; reinforcement learning; Cognitive science; Computational modeling; Computer security; Cost function; Games; Probabilistic logic; attacker; cognitive modeling; cyber security; defender; reinforcement-learning model (ID#: 16-11073)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166127&isnumber=7166109
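A minimal version of the reinforcement-learning attacker used in such simulations pairs softmax action selection with an incremental value update. The payoff values, learning rate, and temperature below are hypothetical, not the paper's fitted parameters:

```python
import math
import random

def softmax_choice(q, tau=0.25):
    """Sample an action index with probability proportional to exp(Q/tau)."""
    weights = [math.exp(v / tau) for v in q]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(q) - 1

def play_rounds(n_rounds=30, alpha=0.3, seed=1):
    """An attacker with 4 actions learns from noisy payoffs over repeated rounds."""
    random.seed(seed)
    q = [0.0] * 4
    payoff = [0.1, 0.4, 0.9, 0.2]               # hypothetical expected payoffs
    for _ in range(n_rounds):
        action = softmax_choice(q)
        reward = payoff[action] + random.uniform(-0.05, 0.05)
        q[action] += alpha * (reward - q[action])  # move Q toward the reward
    return q

q_values = play_rounds(n_rounds=300)
```

The higher the weight on recent rewards (alpha), the more the simulated attacker's behavior tracks its latest outcomes, which is the attention effect the abstract describes.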
G. Rontidis, E. Panaousis, A. Laszka, T. Dagiuklas, P. Malacaria and T. Alpcan, “A Game-Theoretic Approach for Minimizing Security Risks in the Internet-of-Things,” Communication Workshop (ICCW), 2015 IEEE International Conference on, London, 2015, pp. 2639-2644. doi: 10.1109/ICCW.2015.7247577
Abstract: In the Internet-of-Things (IoT), users might share part of their data with different IoT prosumers, which offer applications or services. Within this open environment, the existence of an adversary introduces security risks. These can be related, for instance, to the theft of user data, and they vary depending on the security controls that each IoT prosumer has put in place. To minimize such risks, users might seek an “optimal” set of prosumers. However, assuming the adversary has the same information as the users about the existing security measures, he can then devise which prosumers will be preferable (e.g., with the highest security levels) and attack them more intensively. This paper proposes a decision-support approach that minimizes security risks in the above scenario. We propose a non-cooperative, two-player game entitled Prosumers Selection Game (PSG). The Nash Equilibria of PSG determine subsets of prosumers that optimize users' payoffs. We refer to any game solution as the Nash Prosumers Selection (NPS), which is a vector of probabilities over subsets of prosumers. We show that when using NPS, a user faces the least expected damages. Additionally, we show that according to NPS every prosumer, even the least secure one, is selected with some non-zero probability. We have also performed simulations to compare NPS against two different heuristic selection algorithms. The former is proven to be approximately 38% more effective in terms of security-risk mitigation.
Keywords: Internet of Things; game theory; security of data; Nash equilibrium; Nash prosumers selection; decision support; noncooperative game; optimal prosumer set; prosumers selection Game; security risk minimization; two player game; user data theft; Cascading style sheets; Conferences; Game theory; Games; Internet of things; Security; Silicon (ID#: 16-11074)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247577&isnumber=7247062
M. Ghorbani and M. R. Hashemi, “Networked IDS Configuration in Heterogeneous Networks — A Game Theory Approach,” Electrical Engineering (ICEE), 2015 23rd Iranian Conference on, Tehran, 2015, pp. 1000-1005. doi: 10.1109/IranianCEE.2015.7146357
Abstract: Intrusion Detection Systems (IDSs) are an essential component of any network security architecture. Their importance is amplified in today's heterogeneous and complex networks, where a variety of network assets are constantly subject to a large number of attacks, and as network traffic grows, so does the importance of proper IDS configuration. For instance, the more detection libraries are enabled, the more attacks can be expected to be detected; but more libraries also increase computational complexity, which may reduce system performance. There is always a tradeoff between the security enforcement level and system performance. Many papers in the literature have used game theory to address this problem, including different factors in their models. In this paper, we propose a game-theoretic approach to determining networked IDS configuration in heterogeneous networks. We tune the IDS configuration, including library selection, based on the type and value of the protected network assets, with the interdependencies between assets considered in the model. Unlike most existing methods, the proposed game model treats the impact of each particular attack as different for each asset. The problem is modeled as a non-cooperative multi-person nonzero-sum stochastic game, and the existence of a stationary Nash equilibrium for this game is demonstrated.
Keywords: computational complexity; computer network security; game theory; stochastic processes; telecommunication traffic; complex networks; detection libraries; game model; game theory approach; heterogeneous networks; intrusion detection systems; library selection; network assets; network security architecture; network traffic; networked IDS configuration; noncooperative multiperson nonzero-sum stochastic game; security enforcement level; stationary Nash equilibrium; Conferences; Decision support systems; Electrical engineering; IDS; Nash equilibrium; Network Security; Stochastic Games (ID#: 16-11075)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146357&isnumber=7146167
S. Wei et al., “On Effectiveness of Game Theoretic Modeling and Analysis Against Cyber Threats for Avionic Systems,” Digital Avionics Systems Conference (DASC), 2015 IEEE/AIAA 34th, Prague, 2015, pp. 4B2-1-4B2-13. doi: 10.1109/DASC.2015.7311417
Abstract: Cyber-attack defense requires network security situation awareness through distributed collaborative monitoring, detection, and mitigation. Developing and demonstrating innovative, effective situational-awareness techniques for avionics has grown in importance over the last decade. In this paper, we first conducted game-theoretic modeling and analysis to study the interaction between an adversary and a defender. We then implemented the game-theoretic analysis on an Avionics Sensor-based Defense System (ASDS), which consists of distributed passive and active network sensors. The trade-off between defense and attack strategies was studied via an existing tool for game theory (Gambit). To further enhance the defense and mitigate attacks, we designed and implemented a multi-functional web display to integrate the game-theoretic analysis. Our simulation validates that game-theoretic modeling and analysis can help the ASDS adapt its detection and response strategies to deal with various cyber threats efficiently and dynamically.
Keywords: aerospace computing; avionics; distributed sensors; game theory; security of data; ASDS; Gambit; active network sensors; avionic systems; avionics sensor-based defense system; cyber threats; cyber-attack defense; distributed collaborative detection; distributed collaborative mitigation; distributed collaborative monitoring; distributed passive network sensors; game theoretic modeling; multifunctional Web display; network security situation awareness techniques; Monitoring (ID#: 16-11076)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7311417&isnumber=7311321
L. Luu, R. Saha, I. Parameshwaran, P. Saxena and A. Hobor, “On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining,” Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, Verona, 2015, pp. 397-411. doi: 10.1109/CSF.2015.34
Abstract: Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin crypto currency which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the “block withholding attack”. This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars’ worth in months. The equilibrium state is a mixed strategy -- that is -- in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.
Keywords: cryptographic protocols; data mining; electronic money; game theory; Bitcoin crypto currency; Bitcoin network; Bitcoin pool; Internet; attack strategy; bitcoin pooled mining; block withholding attack; competitive distributed computation; computational power splitting game; desktop machine; financial reward; game-theoretic analysis; mixed strategy; pool protocol; public pool; Analytical models; Computational modeling; Cryptography; Games; Online banking; Protocols; Bitcoin; Cryptocurrency; Distributed computation (ID#: 16-11077)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243747&isnumber=7243713
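The long-run profitability argument in this abstract can be illustrated with a toy payoff model. The parameters and the simplified pro-rata payout below are illustrative assumptions, not the paper's formal computational power splitting game:

```python
# Hedged sketch (not the paper's model): expected long-run reward rate for a
# miner with hash fraction `a` who withholds blocks inside a victim pool that
# honestly controls fraction `p` of total hash power. Pools pay members pro
# rata to submitted shares; a withholder submits shares but discards full
# solutions, so the pool's block rate drops while the attacker still collects
# a cut of its (reduced) income.

def honest_reward(a):
    """Solo/honest mining: reward rate equals hash-power fraction."""
    return a

def withholding_reward(a, p, infiltration):
    """Reward when `infiltration` (fraction of a's power) mines-but-withholds
    in the victim pool, while the rest of a's power mines honestly."""
    infiltrating = a * infiltration          # power sent into the victim pool
    direct = a * (1 - infiltration)          # power still mining honestly
    # Network block rate falls because infiltrating power finds no blocks.
    effective_total = 1.0 - infiltrating
    pool_income = p / effective_total        # victim pool's share of blocks
    # The pool splits income pro rata to shares, which track raw hash power.
    attacker_pool_cut = pool_income * infiltrating / (p + infiltrating)
    return direct / effective_total + attacker_pool_cut

a, p = 0.2, 0.3
best = max(withholding_reward(a, p, x / 100) for x in range(101))
assert best > honest_reward(a)   # some infiltration level beats honest mining
```

In this simplified model, a miner with 20% of the hash power attacking a 30% pool already out-earns honest mining at modest infiltration levels, mirroring the paper's conclusion that the attack is well-incentivized in the long run.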
A. Al-Talabani, A. Nallanathan and H. X. Nguyen, “Enhancing Secrecy Rate in Cognitive Radio via Game Theory,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417698
Abstract: This paper investigates the game theory based cooperation method to optimize the PHY security in both primary and secondary transmissions of a cognitive radio network (CRN) that includes a primary transmitter (PT), a primary receiver (PR), a secondary transmitter (ST), a secondary receiver (SR) and an eavesdropper (ED). In CRNs, the primary terminals may decide to lease their bandwidth for a fraction of time to the secondary nodes in exchange for appropriate remuneration. We consider the ST as a trusted relay for primary transmission in the presence of the ED. The ST forwards the source message in a decode-and-forward (DF) fashion and, at the same time, allows part of its available power to be used to transmit an artificial noise (i.e., jamming signal) to enhance secrecy rates and avoid the employment of a separate jammer. In order to allocate power between message and jamming signals, we formulate and solve the optimization problem of maximizing the primary secrecy rate (PSR) and secondary secrecy rate (SSR). We then analyse the cooperation between the primary and secondary transmitters from a game-theoretic perspective, where we model their interaction as a Stackelberg game. Finally, we apply numerical examples to illustrate the impact of the Stackelberg game on the achievable PSR and SSR. The results show that spectrum leasing based on trading secondary access for cooperation by means of relay and jammer is a promising framework for enhancing secrecy rate in cognitive radio.
Keywords: cognitive radio; decode and forward communication; game theory; jamming; optimisation; radio receivers; radio spectrum management; radio transmitters; CRN primary transmission; CRN secondary transmission; DF fashion; PHY security optimization; Stackelberg game theory; cognitive radio network; decode-and-forward fashion; eavesdropper; jammer employment; optimization problem; primary receiver; primary secrecy rate; primary transmitter; secondary receiver; secondary secrecy rate; secondary transmitter; secrecy rate enhancement; source message forwarding; spectrum leasing; Games; Jamming; Optimization; Radio transmitters; Receivers; Relays; Security (ID#: 16-11078)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417698&isnumber=7416057
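The leader-follower structure the authors analyze can be sketched with stylized quadratic utilities. The paper optimizes actual secrecy-rate expressions; the utilities, coefficients, and grid below are invented purely to show the backward-induction mechanics of a Stackelberg game:

```python
# Stylized Stackelberg sketch (quadratic utilities chosen for tractability,
# not the paper's rate expressions). The primary (leader) cedes a time
# fraction t; the secondary (follower) responds with jamming effort x,
# which protects the primary's secrecy but costs the secondary power.

def follower_utility(t, x):
    return t * x - 0.5 * x ** 2    # leased airtime rewards effort; power costs

def leader_utility(t, x):
    return x - t ** 2              # jamming aids secrecy; ceding time costs

grid = [i / 1000 for i in range(1001)]

def best_response(t):
    return max(grid, key=lambda x: follower_utility(t, x))   # here x*(t) = t

# Backward induction: the leader optimizes against the anticipated response.
t_star = max(grid, key=lambda t: leader_utility(t, best_response(t)))
x_star = best_response(t_star)
# Analytically t* = 1/2 and x* = 1/2 for these utilities.
```

The key point the example shows is the commitment asymmetry: the leader does not optimize against a fixed x, but against the follower's reaction curve x*(t).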
R. Muthukkumar, D. Manimegalai and A. Siva Santhiya, “Game-Theoretic Approach to Detect Selfish Attacker in Cognitive Radio Ad-Hoc Networks,” Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, Chennai, 2015, pp. 1-5. doi: 10.1109/ICSCN.2015.7219888
Abstract: In wireless communication, spectrum resources are utilized by authorities in particular fields, and most of the elements in the spectrum are idle. Cognitive radio is a promising technique for allocating the idle spectrum to unlicensed users. Security shortage is a major challenging issue in cognitive radio ad-hoc networks (CRAHNs) that degrades the performance of spectrum sensing and sharing. A selfish user pre-occupies the accessible bandwidth for its prospective usage and blocks the progress of secondary users who need the spectrum. A game-theoretic model is proposed to detect the selfish attacker in CRAHNs. Channel state information (CSI) is considered to inform each user of channel-handling information. Two strategies of the Nash equilibrium game model, pure and mixed, for secondary users (SUs) and selfish secondary users (SSUs) are investigated, and the selfish attacker is detected. Moreover, a novel belief-updating system is proposed so that the secondary users can learn the CSI of the primary user. Simulation results show that the game-theoretic model increases the detection rate of selfish attackers.
Keywords: cognitive radio; game theory; radio spectrum management; Nash Equilibrium game model; channel state information; cognitive radio ad-hoc networks; game-theoretic approach; security shortage; selfish attacker; selfish secondary users; spectrum resources; spectrum sensing; spectrum sharing; Ad hoc networks; Cognitive radio; Games; Nash equilibrium; Security; Sensors; Channel state information; Cognitive Radio; Game theoretical model (ID#: 16-11079)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219888&isnumber=7219823
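The pure/mixed Nash machinery mentioned in this abstract reduces, in the smallest case, to a 2x2 game solved by indifference conditions. A self-contained sketch with hypothetical monitoring payoffs (not the paper's model):

```python
# Minimal 2x2 sketch of the mixed-strategy equilibrium the abstract refers
# to: a monitoring secondary user (rows: Monitor / Trust) against a
# potentially selfish user (cols: Attack / Behave). Payoffs are hypothetical.

from fractions import Fraction as F

# (row payoff, column payoff) for each cell
A = {('Monitor', 'Attack'): (F(2), F(-2)),
     ('Monitor', 'Behave'): (F(-1), F(0)),
     ('Trust',   'Attack'): (F(-3), F(3)),
     ('Trust',   'Behave'): (F(0),  F(0))}

def mixed_equilibrium(A):
    """Mixed NE by indifference: each player mixes so the OPPONENT is
    indifferent between their two actions."""
    (a, e), (b, f) = A[('Monitor', 'Attack')], A[('Monitor', 'Behave')]
    (c, g), (d, h) = A[('Trust', 'Attack')],   A[('Trust', 'Behave')]
    q = (d - b) / ((a - b) - (c - d))    # P(Attack) making rows indifferent
    p = (h - g) / ((e - g) - (f - h))    # P(Monitor) making cols indifferent
    return p, q

p, q = mixed_equilibrium(A)
```

With these payoffs the selfish user attacks with probability 1/6 and the monitor checks with probability 3/5; at those mixtures each side is exactly indifferent between its two actions, which is what makes the pair an equilibrium.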
M. H. R. Khouzani, P. Mardziel, C. Cid and M. Srivatsa, “Picking vs. Guessing Secrets: A Game-Theoretic Analysis,” Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, Verona, 2015, pp. 243-257. doi: 10.1109/CSF.2015.24
Abstract: Choosing a hard-to-guess secret is a prerequisite in many security applications. Whether it is a password for user authentication or a secret key for a cryptographic primitive, picking it requires the user to trade-off usability costs with resistance against an adversary: a simple password is easier to remember but is also easier to guess, likewise, a shorter cryptographic key may require fewer computational and storage resources but it is also easier to attack. A fundamental question is how one can optimally resolve this trade-off. A big challenge is the fact that an adversary can also utilize the knowledge of such usability vs. security trade-offs to strengthen its attack. In this paper, we propose a game-theoretic framework for analyzing the optimal trade-offs in the face of strategic adversaries. We consider two types of adversaries: those limited in their number of tries, and those that are ruled by the cost of making individual guesses. For each type, we derive the mutually-optimal decisions as Nash Equilibria, the strategically pessimistic decisions as maximin, and optimal commitments as Strong Stackelberg Equilibria of the game. We establish that when the adversaries are faced with a capped number of guesses, the user's optimal trade-off is a uniform randomization over a subset of the secret domain. On the other hand, when the attacker strategy is ruled by the cost of making individual guesses, Nash Equilibria may completely fail to provide the user with any level of security, signifying the crucial role of credible commitment for such cases. We illustrate our results using numerical examples based on real-world samples and discuss some policy implications of our work.
Keywords: game theory; message authentication; private key cryptography; Nash equilibria; attacker strategy; cryptographic key; cryptographic primitive; game-theoretic analysis; maximin; mutually-optimal decisions; optimal commitments; password; pessimistic decisions; secret guessing; secret key; secret picking; security applications; strategic adversaries; strong Stackelberg equilibria; uniform randomization; usability costs; user authentication; Authentication; Cryptography; Dictionaries; Games; Probability distribution; Usability; Attacker-Defender Games; Decision Theory; Game Theory; Maximin; Nash Equilibrium; Password Attacks; Strong Stackelberg Equilibrium; Usability-Security Trade-off (ID#: 16-11080)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243737&isnumber=7243713
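The paper's capped-guess result, that the user's optimal strategy is uniform randomization over a subset of the secret domain, can be reproduced in miniature. The cost model below is a hypothetical usability cost (e.g. longer passwords are harder to remember), not the paper's:

```python
# Sketch of the capped-guess trade-off: against an attacker limited to
# `beta` guesses, the user picks a subset of secrets and randomizes
# uniformly over it. Enlarging the subset dilutes the attacker's guesses
# but drags in costlier (less usable) secrets.

def best_uniform_subset(costs, beta, loss=1.0):
    """Pick the k cheapest secrets; a uniform distribution over them gives
    the attacker success probability min(beta/k, 1). Return the best k."""
    costs = sorted(costs)
    best_k, best_u = None, float('-inf')
    for k in range(1, len(costs) + 1):
        breach = loss * min(beta / k, 1.0)      # attacker plays top-beta
        usability = sum(costs[:k]) / k          # average cost of the subset
        u = -breach - usability
        if u > best_u:
            best_k, best_u = k, u
    return best_k, best_u

# Ten candidate secrets whose usability cost grows with complexity:
costs = [0.01 * i * i for i in range(10)]
k, u = best_uniform_subset(costs, beta=2)
```

The optimum here is an intermediate subset size: with these numbers the user should randomize uniformly over the seven cheapest secrets, not over the whole domain.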
L. Wei, A. H. Moghadasi, A. Sundararajan and A. I. Sarwat, “Defending Mechanisms for Protecting Power Systems Against Intelligent Attacks,” System of Systems Engineering Conference (SoSE), 2015 10th, San Antonio, TX, 2015, pp. 12-17. doi: 10.1109/SYSOSE.2015.7151941
Abstract: The power system forms the backbone of a modern society, and its security is of paramount importance to a nation's economy. However, the power system is vulnerable to intelligent attacks by attackers who have enough knowledge of how the power system is operated, monitored and controlled. This paper proposes a game theoretic approach to explore and evaluate strategies for the defender to protect the power systems against such intelligent attacks. First, a risk assessment is presented to quantify the physical impacts inflicted by attacks. Based upon the results of the risk assessment, this paper represents the interactions between the attacker and the defender by extending the current zero-sum game model to more generalized game models for diverse assumptions concerning the attacker's motivation. The attacker's and defender's equilibrium strategies are attained by solving these game models. In addition, a numerical illustration is demonstrated to warrant the theoretical outcomes.
Keywords: game theory; power system protection; defending mechanisms; generalized game models; intelligent attacks; risk assessment; zero-sum game model; Games; Load modeling; Nash equilibrium; Numerical models; Power systems; Power system security (ID#: 16-11081)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7151941&isnumber=7151900
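The zero-sum stage of such attacker-defender models can be solved without a linear-programming library by fictitious play, which is known to converge to the game value in zero-sum games. The loss matrix below is invented for illustration:

```python
# Fictitious-play sketch of the zero-sum base case: the attacker picks a
# target to strike, the defender picks one to harden, and each entry is
# the load lost (hypothetical numbers, not the paper's risk assessment).

# loss[i][j]: damage when the attacker hits target i and defender guards j
loss = [[0, 8, 6],
        [7, 0, 5],
        [9, 4, 0]]

n = len(loss)
atk_counts = [0] * n
def_counts = [0] * n
atk_counts[0] = def_counts[0] = 1      # arbitrary initial play

for _ in range(20000):
    # Each side best-responds to the opponent's empirical mixture so far.
    atk_payoff = [sum(loss[i][j] * def_counts[j] for j in range(n))
                  for i in range(n)]
    def_payoff = [sum(loss[i][j] * atk_counts[i] for i in range(n))
                  for j in range(n)]
    atk_counts[max(range(n), key=lambda i: atk_payoff[i])] += 1
    def_counts[min(range(n), key=lambda j: def_payoff[j])] += 1

total = sum(atk_counts)
atk_mix = [c / total for c in atk_counts]
def_mix = [c / total for c in def_counts]
value = sum(loss[i][j] * atk_mix[i] * def_mix[j]
            for i in range(n) for j in range(n))
```

The empirical mixtures approximate the equilibrium strategies, and `value` approaches the expected damage under equilibrium play (here strictly between the pure maximin of 0 and the pure minimax of 6, i.e. mixing helps both sides).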
M. Emami-Taba, M. Amoui and L. Tahvildari, “Strategy-Aware Mitigation Using Markov Games for Dynamic Application-Layer Attacks,” High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, Daytona Beach Shores, FL, 2015, pp. 134-141. doi: 10.1109/HASE.2015.28
Abstract: The targeted and destructive nature of the strategies used by attackers to break down a system requires mitigation approaches with dynamic awareness. In the domain of adaptive software security, the adaptation manager of a self-protecting software is responsible for selecting countermeasures to prevent or mitigate attacks immediately. Making the right decision in each and every situation is one of the most challenging aspects of engineering self-protecting software systems. Inspired by game theory, in this research work, we model the interactions between the attacker and the adaptation manager as a two-player zero-sum Markov game. Using this game-theoretic approach, the adaptation manager can refine its strategies in dynamic attack scenarios by utilizing what it has learned from the system's and adversary's actions. We also present how this approach can be fitted to the well-known MAPE-K architecture model. As a proof of concept, this research conducts a study on a case of dynamic application-layer denial of service attacks. The simulation results demonstrate how our approach performs while encountering different attack strategies.
Keywords: Markov processes; game theory; security of data; MAPE-K architecture model; adaptation manager; adaptive software security domain; application-layer denial of service attacks; attack strategy; dynamic application-layer attacks; dynamic attack scenario; game-theoretic approach; self-protecting software systems; strategy-aware mitigation approach; two-player zero-sum Markov game; Adaptation models; Computer crime; Game theory; Games; Adaptive Security; Dynamic Application-Layer Attacks; Game Theory; Markov Games (ID#: 16-11082)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027424&isnumber=7027398
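A two-player zero-sum Markov game of the kind used here can be solved by Shapley's value iteration: each state's value is the value of a matrix game whose entries fold in discounted continuation values. A minimal sketch with invented states, rewards, and transition probabilities (not the paper's DoS model):

```python
# Shapley-style value iteration for a tiny zero-sum Markov game: the
# attacker maximizes, the adaptation manager (defender) minimizes, and
# gamma discounts the future. All numbers below are illustrative.

def value_2x2(m):
    """Exact value of a 2x2 zero-sum matrix game (row player maximizes)."""
    (a, b), (c, d) = m
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:                       # pure saddle point
        return maximin
    return (a * d - b * c) / (a + d - b - c)     # fully mixed equilibrium

# states: 0 = Secure, 1 = Breached; rows = attacker {Attack, Idle},
# cols = defender {Patch, Monitor}; reward accrues to the attacker.
reward = [[[1, 2], [0, 0]],
          [[3, 4], [2, 3]]]
p_breach = [[[0.2, 0.6], [0.0, 0.0]],            # P(next state = Breached)
            [[0.5, 0.9], [0.1, 0.8]]]
gamma = 0.9

V = [0.0, 0.0]
for _ in range(500):
    Q = [[[reward[s][i][j]
           + gamma * (p_breach[s][i][j] * V[1]
                      + (1 - p_breach[s][i][j]) * V[0])
           for j in range(2)] for i in range(2)] for s in range(2)]
    V = [value_2x2(Q[s]) for s in range(2)]      # Shapley update per state
```

The update is a contraction for gamma < 1, so V converges; as expected, the Breached state is worth strictly more to the attacker than the Secure one.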
S. Nardi, C. Della Santina, D. Meucci and L. Pallottino, “Coordination of Unmanned Marine Vehicles for Asymmetric Threats Protection,” OCEANS 2015 - Genova, Genoa, 2015, pp. 1-7. doi: 10.1109/OCEANS-Genova.2015.7271413
Abstract: A coordination protocol for systems of unmanned marine vehicles is proposed for protection against asymmetric threats. The problem is first modelled in a game theoretic framework, as a potential game. Then an extension of existing learning algorithms is proposed to address the problem of tracking the possibly moving threat. The approach is evaluated in scenarios of different geometric complexity such as open sea, bay, and harbours. Performance of the approach is evaluated in terms of a security index that will allow us to obtain a tool for team sizing. The tool provides the minimum number of marine vehicles to be used in the system, given a desired security level to be guaranteed and the maximum threat velocity.
Keywords: autonomous underwater vehicles; game theory; learning (artificial intelligence); oceanographic techniques; protocols; security; asymmetric threat protection; coordination protocol; game theoretic framework; geometric complexity; learning algorithm; maximum threat velocity; security index; unmanned marine vehicle coordination; Games; Heuristic algorithms; Monitoring; Robot kinematics; Robot sensing systems (ID#: 16-11083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7271413&isnumber=7271237
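The potential-game formulation can be sketched with a small coverage game: give each vehicle its marginal contribution to covered threat weight, so the shared coverage function is an exact potential, and run log-linear (noisy best-response) learning, which favors potential maximizers. The grid, weights, and temperature below are illustrative assumptions, not the paper's algorithm:

```python
# Potential-game sketch: two vehicles pick patrol cells; the potential is
# the total threat weight covered, and each vehicle's utility is its
# marginal contribution ("wonderful life" utility). Log-linear learning
# plus a final best-response pass settles on the optimal assignment.

import math, random

random.seed(1)
threat = [0.05, 0.30, 0.10, 0.35, 0.20]      # threat weight per patrol cell

def coverage(assignment):
    return sum(threat[c] for c in set(assignment))     # shared potential

def marginal(assignment, i):
    others = assignment[:i] + assignment[i + 1:]
    return coverage(assignment) - coverage(others)     # agent i's utility

n_vehicles, tau = 2, 0.02
assignment = [0, 0]
for _ in range(3000):
    i = random.randrange(n_vehicles)         # one agent revises at a time
    weights = []
    for c in range(len(threat)):
        trial = assignment[:]
        trial[i] = c
        weights.append(math.exp(marginal(trial, i) / tau))
    r = random.random() * sum(weights)       # Boltzmann choice over cells
    for c, w in enumerate(weights):
        r -= w
        if r <= 0:
            assignment[i] = c
            break

# Finish with best-response passes, which converge to a Nash equilibrium
# of the potential game (here the unique one, the coverage maximizer).
for _ in range(2):
    for i in range(n_vehicles):
        assignment[i] = max(range(len(threat)),
                            key=lambda c: marginal(
                                assignment[:i] + [c] + assignment[i + 1:], i))
```

Because every unilateral utility gain raises the potential by the same amount, the dynamics climb the coverage function; the vehicles end up on the two heaviest threat cells.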
C. A. Kamhoua, A. Ruan, A. Martin and K. A. Kwiat, “On the Feasibility of an Open-Implementation Cloud Infrastructure: A Game Theoretic Analysis,” 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC), Limassol, 2015, pp. 217-226. doi: 10.1109/UCC.2015.38
Abstract: Trusting a cloud infrastructure is a hard problem, which urgently needs effective solutions. There are increasing demands for switching to the cloud in sectors such as finance, healthcare, and government, where data security protections are among the highest priorities. But most of them are left unsatisfied, due to the current cloud infrastructures' lack of provable trustworthiness. Trusted Computing (TC) technologies implement effective mechanisms for attesting to the genuine behaviors of a software platform. Integrating TC with cloud infrastructure shows a promising method for verifying the cloud's behaviors, which may in turn facilitate provable trustworthiness. However, the side effect of TC also brings concerns: exhibiting genuine behaviors might attract targeted attacks. Consequently, current Trusted Cloud proposals only integrate limited TC capabilities, which hampers effective and practical trust establishment. In this paper, we aim to justify the benefits of a fully Open-Implementation cloud infrastructure, which means that the cloud's implementation and configuration details can be inspected by both the legitimate and malicious cloud users. We applied game theoretic analysis to discover the new dynamics formed between the Cloud Service Provider (CSP) and cloud users, when the Open-Implementation strategy is introduced. We conclude that, even though an Open-Implementation cloud may facilitate attacks, vulnerabilities or misconfigurations are easier to discover, which in turn reduces the total security threats. Also, cyber threat monitoring and sharing are made easier in an Open-Implementation cloud. More importantly, the cloud's provable trustworthiness will attract more legitimate users, which increases the CSP's revenue and helps lower the price. This eventually creates a virtuous cycle, which will benefit both the CSP and legitimate users.
Keywords: cloud computing; game theory; open systems; security of data; trusted computing; CSP revenue; TC technologies; cloud details; cloud service provider; cloud trustworthiness; cyber threat monitoring; data security protections; fully open-implementation cloud infrastructure; game theoretic analysis; legitimate cloud users; malicious cloud users; open-implementation cloud; open-implementation cloud strategy; software platform; total security threats; trusted computing technologies; Cloud computing; Computational modeling; Games; Hardware; Security; Virtual machine monitors; Cloud Computing; Game Analysis; Trusted Computing (ID#: 16-11084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7431413&isnumber=7431374
Z. Wang, J. Wu, G. Cheng and Y. Jiang, “Mutine: A Mutable Virtual Network Embedding with Game-Theoretic Stochastic Routing,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417811
Abstract: In network virtualization, virtual network embedding is mostly static, mapping each virtual link onto a single predictable path and thus offering a significant advantage to adversaries seeking to eavesdrop on or intercept a certain virtual network. However, existing works on multipath embedding just focus on performance and survivability, instead of maximizing the routing unpredictability to avoid link attacks. In this paper, we present a mutable virtual network embedding framework which maps each virtual link onto a set of substrate links with a game-theoretic optimal stochastic routing policy. Firstly, we model the virtual network embedding in the context of stochastic routing with its effectiveness quantified by game theory. Then, in the node mapping algorithm, we define a security capacity matrix to evaluate substrate nodes, thus overcoming two disadvantages of the existing resource capacity metric. In the link mapping algorithm, we work out the optimal stochastic routing policies while satisfying capacity, delay and cycle-free constraints. The simulation results indicate that our framework can significantly improve the probability that packets are not attacked, with little expense of request acceptance ratio and average routing hops.
Keywords: computer network reliability; stochastic games; telecommunication network routing; virtualisation; Mutine; game theoretic stochastic routing; link mapping algorithm; multipath embedding; mutable virtual network embedding; network virtualization; optimal stochastic routing policy; substrate links; virtual network mapping; Delays; Game theory; Resource management; Routing; Security; Stochastic processes; Substrates (ID#: 16-11085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417811&isnumber=7416057
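The core of game-theoretic stochastic routing, spreading a virtual link over several substrate paths so that no single substrate link is an attractive interception target, can be shown on a toy topology. The topology and numbers are made up here; the paper additionally handles capacity, delay, and cycle-free constraints:

```python
# Minimax stochastic-routing sketch: an attacker who taps one substrate
# link succeeds with the probability that link carries the packet, so the
# router chooses path probabilities minimizing the maximum link load.

paths = [('A', 'B', 'D'), ('A', 'C', 'D'), ('A', 'B', 'C', 'D')]

def link_loads(probs):
    load = {}
    for p, path in zip(probs, paths):
        for link in zip(path, path[1:]):
            load[link] = load.get(link, 0.0) + p
    return load

def worst_link(probs):
    return max(link_loads(probs).values())   # attacker taps the best link

# Brute-force search over a coarse grid on the probability simplex.
best = min(
    ((i / 20, j / 20, 1 - i / 20 - j / 20)
     for i in range(21) for j in range(21) if i + j <= 20),
    key=worst_link,
)
```

On this topology the optimum splits traffic evenly over the two disjoint paths and avoids the third (which only re-uses already loaded links), capping the attacker's interception probability at 0.5.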
B. Li, “Secure Learning and Mining in Adversarial Environments [Extended Abstract],” 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, 2015, pp. 1538-1539. doi: 10.1109/ICDMW.2015.44
Abstract: Machine learning and data mining have become ubiquitous tools in modern computing applications, and large enterprise systems benefit from their adaptability and intelligent ability to infer patterns that can be used for prediction or decision-making. Great success has been achieved by applying machine learning and data mining to security settings for large datasets, such as in intrusion detection, virus detection, biometric identity recognition, and spam filtering. However, the strengths of the learning systems, such as the adaptability and ability to infer patterns, can also become their vulnerabilities when there are adversarial manipulations during the learning and predicting process. Considering the fact that the traditional learning strategies could potentially introduce security faults into the learning systems, robust machine learning techniques against sophisticated adversaries need to be studied, which is referred to as secure learning and mining through this abstract. Based on the goal of secure learning and mining, I aim to analyze the behavior of learning systems in adversarial environments by studying different kinds of attacks against the learning systems, and then design robust learning algorithms to counter the corresponding malicious behaviors based on the evaluation and prediction of the adversaries' goals and capabilities. The interactions between the defender and attackers are modeled as different forms of games; therefore, game theoretic analysis is applied to evaluate and predict the constraints for both participants to deal with real-world large datasets.
Keywords: data mining; game theory; learning (artificial intelligence); game theoretic analysis; machine learning; robust learning algorithm; secure learning; ubiquitous tool; Analytical models; Cost function; Data mining; Electronic mail; Games; Learning systems; Robustness; adversarial learning (ID#: 16-11086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7395856&isnumber=7395635
E. Moiseeva and M. R. Hesamzadeh, “Strategic Bidding by a Risk-Averse Firm with a Portfolio of Renewable Resources,” PowerTech, 2015 IEEE Eindhoven, Eindhoven, 2015, pp. 1-6. doi: 10.1109/PTC.2015.7232550
Abstract: The possibility of market power abuse in the systems with a high amount of flexible power sources (hydro units and energy storage) is believed to be very low. However, in practice strategic owners of these resources may limit the ramping rate whenever the price spikes occur. This is particularly relevant for the power systems with a high penetration of wind power, due to the high levels of uncertainty. In this paper we propose a bilevel game-theoretic model to investigate the effect of this type of strategic bidding. The lower-level is a security-constrained economic dispatch carried out by the system operator. The upper-level is a profit-maximization problem solved by a risk-averse company owning a varied portfolio of energy sources: energy storage, hydro power units, wind generators, as well as traditional generators. We represent the uncertainty affecting the decision making by introducing a set of wind power scenarios and variable competitors' price bids. The resulting mathematical problem with equilibrium constraints (MPEC) is recast as a single-stage mixed-integer linear program (MILP) and solved with CPLEX. In the case study we demonstrate the effect of withholding the ramp-rate on the social welfare.
Keywords: game theory; Integer programming; linear programming; power generation dispatch; power generation economics; power markets; wind power plants; CPLEX; MILP; MPEC; bilevel game-theoretic model; energy storage; hydro power units; profit-maximization problem; renewable resources portfolio; risk-averse company; risk-averse firm; security-constrained economic dispatch; single-stage mixed-integer linear program; strategic bidding; wind generators; wind power scenarios; Companies; Electricity supply industry; Generators; Portfolios; Production; Uncertainty; Wind power generation (ID#: 16-11087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232550&isnumber=7232233
F. Gabry, R. Thobaben and M. Skoglund, “Secrecy Games in Cognitive Radio Networks with Multiple Secondary Users,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 143-148. doi: 10.1109/CNS.2015.7346822
Abstract: In this paper we investigate secrecy games in cognitive radio networks with multiple secondary pairs and secrecy constraints. We consider the cognitive channel model with multiple secondary pairs where the secondary receivers are treated as eavesdroppers with respect to the primary transmission. For this novel network model, we derive achievable rate regions when secondary pairs are allowed to use the channel simultaneously. We then investigate the spectrum sharing mechanisms using several game theoretic models, namely 1) a single-leader multiple-follower Stackelberg game with the primary transmitter as the leader and the secondary transmitters as followers; 2) a non-cooperative power control game between the secondary transmitters if they can access the channel simultaneously; and 3) an auction between a primary auctioneer and secondary bidders which allows the primary transmitter to exploit the competitive interaction between the secondary transmitters. We illustrate through numerical simulations the equilibrium outcomes of the analyzed games and the impact of the competition between the secondary transmitters on the utility performance of every node in the cognitive radio network.
Keywords: cognitive radio; game theory; numerical analysis; power control; radio receivers; radio spectrum management; radio transmitters; telecommunication control; Stackelberg game; cognitive radio networks; game theoretic models; multiple secondary pairs; multiple secondary users; noncooperative power control game; numerical simulations; primary auctioneer; primary transmitter; secondary bidders; secondary receivers; secondary transmitters; secrecy constraints; secrecy games; single-leader multiple-follower; spectrum sharing; Data communication; Games; Jamming; Radio transmitters; Receivers; Security (ID#: 16-11088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346822&isnumber=7346791
G. Gianini, M. Cremonini, A. Rainini, G. L. Cota and L. G. Fossi, “A Game Theoretic Approach to Vulnerability Patching,” Information and Communication Technology Research (ICTRC), 2015 International Conference on, Abu Dhabi, 2015, pp. 88-91. doi: 10.1109/ICTRC.2015.7156428
Abstract: Patching vulnerabilities is one of the key activities in security management. For most commercial systems, however, the number of relevant vulnerabilities is very high; as a consequence, only a subset of them can actually be fixed: due to bounded resources, choosing them according to some optimality criterion is a critical challenge for the security manager. One has also to take into account, though, that even delivering attacks on vulnerabilities requires a non-negligible effort: a potential attacker, too, will always be constrained by bounded resources. Choosing which vulnerabilities to attack according to some optimality criterion is also a difficult challenge for a hacker. Here we argue that if both types of players are rational, wishing to maximize their ROI and aware of the two sides of the problem, their respective strategies can be discussed more naturally within a Game Theory (GT) framework. We develop the fact that the above described attack/defense scenario can be mapped onto a variant of GT models known as Search Games: we call this variant the Enhanced Vulnerability Patching game. Under the hypothesis of rationality of the players, GT provides a prediction for their behavior in terms of a probability distribution over the possible choices: this result can help in supporting a semi-automatic choice of patch management with constrained resources. In this work we model and solve a few prototypical instances of this class of games and outline the path towards more realistic and accurate GT models.
Keywords: computer crime; game theory; search problems; statistical distributions; GT models; ROI; bounded resources; enhanced vulnerability patching game; game theoretic approach; hacker; optimality criterium; patch management; probability distribution; search games; security management; security manager; Computer hacking; Game theory; Games; Linear systems; Mathematical model (ID#: 16-11089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7156428&isnumber=7156393
J. Abegunde, H. Xiao and J. Spring, “Resilient Tit-For-Tat (RTFT) A Game Solution for Wireless Misbehaviour,” Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, Dubrovnik, 2015, pp. 904-909. doi: 10.1109/IWCMC.2015.7289203
Abstract: The vulnerability of wireless networks to selfish and misbehaving nodes is a well known problem. The Tit-For-Tat (TFT) strategy has been proposed as a game theoretic solution to the problem; however, TFT suffers from a deadlock vulnerability. We present a modified TFT algorithm, the Resilient Tit-For-Tat (RTFT) algorithm, in which we introduce the concept of alternative strategies to complement the default strategy. This combination enables us to model a non-cooperative game in which nodes are able to change their strategies in order to maximize their utilities in selfish and misbehaviour scenarios. We demonstrate the viability of our proposal with simulation results.
Keywords: game theory; telecommunication security; wireless LAN; RTFT algorithm; Tit-For-Tat strategy; deadlock vulnerability; game theoretic solution; misbehaving nodes; noncooperative game; resilient Tit-For-Tat algorithm; selfish nodes; wireless misbehaviour; wireless networks vulnerability; Games; IEEE 802.11 Standard; Mathematical model; Media Access Protocol; Thin film transistors; Throughput; Game Theory; IEEE 802.11; MAC Layer Security; Resilient MAC Protocol; Tit-For-Tat; Wireless Networks (ID#: 16-11090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289203&isnumber=7288920
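The deadlock the authors fix can be reproduced in a generic iterated game. This is an illustration of the failure mode only, not the paper's MAC-layer RTFT: after one misread defection, two plain TFT players echo retaliation indefinitely, while a forgiving alternative strategy recovers cooperation:

```python
# TFT deadlock sketch in an iterated prisoner's-dilemma setting
# (hypothetical payoffs): one spurious "defect" observation, e.g. a
# collision misread as misbehaviour, locks two TFT nodes into an endless
# retaliation echo; a forgiving variant escapes it.

import random

C, D = 'C', 'D'
PAYOFF = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}   # row player's payoff

def play(strategy_a, strategy_b, rounds=100, glitch_round=5):
    last_a, last_b = C, C
    score = 0
    for t in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        if t == glitch_round:
            move_b = D                  # one-off misread / misbehaviour
        score += PAYOFF[(move_a, move_b)]
        last_a, last_b = move_a, move_b
    return score

tft = lambda opp_last: opp_last         # copy the opponent's last move

def generous_tft(opp_last):
    """Retaliate, but forgive with probability 0.3."""
    return C if opp_last == C or random.random() < 0.3 else D

random.seed(0)
deadlocked = play(tft, tft)             # retaliation echoes to the end
resilient = play(tft, generous_tft)     # forgiveness restores cooperation
```

Two plain TFT players score 250 here versus 300 for unbroken cooperation; once the forgiving node breaks the echo, both return to mutual cooperation, which is the behaviour RTFT's alternative strategies are designed to enable.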
M. Chessa, J. Grossklags and P. Loiseau, “A Game-Theoretic Study on Non-monetary Incentives in Data Analytics Projects with Privacy Implications,” Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, Verona, 2015, pp. 90-104. doi: 10.1109/CSF.2015.14
Abstract: The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance including medicine and public policy. The results of these analyses can be considered a public good which benefits data contributors as well as individuals who are not making their data available. At the same time, the release of personal information carries perceived and actual privacy risks to the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether they want to contribute at all (or not). From the analyst's perspective, we investigate to which degree the research analyst has flexibility to set requirements for data precision, so that individuals are still willing to contribute to the project, and the quality of the estimation improves. We study this trade-off scenario for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.
Keywords: data analysis; data privacy; game theory; incentive schemes; social networking (online); Nash equilibrium; data analytics; digital repositories; game theoretic study; nonmonetary incentives; personal information; privacy implications; social network sites; Data privacy; Estimation; Games; Noise; Privacy; Sociology; Statistics; Non-cooperative game; non-monetary incentives; population estimate; privacy; public good (ID#: 16-11091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243727&isnumber=7243713
D. Tosh, S. Sengupta, C. Kamhoua, K. Kwiat and A. Martin, “An Evolutionary Game-Theoretic Framework for Cyber-Threat Information Sharing,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 7341-7346. doi: 10.1109/ICC.2015.7249499
Abstract: The initiative to protect against future cyber crimes requires a collaborative effort from all types of agencies spanning industry, academia, federal institutions, and military agencies. Therefore, a Cybersecurity Information Exchange (CYBEX) framework is required to facilitate breach/patch related information sharing among the participants (firms) to combat cyber attacks. In this paper, we formulate a non-cooperative cybersecurity information sharing game that can guide: (i) the firms (players) to independently decide whether to “participate in CYBEX and share” or not; (ii) the CYBEX framework to utilize the participation cost dynamically as an incentive (to attract firms toward self-enforced sharing) and as a charge (to increase revenue). We analyze the game from an evolutionary game-theoretic perspective and determine the conditions under which the players' self-enforced evolutionary stability can be achieved. We present a distributed learning heuristic to attain the evolutionary stable strategy (ESS) under various conditions. We also show how CYBEX can wisely vary its pricing for participation to increase sharing as well as its own revenue, eventually evolving toward a win-win situation.
Keywords: evolutionary computation; game theory; security of data; CYBEX framework; ESS; academia; collaborative effort; combat cyber attacks; cyber crimes; cyber threat information sharing; cybersecurity information exchange; evolutionary game theoretic framework; evolutionary game theoretic strategy; evolutionary stable strategy; federal institutions; military agencies; self-enforced evolutionary stability; spanning industry; Computer security; Games; Information management; Investment; Sociology; Statistics; CYBEX; Cybersecurity; Evolutionary Game Theory; Incentive Model; Information Sharing (ID#: 16-11092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249499&isnumber=7248285
S. Salimi, E. A. Jorswieck, M. Skoglund and P. Papadimitratos, “Key Agreement over an Interference Channel with Noiseless Feedback: Achievable Region & Distributed Allocation,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 59-64. doi: 10.1109/CNS.2015.7346811
Abstract: Secret key establishment leveraging the physical layer as a source of common randomness has been investigated in a range of settings. We investigate the problem of establishing, in an information-theoretic sense, a secret key between a user and a base-station (BS) (more generally, part of a wireless infrastructure), but for two such user-BS pairs attempting the key establishment simultaneously. The challenge in this novel setting lies in that a user can eavesdrop on the other BS-user pair's communication. It is thus paramount to ensure the two keys are established with no leakage to the other user, in spite of the interference across neighboring cells. We model the system with BS-user communication through an interference channel and user-BS communication through a public channel. We find the region including achievable secret key rates for the general case that the interference channel (IC) is discrete and memoryless. Our results are examined for a Gaussian IC. In this setup, we investigate the performance of different transmission schemes for power allocation. The transmission scheme chosen by each BS essentially affects the secret key rate of the other BS-user pair. Assuming the base stations are trustworthy but seek to maximize their corresponding secret key rates, a game-theoretic setting arises to analyze the interaction between the base stations. We model our key agreement scenario in normal form for different power allocation schemes to understand performance without cooperation. Numerical simulations illustrate the inefficiency of the Nash equilibrium outcome and motivate further research on cooperative or coordinated schemes.
Keywords: Gaussian channels; channel allocation; game theory; private key cryptography; radiofrequency interference; wireless channels; BS-user communication; Gaussian IC; Nash equilibrium; Noiseless Feedback; base station; game theoretic; interference channel allocation; key agreement; power allocation; public channel; secret key establishment; Base stations; Downlink; Interference channels; Resource management; Security; Yttrium (ID#: 16-11093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346811&isnumber=7346791
W. Tong and S. Zhong, “Resource Allocation in Pollution Attack and Defense: A Game-Theoretic Perspective,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 3057-3062. doi: 10.1109/ICC.2015.7248793
Abstract: Pollution attacks can cause severe damage in network coding systems. Many approaches have been proposed to defend against pollution attacks. However, the current approaches implicitly assume that the defender has adequate resources to defend against pollution attacks. When the defender's resources are limited, they provide no guidance on how the defender should allocate resources to achieve better defense performance. In this paper, we consider the case where the defender's resources are limited and study how the defender allocates resources to defend against pollution attacks. We first propose a two-player strategic game to model the interactions between the defender and the attacker. Then, two algorithms are proposed to find the best response strategy for the defender. Finally, we conduct extensive simulations to evaluate the proposed algorithms. The results demonstrate that our algorithms can significantly improve the utility of the defender, with reasonable computation time.
Keywords: game theory; network coding; radiocommunication; resource allocation; telecommunication security; defender resources; network coding systems; pollution attack; two-player strategic game; Games; Pollution (ID#: 16-11094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248793&isnumber=7248285
Bing Wang, Wenjing Lou and Y. T. Hou, “Modeling the Side-Channel Attacks in Data Deduplication with Game Theory,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 200-208. doi: 10.1109/CNS.2015.7346829
Abstract: The cross-user data deduplication improves disk space efficiency of cloud storage by keeping only one copy of the same files among all service users. As a result, the cloud storage service is able to offer a considerable amount of storage at an attractive price. Therefore, people begin to use cloud storage such as Dropbox and Google Drive not only as data backup but also as their primary storage for everyday usage. However, the cross-user data deduplication also raises data privacy concerns. A side-channel attack called “confirmation-of-a-file” and its variant “learn-the-remaining-information” breach the user data privacy by observing the deduplication operation of the cloud storage server. These attacks allow malicious users to pinpoint specific files if they exist in the cloud. The existing solutions sacrifice either the network bandwidth efficiency or the storage efficiency to defend against the side-channel attacks without analyzing the defensive cost from the standpoint of cloud storage providers. Because profit is the key factor that motivates cloud service providers, the question of how to defend against the side-channel attacks efficiently in terms of cost is not only important but also practical. However, this question remains unanswered. In this paper, we try to address this problem using game theory. We model the interaction between the attacker and the defender, i.e., the cloud storage server, using a game-theoretic framework. Our framework captures the underlying complexity of the side-channel attack problem under several practical assumptions. We prove there exists a unique solution of the problem, i.e., a mixed-strategy Nash equilibrium. Our simulation results show the efficiency of our scheme.
Keywords: cloud computing; cryptography; data privacy; game theory; storage management; cloud service providers; cloud storage; data deduplication; side-channel attacks; Cloud computing; Data privacy; Encryption; Game theory; Servers (ID#: 16-11095)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346829&isnumber=7346791
W. Lausenhammer, D. Engel and R. Green, “A Game Theoretic Software Framework for Optimizing Demand Response,” Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, Washington, DC, 2015, pp. 1-5. doi: 10.1109/ISGT.2015.7131861
Abstract: Demand response (DR) is a crucial and necessary aspect of the smart grid, particularly when considering the optimization of both power consumption and generation. While many benefits of DR are currently under study, an issue of particular concern is optimizing end-users' power consumption profiles at various levels. This study proposes a fundamental, game theoretic software framework for DR simulation that is capable of investigating the effect of optimizing multiple electric appliances by utilizing game theoretic algorithms. Initial results show that shifting the switch-on time of three household appliances provides savings of up to 6%.
Keywords: domestic appliances; game theory; power consumption; smart power grids; DR simulation; demand response optimization; electric household appliance; game theoretic software framework; power generation; smart grid; Game theory; Games; Home appliances; Load management; Load modeling; Schedules; Smart grids (ID#: 16-11096)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131861&isnumber=7131775
T.-W. Chiang and J.-H. R. Jiang, “Property-Directed Synthesis of Reactive Systems from Safety Specifications,” Computer-Aided Design (ICCAD), 2015 IEEE/ACM International Conference on, Austin, TX, 2015, pp. 794-801. doi: 10.1109/ICCAD.2015.7372652
Abstract: Reactive system synthesis from safety specifications is a promising approach to the correct-by-construction methodology. The synthesis process is often divided into two separate steps: First, check specification realizability by computing the winning region of states under a game-theoretic interpretation; second, synthesize the implementation circuit based on the computed winning region if the specification is realizable. Moreover, recent results suggest that methods based on satisfiability (SAT) solving outperform those based on Binary Decision Diagrams (BDDs), especially on large benchmark instances. In this paper, we focus on the winning region computation and propose a SAT-based algorithm. By adopting concepts from the state-of-the-art model checking algorithm property directed reachability (PDR, a.k.a. IC3), we aim at devising an efficient computation method for automatic controller synthesis. Experimental results on the benchmarks from the synthesis competition (SyntComp 2014) show that our proposed algorithm outperforms the existing SAT-based and QBF-based methods by some margin.
Keywords: binary decision diagrams; computability; formal specification; game theory; security of data; QBF-based methods; SAT-based algorithm; correct-by-construction methodology; game-theoretic interpretation; model checking algorithm; property directed reachability; property-directed synthesis; reactive system synthesis; reactive systems; safety specifications; satisfiability; Arrays; Benchmark testing; Boolean functions; Games; Input variables; Safety (ID#: 16-11097)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372652&isnumber=7372533
A. Ashok and M. Govindarasu, “Cyber-Physical Risk Modeling and Mitigation for the Smart Grid Using A Game-Theoretic Approach,” Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, Washington, DC, 2015, pp. 1-5. doi: 10.1109/ISGT.2015.7131842
Abstract: Traditional probabilistic risk assessment approaches do not capture 'threats' in the risk modeling. In this paper, we propose a game-theoretic approach for cyber-physical risk modeling and mitigation that allows modeling of 'threats' by including attacker behavior, and can be adapted for dynamic attack scenarios. We also introduce a cyber-physical cost modeling framework that captures attacker actions in the cyber layer, attack impacts on the physical layer, and defender actions both in cyber and physical layer. We provide some insights into the benefits of applying game theory for cyber-physical risk modeling and mitigation through a simple, intuitive case study on a 3-bus power system. Finally, we identify some practical challenges and limitations for applying game theory to large systems and conclude the paper with some directions for future work.
Keywords: power engineering computing; power system security; risk analysis; smart power grids; Cyber-physical risk modeling; cyber-physical cost modeling; dynamic attack; game theoretic approach; physical layer; power system; probabilistic risk assessment approach; smart grid; Game theory; Games; Mathematical model; Physical layer; Security; Smart grids (ID#: 16-11098)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131842&isnumber=7131775
F. Farhidi and K. Madani, “A Game Theoretic Analysis of the Conflict over Iran's Nuclear Program,” Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on, Kowloon, 2015, pp. 617-622. doi: 10.1109/SMC.2015.118
Abstract: Investigation of the contradictory aspects of modern diplomacy is essential to a valid understanding of the working of the political system. Among these aspects, uncertainty infuses the norms of response to the conflicts. Iran's nuclear program is an example, which has intensified many clashes in the region. Here, we develop a stylized strategic model to address the process of conflict resolution in the current negotiation. Reaching an agreement has been challenging due to the conflicting interests of the players in this game. While the Western countries are worried about Iran's nuclear program, and the potential problems that could escalate in the region, Iran claims that it is a peaceful program that poses no threats to its neighbors. The proposed game theory model tries to verify and rationalize the announced framework agreement in negotiation to identify the potential agreement options between Iran and the P5+1 countries.
Keywords: game theory; nuclear explosions; politics; weapons; Iran nuclear program; P5+1 countries; Western countries; conflict resolution process; game theoretic analysis; game theory model; political system; stylized strategic model; Analytical models; Economics; Energy resolution; Game theory; Games; Mathematical model; Security; Conflict resolution; Game theory; International politics; Iran; Nuclear program (ID#: 16-11099)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7379250&isnumber=7379111
N. Kitajima, N. Yanai, T. Nishide, G. Hanaoka and E. Okamoto, “Constructions of Fail-Stop Signatures for Multi-Signer Setting,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 112-123. doi: 10.1109/AsiaJCIS.2015.26
Abstract: Fail-stop signatures (FSS) provide the security for a signer against a computationally unbounded adversary by enabling the signer to provide a proof of forgery. Conventional FSS schemes are for a single-signer setting, but in the real world there are cases where a countersignature by multiple signers (e.g., a signature among a bank, a user, and a consumer) is required. In this work, we propose a framework of FSS capturing a multi-signer setting and call the primitive fail-stop multisignatures (FSMS). We propose a generic construction of FSMS via the bundling homomorphisms proposed by Pfitzmann and then propose a provably secure instantiation of the FSMS scheme from the factoring assumption. Our proposed schemes can also be extended to fail-stop aggregate signatures (FSAS).
Keywords: digital signatures; FSAS; FSMS scheme; bundling homomorphisms; fail-stop aggregate signatures; generic construction; multisigner setting; primitive fail-stop multisignatures; proof of forgery; single-signer setting; Adaptation models; Computational modeling; Forgery; Frequency selective surfaces; Games; Public key; Fail-stop multisignatures; Fail-stop signatures; Family of bundling homomorphisms; Information-theoretic security (ID#: 16-11100)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153945&isnumber=7153836
B. Fang, Z. Qian, W. Shao, W. Zhong and T. Yin, “Game-Theoretic Precoding for Cooperative MIMO SWIPT Systems with Secrecy Consideration,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-5. doi: 10.1109/GLOCOM.2015.7417614
Abstract: In this paper, we study the secrecy precoding problem for simultaneous wireless information and power transfer (SWIPT) in a multiple-input multiple-output (MIMO) relay network. The problem is formulated as a noncooperative game, where the network utility, defined as the nonnegative sum of the achievable secrecy rate and the harvested energy, is regarded as the common payoff, and the source and the relay are assumed to be two rational game players. The formulated game is proven to be a potential game, which always possesses at least one pure-strategy Nash equilibrium (NE), and the optimal transmit strategy profile that maximizes the network utility also constitutes a pure-strategy NE of it. Since the best-response problems of the proposed game constitute difference convex (DC)-type programming problems, we solve them by employing a successive convex approximation (SCA) method. With the SCA method, the two best-response problems can be iteratively solved through successive convex programming of their convexified versions. Then, based on the best-response dynamic, a distributed precoding algorithm is developed to obtain a feasible NE solution of the proposed game. Numerical simulations are further provided to demonstrate this. Results show that our algorithm can converge fast to a near-optimal solution with guaranteed convergence.
Keywords: MIMO communication; approximation theory; convex programming cooperative communication; game theory; precoding; telecommunication security; DC-type programming problem; NE; SCA method; cooperative MIMO SWIPT system; distributed precoding algorithm; game constitute difference convex-type programming problem; game theoretic precoding; multiple input multiple output relay network; noncooperative game; pure-strategy Nash equilibrium; secrecy consideration; simultaneous wireless information and power transfer; successive convex approximation method; successive convex programming; Erbium; Games; Heuristic algorithms; MIMO; Relays; Security (ID#: 16-11101)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417614&isnumber=7416057
S. D. Bopardikar, A. Speranzon and C. Langbort, “Trusted Computation with an Adversarial Cloud,” American Control Conference (ACC), 2015, Chicago, IL, 2015, pp. 2445-2452. doi: 10.1109/ACC.2015.7171099
Abstract: We consider the problem of computation in a cloud environment where either the data or the computation may be corrupted by an adversary. We assume that a small fraction of the data is stored locally at a client during the upload process to the cloud and that this data is trustworthy. We formulate the problem within a game theoretic framework where the client needs to decide an optimal fusion strategy using both non-trusted information from the cloud and local trusted data, given that the adversary on the cloud is trying to deceive the client by biasing the output to a different value/set of values. We adopt an Iterated Best Response (IBR) scheme for each player to update its action based on the opponent's announced computation. At each iteration, the cloud reveals its output to the client, who then computes the best response as a linear combination of its private local estimate and of the untrusted cloud output. We characterize equilibrium conditions for both the scalar and vector cases of the computed value of interest. Necessary and sufficient conditions for convergence of the IBR are derived, and insightful geometric interpretations of such conditions are discussed for the vector case. Numerical results are presented showing that the convergence conditions are relatively tight.
Keywords: cloud computing; game theory; geometry; iterative methods; optimisation; security of data; trusted computing; vectors; IBR scheme; adversarial cloud computing; game theoretic framework; geometric interpretation; iterated best response; optimal fusion strategy; trusted computation; vector case; Algorithm design and analysis; Convergence; Cost function; Games; Protocols; Random variables; Security; Adversarial Machine Learning; Equilibrium; Game theory; Trusted Computation (ID#: 16-11102)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7171099&isnumber=7170700
B. Kasiri, I. Lambadaris, F. R. Yu and H. Tang, “Privacy-Preserving Distributed Cooperative Spectrum Sensing in Multi-Channel Cognitive Radio MANETs,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 7316-7321. doi: 10.1109/ICC.2015.7249495
Abstract: Location privacy preservation in multi-channel cognitive radio mobile ad hoc networks (CR-MANETs) is a challenging issue, where the network does not rely on a trusted central entity to impose privacy-preserving protocols. Furthermore, even though multi-channel CR-MANETs have numerous advantages, utilization of multiple channels degrades the location privacy by disclosing more information about CRs. In this paper, location privacy is studied for cooperative spectrum sensing (CSS) in multi-channel CR-MANETs. We first quantify the location privacy. Then, we propose a new privacy-preserving distributed cooperative spectrum sensing scheme for multi-channel CR-MANETs. We design a new anonymization method based on random manipulation of the exchanged signal-to-noise ratio (SNR). Afterwards, a coalitional game-theoretic distributed channel assignment is proposed to maximize location privacy and sensing performance over each channel in the network. Simulation results show that the proposed scheme can enhance sensing performance and location privacy over multiple channels.
Keywords: channel allocation; cooperative communication; data privacy; game theory; mobile ad hoc networks; signal detection; telecommunication security; MANET; coalitional game theory; distributed channel assignment; distributed cooperative spectrum sensing; location privacy preservation; multichannel cognitive radio; privacy preserving spectrum sensing; signal-to-noise ratio random manipulation; Bismuth; Channel allocation; Games; Information systems; Privacy; Sensors; Signal to noise ratio; Privacy preservation; cognitive radio; spectrum sensing (ID#: 16-11103)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249495&isnumber=7248285
R. Zhang and Q. Zhu, “Secure and Resilient Distributed Machine Learning Under Adversarial Environments,” Information Fusion (Fusion), 2015 18th International Conference on, Washington, DC, 2015, pp. 644-651. doi: (not provided)
Abstract: With a large number of sensors and control units in networked systems, the decentralized computing algorithms play a key role in scalable and efficient data processing for detection and estimation. The well-known algorithms are vulnerable to adversaries who can modify and generate data to deceive the system to misclassify or misestimate the information from the distributed data processing. This work aims to develop secure, resilient and distributed machine learning algorithms under adversarial environment. We establish a game-theoretic framework to capture the conflicting interests between the adversary and a set of distributed data processing units. The Nash equilibrium of the game allows predicting the outcome of learning algorithms in adversarial environment, and enhancing the resilience of the machine learning through dynamic distributed learning algorithms. We use Spambase Dataset to illustrate and corroborate our results.
Keywords: distributed processing; game theory; learning (artificial intelligence); sensors; Nash equilibrium; Spambase Dataset; adversarial environments; decentralized computing algorithms; distributed data processing units; distributed machine learning algorithms; dynamic distributed learning algorithm; game-theoretic framework; information misclassification; information misestimation; networked systems; sensors; Games; Heuristic algorithms; Machine learning algorithms; Security; Training; Training data (ID#: 16-11104)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266621&isnumber=7266535
Chenguang Zhang and Zeqing Yao, “A Game Theoretic Model of Targeting in Cyberspace,” Estimation, Detection and Information Fusion (ICEDIF), 2015 International Conference on, Harbin, 2015, pp. 335-339. doi: 10.1109/ICEDIF.2015.7280218
Abstract: Targeting is fundamental work in cyberspace operational planning. This paper investigates the basic tradeoffs and decision processes involved in cyber targeting and proposes a simple game theoretic model for cyberspace targeting to support operational planning. An optimal targeting strategy decision algorithm applying the game theoretic model is then developed. The key feature of this game theoretic model is its ability to predict equilibrium. The paper concludes with an example showing how the game theoretic model supports targeting decision-making, which demonstrates the simplicity and effectiveness of this decision-making model.
Keywords: Internet; decision making; game theory; security of data; cyber targeting; cyberspace operational plan; cyberspace targeting; decision process; decision-making; game theoretic model; optimal targeting strategy decision algorithm; Analytical models; Biology; Cyberspace; Decision making; Games; Lead; Terrorism; cyberspace; targeting; zero-sum games (ID#: 16-11105)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280218&isnumber=7280146
L. Xu, C. Jiang, J. Wang, Y. Ren, J. Yuan and M. Guizani, “Game Theoretic Data Privacy Preservation: Equilibrium and Pricing,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 7071-7076. doi: 10.1109/ICC.2015.7249454
Abstract: Privacy issues arising in the process of collecting, publishing and mining individuals' personal data have attracted much attention in recent years. In this paper, we consider a scenario where a data collector collects data from data providers and then publishes the data to a data user. To protect data providers' privacy, the data collector performs anonymization on the data. Anonymization usually causes a decline of data utility, on which the data user's profit depends; meanwhile, data providers would provide more data if anonymity were strongly guaranteed. How to make a trade-off between privacy protection and data utility is an important question for the data collector. In this paper we model the interactions among data providers/collector/user as a game, and propose a general approach to find the Nash equilibria of the game. To elaborate the analysis, we also present a specific game formulation which takes k-anonymity as the anonymization method. Simulation results show that the game theoretical analysis can help the data collector deal with the privacy-utility trade-off.
Keywords: data privacy; game theory; Nash equilibriums; anonymization method; data utility; game theoretic data privacy preservation; privacy issues; Data models; Data privacy; Games; Information systems; Nash equilibrium; Security; data anonymization; privacy preserving (ID#: 16-11106)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249454&isnumber=7248285
D. Niyato, P. Wang, D. I. Kim, Z. Han and L. Xiao, “Game Theoretic Modeling of Jamming Attack in Wireless Powered Communication Networks,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 6018-6023. doi: 10.1109/ICC.2015.7249281
Abstract: In wireless powered networks, a user can make a request and use the wireless energy transferred from an energy source for its data transmission. However, due to broadcast nature of wireless energy transfer (e.g., RF energy), a malicious node (i.e., an attacker) can also intercept the energy and use it to perform an attack by jamming the data transmission of the user. We consider such a jamming attack where the user and attacker are aware of each other. We formulate a game theoretic model to analyze the energy request and data transmission policy of the user and the attack policy of the attacker when the user and the attacker both want to maximize their own rewards. We use an iterative algorithm designed based on the best response dynamics to obtain the solution defined in terms of the constrained Nash equilibrium. The numerical results show not only the convergence of the proposed algorithm, but also the optimal reward of the user under different energy cost constraints.
Keywords: game theory; inductive power transmission; iterative methods; jamming; telecommunication power management; telecommunication security; attack policy; constrained Nash equilibrium; data transmission policy; energy request; game theoretic modeling; iterative algorithm; jamming attack; malicious node; wireless energy transfer; wireless powered communication networks; Communication system security; Data communication; Energy states; Energy storage; Games; Jamming; Wireless communication; Wireless powered communication networks; constrained stochastic game; wireless jamming (ID#: 16-11107)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249281&isnumber=7248285
B. Rashidi and C. Fung, “A Game-Theoretic Model for Defending Against Malicious Users in RecDroid,” Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, Ottawa, ON, 2015, pp. 1339-1344. doi: 10.1109/INM.2015.7140492
Abstract: RecDroid is a smartphone permission response recommendation system which utilizes the responses from expert users in the network to help inexperienced users. However, in such a system, malicious users can mislead the recommendation system by providing untruthful responses. Although a detection system can be deployed to detect malicious users and exclude them from the recommendation system, there are still undetected malicious users that may cause damage to RecDroid. Therefore, relying on environment knowledge to detect the malicious users is not sufficient. In this work, we present a game-theoretic model to analyze the interaction (request/response) between RecDroid users and the RecDroid system using a static Bayesian game formulation. In the game, the RecDroid system chooses the best response strategy to minimize its loss from malicious users. We analyze the game model and explain the Nash equilibrium in a static scenario under different conditions. Through the static game model we discuss the strategy that RecDroid can adopt to disincentivize attackers in the system, so that they are discouraged from performing attacks. Finally, we discuss several game parameters and their impact on players' outcomes.
Keywords: game theory; recommender systems; smart phones; user interfaces; RecDroid; game-theoretic model; malicious users; smartphone permission response recommendation system; Analytical models; Bayes methods; Conferences; Games; Mobile communication; Nash equilibrium; Security (ID#: 16-11108)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140492&isnumber=7140257
C. Kamhoua, A. Martin, D. K. Tosh, K. A. Kwiat, C. Heitzenrater and S. Sengupta, “Cyber-Threats Information Sharing in Cloud Computing: A Game Theoretic Approach,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 382-389. doi: 10.1109/CSCloud.2015.80
Abstract: Cybersecurity is among the highest priorities in industries, academia and governments. Cyber-threats information sharing among different organizations has the potential to maximize vulnerability discovery at minimum cost. Cyber-threats information sharing has several advantages. First, it diminishes the chance that an attacker exploits the same vulnerability to launch multiple attacks in different organizations. Second, it reduces the likelihood that an attacker can compromise an organization and collect data that will help him launch an attack on other organizations. Cyberspace has numerous interconnections, and critical infrastructure owners are dependent on each other's services. This well-known problem of cyber interdependency is aggravated in a public cloud computing platform. The collaborative effort of organizations in developing a countermeasure for a cyber-breach reduces each firm's cost of investment in cyber defense. Despite its multiple advantages, there are costs and risks associated with cyber-threats information sharing. When a firm shares its vulnerabilities with others there is a risk that these vulnerabilities are leaked to the public (or to attackers), resulting in loss of reputation, market share and revenue. Therefore, in this strategic environment the firms committed to share cyber-threats information might not truthfully share information due to their own self-interests. Moreover, some firms acting selfishly may rationally limit their cybersecurity investment and rely on information shared by others to protect themselves. This can result in underinvestment in cybersecurity if all participants adopt the same strategy. This paper will use game theory to investigate when multiple self-interested firms can invest in vulnerability discovery and share their cyber-threat information. We will apply our algorithm to a public cloud computing platform as one of the fastest growing segments of cyberspace.
Keywords: cloud computing; data protection; game theory; security of data; cyber defense; cyber interdependency; cyber-breach; cyber-threat information; cyber-threat information sharing; cybersecurity investment; cyberspace; data collection; firm investment cost reduction; firm protection; game theoretic approach; public cloud computing platform; strategic environment; vulnerability discovery maximization; Cloud computing; Computer security; Games; Information management; Organizations; Virtual machine monitors; cybersecurity; game theory; information sharing (ID#: 16-11109)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371511&isnumber=7371418
Ting-Hsuan Wu, Mei-Ju Shih and Hung-Yu Wei, “Tiered Licensed-Assisted Access with Paid Prioritization: A Game Theoretic Approach for Unlicensed LTE,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 346-351. doi: (not provided)
Abstract: Network congestion is caused by rapidly growing data traffic and limited wireless radio resources. In addition to the licensed spectrum, access to unlicensed spectrum (e.g., LAA) brings hope for the service provider (SP) to mitigate the deficiency of radio resources. A premium peering deal with the content providers (CPs) can be an approach to efficiently allocate the scarce radio resources to CPs with higher traffic load and QoS requirements. This work contributes a content premium pricing framework for one SP and several CPs, where the SP possesses both LTE and LAA. Through a four-stage Stackelberg game, a job-market signaling game and a second-price auction, we derive the optimal bandwidth demand of each CP, the optimal amounts of licensed and unlicensed bandwidth required by the SP, and the premium and basic access fees. Analysis shows that the CPs and the SP all benefit from the premium access deal. Furthermore, there is a tradeoff between the improvement and variability of the SP's profit when introducing LAA.
Keywords: Long Term Evolution; game theory; quality of service; telecommunication traffic; CP; QoS requirement; SP; basic access fee; content providers; data traffic; game theoretic approach; licensed spectrum; network congestion; optimal bandwidth demand; paid prioritization; premium access; radio resources; service provider; tiered licensed assisted access; traffic load; unlicensed LTE; wireless radio resources; Barium; Chlorine; Games; Indium tin oxide; Reliability; Signal to noise ratio (ID#: 16-11110)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332593&isnumber=7332527
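The second-price auction stage described in the abstract above can be sketched concretely. This is a minimal illustration of the standard Vickrey mechanism, not the paper's full four-stage game; the bidder names and amounts are made up.

```python
# Second-price (Vickrey) auction sketch: each content provider (CP) bids for
# premium access, the highest bidder wins, and pays the second-highest bid.

def second_price_auction(bids):
    """bids: dict mapping bidder name -> bid amount.
    Returns (winner, price), where price is the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("a second-price auction needs at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid, not their own
    return winner, price

# Example: three CPs bidding for the premium tier (illustrative values).
winner, price = second_price_auction({"CP1": 10.0, "CP2": 7.5, "CP3": 4.0})
```

Charging the runner-up's bid is what makes truthful bidding a dominant strategy in this mechanism, which is presumably why the framework uses it.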
Z. Xu and Q. Zhu, “A Cyber-Physical Game Framework for Secure and Resilient Multi-Agent Autonomous Systems,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 5156-5161. doi: 10.1109/CDC.2015.7403026
Abstract: The increasing integration of autonomous systems with publicly available networks exposes them to cyber attackers. An adversary can launch a man-in-the-middle attack to gain control of the system and inflict maximum damage with collision and suicidal attacks. To address this issue, this work establishes an integrative game and control framework to incorporate security into the automatic designs, taking into account the cyber-physical nature and the real-time requirements of the system. We establish a cyber-physical signaling game to develop an impact-aware cyber defense mechanism and leverage model-predictive control methods to design cyber-aware control strategies. The integrative framework enables the co-design of cyber-physical systems to minimize the damage inflicted on the system, leading to online updating of the cyber defense and physical layer control decisions. We use unmanned aerial vehicles (UAVs) to illustrate the algorithm, and corroborate the analytical results in two case studies.
Keywords: autonomous aerial vehicles; game theory; multi-agent systems; predictive control; UAV; autonomous systems; cyber-aware control strategies; cyber-physical game framework; impact-aware cyber defense mechanism; integrated game-theoretic framework; integrative game and control framework; man-in-the-middle attack; model-predictive control methods; multi-agent autonomous systems; suicidal attacks; unmanned aerial vehicles; Control systems; Games; Physical layer; Predictive control; Real-time systems; Receivers; Security (ID#: 16-11111)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7403026&isnumber=7402066
L. Maghrabi and E. Pfluegel, “Moving Assets to the Cloud: A Game Theoretic Approach Based on Trust,” Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, London, 2015, pp. 1-5. doi: 10.1109/CyberSA.2015.7166120
Abstract: Increasingly, organisations and individuals are relying on external parties to store, maintain and protect their critical assets. The use of public clouds is commonly considered advantageous in terms of flexibility, scalability and cost effectiveness. On the other hand, the security aspects are complex and many resulting challenges remain unresolved. In particular, one cannot rule out the existence of internal attacks carried out by a malicious cloud provider. In this paper, we use game theory in order to aid assessing the risk involved in moving critical assets of an IT system to a public cloud. Adopting a user perspective, we model benefits and costs that arise due to attacks on the user's asset, exploiting vulnerabilities on either the user's system or the cloud. A novel aspect of our approach is the use of the trust that the user may have in the cloud provider as an explicit parameter T in the model. For some specific values of T, we show the existence of a pure Nash equilibrium and compute a mixed equilibrium corresponding to an example scenario.
Keywords: cloud computing; critical infrastructures; data protection; game theory; trusted computing; IT system; Nash equilibrium; critical asset protection; Cloud computing; Computational modeling; Games; Nash equilibrium; Risk management; Security (ID#: 16-11112)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166120&isnumber=7166109
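The mixed equilibrium mentioned in the abstract above can be illustrated with the standard indifference computation for a 2x2 game. This is a generic sketch, not the paper's attacker/cloud model; the payoff matrices below are made-up, zero-sum illustrations.

```python
# Fully mixed Nash equilibrium of a 2x2 game via indifference conditions.
# A is the row player's payoff matrix, B the column player's.

def mixed_equilibrium_2x2(A, B):
    """Return (p, q): p = P(row plays strategy 0), q = P(col plays strategy 0).
    Assumes the game has a fully mixed equilibrium (denominators nonzero)."""
    # The column player's mix q makes the row player indifferent between rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # The row player's mix p makes the column player indifferent between columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching-pennies-style attacker/defender payoffs (illustrative, zero-sum):
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = mixed_equilibrium_2x2(A, B)  # symmetric payoffs give a 50/50 mix
```

In a richer model such as the paper's, a trust parameter would shift these payoff entries and thus move the equilibrium mix.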
Hard Problems: Human Behavior and Security 2015 (Part 2)
Human behavior creates the most complex of hard problems for the Science of Security community. The research work cited here was presented in 2015.
J. G. Proudfoot, J. L. Jenkins, J. K. Burgoon and J. F. Nunamaker, “Deception Is in the Eye of the Communicator: Investigating Pupil Diameter Variations in Automated Deception Detection Interviews,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 97-102. doi: 10.1109/ISI.2015.7165946
Abstract: Deception is pervasive, often leading to adverse consequences for individuals, organizations, and society. Information systems researchers are developing tools and evaluating sensors that can be used to augment human deception judgments. One sensor exhibiting particular promise is the eye tracker. Prior work evaluating eye trackers for deception detection has focused on the detection and interpretation of brief eye behavior variations in response to stimuli (e.g., images) or interview questions. However, research is needed to understand how eye behaviors evolve over the course of an interaction with a deception detection system. Using latent growth curve modeling, we test how pupil diameter evolves over one's interaction with a deception detection system. The results indicate that pupil diameter changes over the course of a deception detection interaction, and that these trends are indicative of deception during the interaction, regardless of whether incriminating target items are shown.
Keywords: behavioural sciences computing; gaze tracking; image sensors; object detection; automated deception detection interviews; communicator eye; deception detection interaction; deception detection system; eye behavior variations; eye stimuli; eye tracker; human deception judgments; information systems; latent growth curve modeling; pupil diameter variations; sensor; Accuracy; Analytical models; Information systems; Interviews; Organizations; Sensors; deception detection systems; eye tracking; pupil diameter (ID#: 16-9649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165946&isnumber=7165923
M. Oulehla, “Investigation into Google Play Security Mechanisms via Experimental Botnet,” 2015 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Abu Dhabi, 2015, pp. 591-596. doi: 10.1109/ISSPIT.2015.7394406
Abstract: Mobile devices such as smartphones and tablets have become a common part of human society in the 21st century, and their popularity is continuously growing. However, certain research papers imply that popularity and security do not reach the same level. They suggest that there are security weaknesses allowing the publication of applications with malicious behavior on Google Play. To test Google Play's security mechanisms, a special pair of applications has been developed. The former is a testing application containing a mobile botnet client. It has been designed to be resistant to security scans based on dynamic analysis, but its malicious intent is present in uncovered form in the application's code. This testing application has been published on Google Play. The latter is a malware application whose sole purpose is to be fraudulently installed on mobile devices without any security verification, including that of Google Play. The research has raised certain interesting results. Based on these results, useful future research directions in the field of mobile device security have emerged.
Keywords: computer network security; invasive software; mobile handsets; Google Play security mechanism; malicious behavior; malware application; mobile botnet client; mobile device; publishing applications; security verification; security weaknesses; testing application; Google; Malware; Mobile communication; Servers; Smart phones; Android; C&C server; Google Play; bot; botmaster; mobile botnet; mobile devices (ID#: 16-9650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7394406&isnumber=7394243
G. Xu et al., “Towards Trustworthy Participants in Social Participatory Networks,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 194-199. doi: 10.1109/CSCloud.2015.55
Abstract: By leveraging online social networks as an underlying infrastructure, the Social Participatory Network (SPN) has become a new paradigm of participatory sensing systems. However, a significant barrier to the widespread use of SPN applications is their vulnerability to various forms of malicious attacks. Such threats inhibit human participation and thus the viability of SPN systems in everyday use. To solve this problem, this paper proposes a trust evaluation framework for participants to encourage wider human participation in SPN. The proposal is based on Tianjin University's own existing SPN system, named CRCS (ClassRoom Cloud System), which enables participants to use cloud resources for online lessons or library study. It derives the trust value of participants by applying the entropy-weight method and data mining algorithms to the behavioral data of participants. Our proposed solution can detect malicious participants easily, and more importantly, it outperforms other work in its low cost and simple deployment. Although our solution is currently based on a specific SPN system, we are confident that it is highly applicable to most other SPN systems.
Keywords: cloud computing; data mining; social networking (online); trusted computing; ubiquitous computing; CRCS; classroom cloud system; cloud resource; data mining algorithm; entropy-weight method; malicious attack; online social network; participatory sensing system; social participatory network; trust evaluation framework; trustworthy participant; Computer crime; Data mining; Prototypes; Sensors; System performance; Training; Training data; CRCS; Data Mining; Entropy-weight; Social Participatory Network; Trust Evaluation Framework (ID#: 16-9651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371480&isnumber=7371418
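The entropy-weight method named in the abstract above is a standard multi-criteria weighting technique; the sketch below shows its usual form (normalize each criterion column, compute its Shannon entropy, and weight criteria by their divergence). The paper's exact variant is not given, and the data values here are illustrative.

```python
import math

# Entropy-weight method: criteria whose values vary more across participants
# carry more information and therefore receive higher weight.

def entropy_weights(matrix):
    """matrix: rows = participants, cols = behavior criteria (positive values).
    Returns one weight per criterion, summing to 1."""
    n = len(matrix)
    m = len(matrix[0])
    k = 1.0 / math.log(n)  # normalizes entropy into [0, 1]
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # column entropy
        divergences.append(1.0 - e)  # low entropy -> high divergence -> high weight
    s = sum(divergences)
    return [d / s for d in divergences]

# Three participants scored on two behavior criteria (made-up numbers).
# The second criterion is identical for everyone, so its weight collapses
# to (numerically) zero:
w = entropy_weights([[0.2, 5.0], [0.3, 5.0], [0.5, 5.0]])
```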
S. Ojha and S. Sakhare, “Image Processing Techniques for Object Tracking in Video Surveillance — A Survey,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7087180
Abstract: Many researchers are drawn to the field of object tracking in video surveillance, which is an important application and emerging research area in image processing. Video tracking is the process of locating a moving object, or multiple objects, over time using a camera. Due to the key features of video surveillance, it has a variety of uses, such as human-computer interaction, security and surveillance, video communication, traffic control, and public areas such as airports, underground stations, and mass events. Tracking a target in cluttered premises is still one of the challenging problems of video surveillance. A sequential flow of moving object detection, classification, tracking, and behavior identification completes the processing framework of video surveillance. This paper offers insight into tracking methods, categorizes them into different types, and focuses on important and useful tracking methods. We provide a brief overview of tracking strategies, such as region-based and active-contour-based tracking, with their positive and negative aspects. Different tracking methods are described in detail. We review general strategies in a literature survey of different techniques and conclude with an analysis of possible research directions.
Keywords: cameras; image classification; object detection; object tracking; video surveillance; active contour based tracking; camera; classification; image processing technique; moving object detection; object tracking; region based tracking; target tracking; video surveillance; video tracking; Computer vision; Feature extraction; Image color analysis; Object tracking; Shape; Video surveillance; Motion segmentation; object representation (ID#: 16-9652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087180&isnumber=7086957
Y. Xiang, L. Wang and Y. Zhang, “Power Grid Adequacy Evaluation Involving Substation Cybersecurity Issues,” Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, Washington, DC, 2015, pp. 1-5. doi: 10.1109/ISGT.2015.7131815
Abstract: Modern power systems heavily rely on the associated cyber network, so it is crucial to develop novel methods to evaluate the overall power system adequacy considering the substation cybersecurity issues. In this study, human dynamic is applied to simulate the temporal behavior pattern of cyber attackers. The Markov game and static game are utilized to model the intelligent attack/defense behaviors in different attack scenarios. A novel framework for power system adequacy assessment incorporating the cyber and physical failures is proposed. Simulations are conducted based on a representative reliability test system, and the influences of critical parameters on system adequacy are carefully examined. It is concluded that effective measures should be implemented to ensure the overall system adequacy, and informed decisions should be made to allocate the limited resources for enhancing the cybersecurity of cyber-physical power grids.
Keywords: Markov processes; failure analysis; game theory; power grids; power system faults; power system reliability; power system security; security of data; substation protection; Markov game; cyber failure; cyber network; cyber-physical power grid adequacy evaluation; intelligent attack behavior; intelligent defense behavior; overall power system adequacy evaluation; physical failure; power system adequacy assessment; representative reliability test system; static game; substation cybersecurity issues; temporal behavior pattern simulation; Computer security; Game theory; Games; Power system dynamics; Substations; Adequacy assessment; cyber security; cyber-physical systems; human dynamics (ID#: 16-9653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131815&isnumber=7131775
S. Choi, D. Zage, Y. R. Choe and B. Wasilow, “Physically Unclonable Digital ID,” Mobile Services (MS), 2015 IEEE International Conference on, New York, NY, 2015, pp. 105-111. doi: 10.1109/MobServ.2015.24
Abstract: The Center for Strategic and International Studies estimates the annual cost from cyber crime to be more than $400 billion. Most notable are the recent digital identity thefts that compromised millions of accounts. These attacks emphasize the security problems of using clonable static information. One possible solution is the use of a physical device known as a Physically Unclonable Function (PUF). PUFs can be used to create encryption keys, generate random numbers, or authenticate devices. While the concept shows promise, current PUF implementations are inherently problematic: inconsistent behavior, expensive, susceptible to modeling attacks, and permanent. Therefore, we propose a new solution by which an unclonable, dynamic digital identity is created between two communication endpoints such as mobile devices. This Physically Unclonable Digital ID (PUDID) is created by injecting a data scrambling PUF device at the data origin point that corresponds to a unique and matching descrambler/hardware authentication at the receiving end. This device is designed using macroscopic, intentional anomalies, making them inexpensive to produce. PUDID is resistant to cryptanalysis due to the separation of the challenge response pair and a series of hash functions. PUDID is also unique in that by combining the PUF device identity with a dynamic human identity, we can create true two-factor authentication. We also propose an alternative solution that eliminates the need for a PUF mechanism altogether by combining tamper resistant capabilities with a series of hash functions. This tamper resistant device, referred to as a Quasi-PUDID (Q-PUDID), modifies input data, using a black-box mechanism, in an unpredictable way. By mimicking PUF attributes, Q-PUDID is able to avoid traditional PUF challenges thereby providing high-performing physical identity assurance with or without a low performing PUF mechanism. Three different application scenarios with mobile devices for PUDID and Q-PUDID have been analyzed to show their unique advantages over traditional PUFs and outline the potential for placement in a host of applications.
Keywords: authorisation; cryptography; random number generation; PUF; Q-PUDID; center for strategic and international studies; clonable static information; cryptanalysis; descrambler-hardware authentication; device authentication; digital identity thefts; dynamic human identity; encryption keys; hash functions; physically unclonable digital ID; physically unclonable function; quasi-PUDID; random number generation; two-factor authentication; Authentication; Cryptography; Immune system; Optical imaging; Optical sensors; Servers; access control; authentication; biometrics; cloning; computer security; cyber security; digital signatures; identification of persons; identity management systems; mobile hardware security (ID#: 16-9654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226678&isnumber=7226653
K. Gai, M. Qiu, L. C. Chen and M. Liu, “Electronic Health Record Error Prevention Approach Using Ontology in Big Data,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 752-757. doi: 10.1109/HPCC-CSS-ICESS.2015.168
Abstract: Electronic Health Record (EHR) systems have been playing a dramatically important role in tele-health domains. One of the major benefits of using EHR systems is assisting physicians in gaining patients' healthcare information and shortening the medical decision-making process. However, physicians' inputs still have a great impact on decisions that cannot be checked by EHR systems. This consequence can be influenced by human behaviors or physicians' knowledge structures. An efficient approach for alerting physicians to unusual decisions is an urgent requirement for current EHR systems. This paper proposes a schema using ontology in big data to generate an alerting mechanism that assists physicians in making a proper medical diagnosis. The proposed model is the Ontology-based EHR Error Prevention Model (OEHR-EPM), which is implemented by a proposed algorithm, the Error Prevention Adjustment Algorithm (EPAA). The ontological approach uses Protege to represent the knowledge-based ontology. The proposed schema has been examined in our experiments, and the experimental results show that it achieves a high accuracy rate and acceptable operating-time performance.
Keywords: Big Data; decision making; electronic health records; health care; ontologies (artificial intelligence); Big data; EHR system; EPAA; OEHR-EPM; electronic health record error prevention approach; error prevention adjustment algorithm; healthcare information; knowledge-based ontology; medical decision making; medical diagnosis; ontology-based EHR error prevention model; Algorithm design and analysis; Diseases; Electronic medical records; Medical diagnostic imaging; Ontologies; Electronic health records; big data; cloud computing; error prevention; ontology (ID#: 16-9655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336248&isnumber=7336120
J. David Schaffer, “Evolving Spiking Neural Networks: A Novel Growth Algorithm Corrects the Teacher,” Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on, Verona, NY, 2015, pp. 1-8. doi: 10.1109/CISDA.2015.7208630
Abstract: Spiking neural networks (SNNs) have generated considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome length for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. In experiments, the algorithm discovered SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. On a second task, a sequence detector, several related discriminating designs were found, all made “errors” in that they fired when input spikes were simultaneous (i.e. not strictly in sequence), but not when they were out of sequence. They also fired when the sequence was too close for the teacher to have declared they were in sequence. That is, evolution produced these behaviors even though it was not explicitly rewarded for doing so. We are optimistic that this technology might be scaled up to produce robust SNN designs that humans would be hard pressed to produce.
Keywords: brain; genetic algorithms; genetics; neural nets; topology; Neumann machine; SNN; algorithm grow O(n); brains; central pattern generator; evolutionary computation; gene-driven network growth algorithm; genetic algorithm; genome length; network topology; robust spike bursting behavior; spiking neural network; Algorithm design and analysis; Bioinformatics; Biological neural networks; Buildings; Design methodology; Genomics; Robustness; Genetic algorithms; noise robustness; sequence detector; spiking neural networks; tonic burster; topology growth algorithm (ID#: 16-9656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208630&isnumber=7208613
Shraddha G. Mhatre, Satishkumar Varma and Rupali Nikhare, “Visual Surveillance Using Absolute Difference Motion Detection,” Technologies for Sustainable Development (ICTSD), 2015 International Conference on, Mumbai, 2015, pp. 1-5. doi: 10.1109/ICTSD.2015.7095848
Abstract: Surveillance is the monitoring of behavior, activities, or other changing information, usually of people, for the purpose of influencing, managing, directing, or protecting them. As security becomes the primary concern of society, having a security system is becoming a major requirement. Video surveillance plays a vital role in security systems. This paper describes the ability to recognise objects and humans, and to describe their actions and interactions, from information acquired by sensors using the absolute difference motion detection technique. Real-time implementation is achieved by using a Global System for Mobile Communication (GSM) modem for SMS (Short Message Service) notification. The tracking and recognition ability of the visual device was implemented using OpenCV for displaying an output. The detected object's motion is captured and stored on an HDD.
Keywords: cellular radio; disc drives; electronic messaging; hard discs; image motion analysis; image recognition; image sensors; modems; object detection; object tracking; security; video surveillance; GSM modem; Global System for Mobile Communication; HDD; OpenCV; SMS notification; absolute difference motion detection technique; hard disc drives; security system; short message service; visual device; visual surveillance; Cameras; GSM; Modems; Motion detection; Noise; Surveillance; Tracking; GSM system; Motion detection methods; Visual surveillance; Web Cam/ External camera (ID#: 16-9657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095848&isnumber=7095833
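The absolute difference motion detection technique described above reduces to a simple per-pixel computation: subtract consecutive frames, threshold the differences, and flag motion when enough pixels changed. The sketch below uses plain nested lists as stand-in grayscale frames; a real system would use OpenCV (e.g., `cv2.absdiff` followed by thresholding). The threshold and pixel-count values are illustrative.

```python
# Absolute-difference motion detection on two "frames" (nested lists of
# grayscale intensities).

def detect_motion(prev_frame, curr_frame, threshold=25, min_changed=5):
    """Return True if at least min_changed pixels differ by more than threshold."""
    changed = 0
    for row_a, row_b in zip(prev_frame, curr_frame):
        for a, b in zip(row_a, row_b):
            if abs(a - b) > threshold:  # absolute difference per pixel
                changed += 1
    return changed >= min_changed

# A static 4x4 scene vs. one where a bright blob appeared (illustrative):
still = [[10] * 4 for _ in range(4)]
moved = [row[:] for row in still]
for r in range(2):
    for c in range(3):
        moved[r][c] = 200  # 6 pixels jump in brightness
```

In the paper's pipeline, a positive detection would then trigger the GSM/SMS notification and storage steps.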
P. A. Legg, O. Buckley, M. Goldsmith and S. Creese, “Caught in the Act of an Insider Attack: Detection and Assessment of Insider Threat,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7446229
Abstract: The greatest asset that any organisation has is its people, but they may also be the greatest threat. Those within the organisation may have authorised access to vast amounts of sensitive company records that are essential for maintaining competitiveness and market position, and knowledge of information services and procedures that are crucial for daily operations. In many cases, those who have such access do indeed require it in order to conduct their expected workload. However, should an individual choose to act against the organisation, then with their privileged access and extensive knowledge they are well positioned to cause serious damage. Insider threat is becoming a serious and increasing concern for many organisations, with those who have fallen victim to such attacks suffering significant damage, both financial and reputational. It is clear, then, that there is a desperate need for more effective tools for detecting the presence of insider threats and analyzing the potential of threats before they escalate. We propose Corporate Insider Threat Detection (CITD), an anomaly detection system that is the result of a multi-disciplinary research project that incorporates technical and behavioural activities to assess the threat posed by individuals. The system identifies user and role-based profiles, and measures how users deviate from their observed behaviours to assess the potential threat that a series of activities may pose. In this paper, we present an overview of the system and describe the concept of operations and the practicalities of deploying it. We show how the system can be utilised for unsupervised detection, and also how the human analyst can engage to provide an active learning feedback loop. By adopting an accept-or-reject scheme, the analyst is capable of refining the underlying detection model to better support their decision-making process and significantly reduce the false positive rate.
Keywords: business data processing; learning (artificial intelligence); security of data; CITD; active learning feedback loop; anomaly detection system; authorised access; corporate insider threat detection; decision making process; insider attack; multidisciplinary research project; sensitive company records; unsupervised detection; Analytical models; Business; Electronic mail; Feature extraction; Libraries; Measurement; Media (ID#: 16-9658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7446229&isnumber=7190491
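The profile-deviation idea in the CITD abstract above (measuring how far a user's activity strays from their observed baseline) can be illustrated with a simple z-score check. This is a generic sketch, not the paper's detection model; the feature (daily file accesses) and the threshold are made-up assumptions.

```python
import math

# Flag a day as anomalous when the user's activity count deviates from their
# historical baseline by more than a few standard deviations.

def deviation_score(history, today):
    """z-score of today's count against the user's historical mean/stddev."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = math.sqrt(var)
    return 0.0 if std == 0 else (today - mean) / std

def is_anomalous(history, today, z_threshold=3.0):
    return abs(deviation_score(history, today)) > z_threshold

# A user who normally touches about 20 files a day (illustrative counts):
baseline = [18, 22, 20, 19, 21, 20, 20]
```

An analyst feedback loop of the kind the paper describes would then tune the threshold (or the whole model) from accepted and rejected alerts.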
S. Biradar, S. B. Malipatil and C. Naikodi, “Releasing Energy of Compromised Nodes in a Secured Heterogeneous Ad-Hoc Network (MANETs),” Advanced Computing and Communication Systems, 2015 International Conference on, Coimbatore, 2015, pp. 1-6. doi: 10.1109/ICACCS.2015.7324103
Abstract: Heterogeneous nodes in a Mobile Ad Hoc Network (MANET) have very constrained resources such as memory, bandwidth, CPU speed, and battery life. Here, heterogeneous nodes means that all or some nodes have a variety of functionality. Despite highly secured nodes and security algorithms in MANETs, honest nodes can at times be accessed by fraudulent or malicious nodes, or simply attacked by cracking security walls; in rare cases a node itself can turn into a malicious node or exhibit abnormal behaviour. Such scenarios are hard to tune for, so a human may not be able to identify and catch a fraudulent, malicious, or turned node in time to avoid adversarial effects or the misuse of the node's data for other purposes, which is hazardous in cases such as border monitoring. In this novel approach, we address such scenarios by discharging the energy of the heterogeneous node, one of the valuable resources of MANETs, thereby retiring or invalidating the node so that it can no longer be part of genuine communication.
Keywords: mobile ad hoc networks; resource allocation; telecommunication power management; telecommunication security; Border Monitoring; CPU speed; battery life; constrained resources; fraud-malicious nodes; heterogeneous nodes; mobile ad hoc network; secured heterogeneous ad-hoc network; security algorithms; Ad hoc networks; Batteries; Mathematical model; Mobile computing Receivers; Routing; Security; MANET; genuine node; invalidating; malicious node (ID#: 16-9659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324103&isnumber=7324014
G. Bottazzi and G. F. Italiano, “Fast Mining of Large-Scale Logs for Botnet Detection: A Field Study,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 1989-1996. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.295
Abstract: Botnets are considered one of the most dangerous species of network-based attack today because they involve the use of very large coordinated groups of hosts simultaneously. The behavioral analysis of computer networks is at the basis of modern botnet detection methods, in order to intercept traffic generated by malwares for which signatures do not exist yet. Defining a pattern of features to be placed at the basis of behavioral analysis puts the emphasis on the quantity and quality of information to be captured and used to mark data streams as normal or abnormal. The problem is even more evident if we consider extensive computer networks or clouds. With the present paper we intend to show how heuristics applied to large-scale proxy logs, considering a typical phase of the botnet life cycle such as the search for C&C servers through AGDs (Algorithmically Generated Domains), may provide effective and extremely rapid results. The present work introduces some novel paradigms. The first is that some elements of the supply chain of botnets could be completed without any interaction with the Internet, mostly in the presence of wide computer networks and/or clouds. The second is that behind a large number of workstations there are usually “human beings”, and it is unlikely that their behaviors will cause marked changes in the interaction with the Internet within a fairly narrow time frame. Finally, AGDs currently exhibit common lexical features that can be detected quickly and without using any blacklist or whitelist.
Keywords: cloud computing; computer network security; data mining; digital signatures; invasive software; AGD; C and C Servers; Internet; abnormal data streams; algorithmically generated domains; botnet detection methods; botnet life cycle; computer network behavioral analysis; feature pattern; information quality; information quantity; large-scale proxy log mining; malwares; network-based attack; normal data streams; workstations; Cloud computing; Data mining; Feature extraction; Malware; Servers; botnet; heuristics; logs; mining; proxy (ID#: 16-9660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363341&isnumber=7362962
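The abstract above argues that AGDs expose common lexical features detectable without any black/white list. A minimal Python sketch of one such heuristic, character-level Shannon entropy, is shown below; the example domains are invented, and this illustrates only the general idea, not the authors' algorithm.

```python
import math
from collections import Counter

def shannon_entropy(domain):
    """Character-level Shannon entropy of a domain label.

    Algorithmically generated names tend to look random, so their
    character distribution is flatter and their entropy higher.
    """
    counts = Counter(domain)
    n = len(domain)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical examples: a human-registered name vs. a random-looking AGD.
legit = shannon_entropy("google")
agd = shannon_entropy("xq9vk2tz7bfh")
```

In a proxy-log setting, a detector of this kind would flag hosts that resolve many high-entropy, never-seen-before domains in a short window.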
D. Prochazkova, J. Prochazka, Z. Prochazka, H. Patakova and V. Strymplova, “System Approach to Study of Traffic Accidents with Hazardous Substances Presence,” Smart Cities Symposium Prague (SCSP), 2015, Prague, 2015, pp. 1-8. doi: 10.1109/SCSP.2015.7181553
Abstract: Traffic accidents involving hazardous substances occur during transportation on roads, railroads, rivers, seas, oceans and in the air. Contributing to the origination of such accidents are items such as: vehicle design, traffic speed, roadway design, the environment around the roadway, skill and defects in driver behavior, and also the properties of the shipped hazardous substances. On the basis of the integral safety concept, the considered accidents are treated as mobile sources of risk. The paper contains the results of research obtained by critical analysis of the impacts of relevant accidents in the world and in the Czech Republic, and proposals of measures for upgrading the safety of the considered shipping that improve the protection of humans from hazardous substances.
Keywords: accidents; design engineering; hazards; road safety; road traffic; transportation; Czech Republic; hazardous substances; human protection; integral safety concept; rail roads; roadway design; system approach; traffic accidents; traffic speed; transportation; vehicle design; Accidents; Contamination; Indexes; Pediatrics; Roads; critical infrastructure; human security; impacts; safety (ID#: 16-9661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181553&isnumber=7181533
A. Gandhamal and S. Talbar, “Evaluation of Background Subtraction Algorithms for Object Extraction,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7087065
Abstract: There is an increasing need for video surveillance applications. Intelligent video surveillance (IVS) includes public safety and security applications, including authenticity control, crowd flow direction and crowd analysis, human behaviour detection and analysis, etc. The critical part of an IVS system is proper foreground estimation using background subtraction algorithms. This is a challenging task due to variations in illumination, background motion due to cluttering noise such as swaying trees and flowing water, and the noise that slow-moving objects introduce into the estimated background. We have concentrated precisely on such challenges. The purpose of this evaluation is to give an overview and categorization of the approaches based on performance measures such as Precision, Recall, F measure (F1), Similarity, Matching Index and Average Classification Error. The available techniques are also compared on the basis of computational complexity in terms of Big-O, along with their limitations, for the improvement of the efficiency of background subtraction algorithms.
Keywords: object detection; video surveillance; IVS system; background subtraction algorithms; foreground estimation; intelligent video surveillance; object extraction; Algorithm design and analysis; Brightness; Discrete cosine transforms; Estimation; Indexes; Measurement; Standards; Background Estimation; Background Subtraction; Object Segmentation; Video Surveillance (ID#: 16-9662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087065&isnumber=7086957
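The survey above ranks background subtraction algorithms by Precision, Recall and F1, among other measures. As a quick reference, these metrics can be computed pixel-wise from a predicted and a ground-truth binary foreground mask; the toy 1-D masks below are hypothetical.

```python
def mask_metrics(pred, truth):
    """Pixel-wise precision, recall and F1 for a binary foreground mask."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # foreground hit
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # false alarm
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)    # missed pixel
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical flattened masks: 1 = foreground pixel.
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
p, r, f1 = mask_metrics(pred, truth)
```

For real frames the same counts are taken over the whole 2-D mask; only the indexing changes.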
J. Neel et al., “Big RF for Homeland Security Applications,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225294
Abstract: As homeland security network deployments evolve to rely on increasingly large amounts of data from a growing variety of data sources, the ability to synthesize actionable information will become progressively more challenging. A similar problem is seen in the Information Technology (IT) domain, which is pursuing Big Data techniques to gain new insights from the relationships among the mountains of data. We believe that by applying the Big Data lessons learned in the IT world to homeland security networking and electromagnetic spectrum (EMS) problems (an application that we call “Big RF”), networks can be made more effective and efficient, commanders can gain new understanding of behaviors, problems can be identified and rectified more quickly, and many complex network management problems currently requiring human intervention can be automated. This paper examines the parallels between Big Data problems and emerging cognitive radio and related wireless applications, appropriate Big Data tools for Big RF, new Big RF applications for homeland security networks, and other developments needed to enable warfighters, first responders, network managers, and cognitive radios to maximize the capabilities offered by Big Data applied to RF domain problems.
Keywords: Big Data; cognitive radio; national security; Big Data problems; Big RF applications; first responders; homeland security applications; homeland security network deployments; network managers; warfighters; Databases; Electromagnetics; Facebook; NASA; Radio frequency; Big Data; Big RF; Cognitive Radio; Homeland Security; REM (ID#: 16-9663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225294&isnumber=7190491
M. Alaskar, S. Vodanovich and K. N. Shen, “Evolvement of Information Security Research on Employees’ Behavior: A Systematic Review and Future Direction,” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 4241-4250. doi: 10.1109/HICSS.2015.508
Abstract: Information Security (IS) is one of the biggest concerns for many organizations. This concern has led many to focus huge efforts on studying different IS areas. One of these critical areas is the human aspect, where investigation of employees' behaviors has emerged as an important topic. In this paper, we conduct a systematic review of all empirical studies published on this topic. The review highlights the theoretical and methodological development and the dissemination of related empirical studies in academic journals throughout the years. At the end of the review, future research considerations are discussed and shared.
Keywords: educational administrative data processing; personnel; publishing; security of data; academic journals; employee behavior; information security; Ethics; Human factors; Information security; Organizations; Systematics (ID#: 16-9664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070327&isnumber=7069647
N. A. Zanjani, G. Lilis, G. Conus and M. Kayal, “Energy Book for Buildings: Occupants Incorporation in Energy Efficiency of Buildings,” Smart Cities and Green ICT Systems (SMARTGREENS), 2015 International Conference on, Lisbon, 2015, pp. 1-6. doi: (not provided)
Abstract: This paper addresses a bottom-up approach for energy management in buildings. Future smart cities will need smart citizens, so developing an interface connecting humans to their energy usage becomes a necessity. The first goal is to give a touch of energy to occupants' daily behaviours and activities, making them aware of the consequences of their decisions in terms of energy consumption, cost and carbon footprint. The second is to allow people to directly interact with and control their living spaces, that is, to contribute individually to their feeling of comfort. Finally, a software solution to keep track of all personal energy-related events is suggested and its possible features are explained.
Keywords: air pollution; building management systems; buildings (structures); energy conservation; energy consumption; energy management systems; smart cities; bottom-up approach; building energy efficiency; building energy management; carbon footprint; energy booking; energy consumption cost; energy usage; future smart cities; human comfort; occupant incorporation; smart citizens; Buildings; Energy consumption; Energy management; Monitoring; Security; Software; Temperature measurement; BEMS; Building Energy Management System; HBI; Human-Building Interactions; Human-Building Interface; Smart Buildings; Smart Cities; Smart Occupants (ID#: 16-9665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7297959&isnumber=7297901
D. Petters and E. Waters, “Modelling Emotional Attachment: An Integrative Framework for Architectures and Scenarios,” Neural Networks (IJCNN), 2015 International Joint Conference on, Killarney, 2015, pp. 1-8. doi: 10.1109/IJCNN.2015.7280431
Abstract: Humans possess a strong innate predisposition to emotionally attach to familiar people around them who provide physical or emotional security. Attachment Theory describes and explains diverse phenomena related to this predisposition, including: infants using their carers as secure bases from which to explore, and havens of safety to return to when tired or anxious; the development of attachment patterns over ontogenetic and phylogenetic development; and emotional responses to separation and loss throughout the lifespan. This paper proposes that one way for computational modelling to integrate these phenomena is to organise them within temporally nested scenarios, with moment-to-moment phenomena organised within ontogenetic and phylogenetic sequences. A number of existing agent-based models and robotic attachment simulations capture attachment behaviour, but individual simulations created with different tools and modelling approaches typically do not integrate easily with each other. Two ways to better integrate attachment models are proposed. First, a number of simulations are described that have been created with the same agent-based modelling toolkit, thus showing that moment-to-moment secure-base behaviour and the development of individual differences in attachment security can be simulated with closely related architectural designs. Secondly, an integrative modelling approach is proposed where the evaluation of, and comparison between, attachment models is guided by reference to a shared conceptual framework for architectures provided by the CogAff schema. This approach can integrate a broad range of emotional processes including: the formation of a set of richer internal representations; and loss of control that can occur in emotional episodes.
Keywords: psychology; CogAff schema; agent-based model; agent-based modelling toolkit; architectural design; attachment security; attachment theory; computational modelling; emotional security; integrative modelling approach; phylogenetic development; robotic attachment simulation; Bioinformatics; Biological system modeling; Computer architecture; Genomics; Phylogeny; Robots (ID#: 16-9666)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280431&isnumber=7280295
Tao Feng et al., “An Investigation on Touch Biometrics: Behavioral Factors on Screen Size, Physical Context and Application Context,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225318
Abstract: With increasing privacy concerns and security demands present within mobile devices, behavioral biometric solutions, such as touch-based user recognition, have been researched recently. However, several vital contextual behavior factors (i.e., screen size, physical and application context), and how those affect user identification performance, remain unaddressed in previous studies. In this paper we first introduce a context-aware mobile user recognition method. Then a comparative experiment to evaluate the impacts of these factors in relation to user identification performance is presented. Experimental results demonstrate that a user's touch screen usage behavior may be affected by different contextual behavior information. Furthermore, several interesting occurrences have been found in the results: (1) the screen size of a smartphone device changes the way a user touches and holds the device; a larger screen size provides more potential methods of interacting with the device and, in effect, higher user recognition accuracy as well; and (2) application context and physical activity context can aid in achieving higher accuracy for user recognition.
Keywords: behavioural sciences; biometrics (access control); data privacy; human computer interaction; mobile computing; social aspects of automation; touch sensitive screens; application context; behavioral biometric solutions; behavioral factors; context-aware mobile user recognition method; contextual behavior factors; contextual behavior information; mobile devices; physical activity context; privacy concerns; security demands; smartphone device screen size; touch based user recognition; touch biometrics; touch screen usage behavior; user identification performance; Authentication; Biometrics (access control); Context; Feature extraction; Mobile communication; Mobile handsets; Performance evaluation (ID#: 16-9667)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225318&isnumber=7190491
J. Chen, F. Shen, D. Z. Chen and P. J. Flynn, “Iris Recognition Based on Human-Interpretable Features,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-6. doi: 10.1109/ISBA.2015.7126352
Abstract: The iris is a stable biometric that has been widely used for human recognition in various applications. However, official deployment of the iris in forensics has not been reported. One of the main reasons is that the results of current iris recognition techniques are hard for examiners to visually inspect. To further promote the maturity of iris recognition in forensics, one way is to make the similarity between irises visualizable and interpretable. Recently, a human-in-the-loop iris recognition system was developed, based on detecting and matching iris crypts. Building on this framework, we propose a new approach for detecting and matching iris crypts automatically. Our detection method is able to capture iris crypts of various sizes. Our matching scheme is designed to handle potential topological changes in the detection of the same crypt in different acquisitions. Our approach outperforms the known visible-feature-based iris recognition method on two different datasets, with an over 19% higher rank-one hit rate in identification and an over 46% lower equal error rate in verification.
Keywords: feature extraction; image capture; image matching; iris recognition; object detection; topology; biometrics; equal error rate; forensics; hit rate; human recognition; human-in-the-loop iris recognition system; human-interpretable features; iris crypt detection; iris crypt matching; iris crypts capture; topological changes; Cryptography; Feature extraction; Forensics; Gray-scale; Image segmentation; Iris; Iris recognition (ID#: 16-9668)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126352&isnumber=7126341
M. Mitchell, R. Patidar, M. Saini, P. Singh, A. I. Wang and P. Reiher, “Mobile Usage Patterns and Privacy Implications,” Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, St. Louis, MO, 2015, pp. 457-462. doi: 10.1109/PERCOMW.2015.7134081
Abstract: Privacy is an important concern for mobile computing. Users might not understand the privacy implications of their actions and therefore not alter their behavior depending on where they move, when they do so, and who is in their surroundings. Since empirical data about the privacy behavior of users in mobile environments is limited, we conducted a survey study of ~600 users recruited from Florida State University and Craigslist. Major findings include: (1) People often exercise little caution preserving privacy in mobile computing environments; they perform similar computing tasks in public and private. (2) Privacy is orthogonal to trust; people tend to change their computing behavior more around people they know than strangers. (3) People underestimate the privacy threats of mobile apps, and comply with permission requests from apps more often than operating systems. (4) Users' understanding of privacy is different from that of the security community, suggesting opportunities for additional privacy studies.
Keywords: data privacy; human factors; mobile computing; operating systems (computers); Craigslist; Florida State University; empirical data; mobile applications; mobile computing environments; mobile usage patterns; operating systems; permission requests; privacy threats; security community; user computing behavior; users privacy behavior; Encryption; IEEE 802.11 Standards; Mobile communication; Mobile computing; Mobile handsets; Portable computers; Privacy; privacy; security (ID#: 16-9669)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134081&isnumber=7133953
Weihui Zhu, Xiang Fu and Weihong Han, “Online Anomaly Detection on E-Commerce Based on Variable-Length Behavior Sequence,” 11th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM 2015), Shanghai, 2015, pp. 1-8. doi: 10.1049/cp.2015.0757
Abstract: User behavior-based anomaly detection is currently one of the major concerns of system security research. For e-commerce, this paper proposes an online anomaly detection method based on variable-length sequences of user behavior. The algorithm includes a training stage and a detection stage. In the training stage, we mainly use the variable-length sequences to represent the correlation between contiguous operations, and also the correlation between related items, which makes the representation ability of our model stronger. In the detection stage, considering that a legitimate user's behavior patterns may deviate from normal behavior patterns while an illegitimate user's behavior patterns may be consistent with normal behavior patterns for a short time, we use a windowed smoothing approach to keep such problems from affecting the result when the decision value is calculated. Meanwhile, we calculate the IDF-value of every pattern in the normal user behavior pattern database, and patterns whose IDF-value is below a threshold are ignored in the detection stage (the lower the IDF-value, the lower the degree of recognition). Experimental results show that our algorithm can detect anomalies effectively in real time, and can meet the needs of real-time processing in both accuracy and speed.
Keywords: electronic commerce; human computer interaction; security of data; e-commerce; online anomaly detection method; system security; variable-length behavior sequence; Anomaly Detection; Electronic Commerce; IDF; User Behavior; Variable-Length Sequence (ID#: 16-9670)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7446889&isnumber=7386275
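The IDF-based pruning described above, where patterns with IDF below a threshold are ignored because they are too common to discriminate between users, can be sketched as follows. The per-user pattern sets and the threshold are invented for illustration; the paper's actual variable-length sequence representation is richer.

```python
import math

def idf_filter(pattern_db, threshold):
    """Compute IDF for each behavior pattern over per-user pattern sets
    and keep only patterns with IDF at or above the threshold."""
    n_users = len(pattern_db)
    all_patterns = set().union(*pattern_db)
    # IDF = log(N / document frequency); a pattern every user exhibits gets IDF 0.
    idf = {p: math.log(n_users / sum(p in user for user in pattern_db))
           for p in all_patterns}
    kept = {p for p, v in idf.items() if v >= threshold}
    return kept, idf

# Hypothetical per-user sets of operation-sequence patterns.
users = [{"login>browse", "browse>buy"},
         {"login>browse", "browse>logout"},
         {"login>browse"}]
kept, idf = idf_filter(users, threshold=0.5)
```

The ubiquitous pattern ("login>browse", IDF 0) is dropped, while the rarer, more discriminative patterns survive.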
S. J. Elliott, K. O’Connor, E. Bartlow, J. J. Robertson and R. M. Guest, “Expanding the Human-Biometric Sensor Interaction Model to Identity Claim Scenarios,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-6. doi: 10.1109/ISBA.2015.7126362
Abstract: Biometric technologies represent a significant component of comprehensive digital identity solutions, and play an important role in crucial security tasks. These technologies support identification and authentication of individuals based on their physiological and behavioral characteristics. This has led many governmental agencies to choose biometrics as a supplement to existing identification schemes, most prominently ID cards and passports. Studies have shown that the success of biometric systems relies, in part, on how humans interact and accept such systems. In this paper, the authors build on previous work related to the Human-Biometric Sensor Interaction (HBSI) model and examine it with respect to the introduction of a token (e.g. an electronic passport or identity card) into the biometric system. The role of the imposter within an Identity Claim scenario has been integrated to expand the HBSI model into a full version, which is able to categorise potential False Claims and Attack Presentations.
Keywords: biometrics (access control); sensors; HBSI model; ID cards; attack presentations; behavioral characteristics; biometric technologies; digital identity solutions; false claims; governmental agencies; human-biometric sensor interaction model; identity claim scenarios; individual authentication; individual identification; passports; physiological characteristics; security task; token; Adaptation models; Authentication; Biological system modeling; Fingerprint recognition; Measurement; Usability (ID#: 16-9671)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126362&isnumber=7126341
A. Al-Nemrat and C. Benzaid, “Cybercrime Profiling: Decision-Tree Induction, Examining Perceptions of Internet Risk and Cybercrime Victimisation,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 1380-1385. doi: 10.1109/Trustcom.2015.534
Abstract: The Internet can be a double-edged sword. While offering a range of benefits, it also provides an opportunity for criminals to extend their work to areas previously unimagined. Every country faces the same challenges regarding the fight against cybercrime and how to effectively promote security for its citizens and organisations. The main aim of this study is to introduce and apply a data-mining technique (decision-tree) to cybercrime profiling. This paper also aims to draw attention to the growing number of cybercrime victims, and the relationship between online behaviour and computer victimisation. This study used second-hand data collected for a study carried out using Jordan as a case study to investigate whether or not individuals effectively protect themselves against cybercrime, and to examine how the perception of law influences actions towards incidents of cybercrime. In Jordan, cybercafes have become culturally acceptable alternatives for individuals wishing to access the Internet in private, away from the prying eyes of society.
Keywords: Internet; computer crime; data mining; decision trees; human computer interaction; law; Internet risk perceptions; Jordan; computer victimisation; cybercrime profiling; cybercrime victimisation; data-mining technique; decision-tree induction; law perception; online behaviour; Additives; Complexity theory; Computer crime; Decision trees; Electronic mail; Classification tree; Cybercrime profiling; Data mining; Digital forensics (ID#: 16-9672)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345442&isnumber=7345233
J. S. Wu, W. C. Lin, C. T. Lin and T. E. Wei, “Smartphone Continuous Authentication Based on Keystroke and Gesture Profiling,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 191-197. doi: 10.1109/CCST.2015.7389681
Abstract: Recently, smartphones have become increasingly popular, whereas data leakage continues to be a serious problem for many large organizations. Consequently, smartphone applications containing sensitive personal or company data are at risk when targeted by attackers. Continuous and passive authentication is a popular scheme for secretly classifying users' identities based on their own unique touch motions (i.e., keystrokes and gestures). However, previous methods are inadequate when classifying users' singular touch motions. In this paper, we propose a novel continuous authentication method. The proposed method not only profiles behavioral biometrics from keystrokes and gestures, it also acquires the specific properties of a one-touch motion during the user's interaction with the smartphone. We demonstrate that the manner by which a user uses the touchscreen - that is, the specific location touched on the screen, the drift from when a finger moves up and down, the area touched, and the pressure used - reflects unique physical and behavioral biometrics. Moreover, the speed of the Gesture Segment (GS) is defined to extract a meaningful velocity segment. Experiments conducted to evaluate the proposed method for combining keystroke and gesture behavior demonstrate its effectiveness and accuracy.
Keywords: gesture recognition; human computer interaction; message authentication; pattern classification; smart phones; touch sensitive screens; GS; gesture profiling; gesture segment; keystroke profiling; one-touch motion; smart phone continuous authentication; touch screen; user identity classification; user interaction; Authentication; Feature extraction; Iris recognition; Mobile communication; Mobile handsets; data leakage prevention; machine learning; sensitive data protection; smartphone movement behavior (ID#: 16-9673)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389681&isnumber=7389647
W. R. Flores, H. Holm, M. Ekstedt and M. Nohlberg, “Investigating the Correlation Between Intention and Action in the Context of Social Engineering in Two Different National Cultures,” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 3508-3517. doi: 10.1109/HICSS.2015.422
Abstract: In this paper, we shed light on the intention-action relationship in the context of external behavioral information security threats. Specifically, external threats caused by employees' social engineering security actions were examined. This was done by examining the correlation between employees' reported intention to resist social engineering and their self-reported actions in hypothetical scenarios as well as their observed action in a phishing experiment. Empirical studies including 1787 employees pertaining to six different organizations located in Sweden and the USA laid the foundation for the statistical analysis. The results suggest that employees' intention to resist social engineering has a significant positive correlation of low to medium strength with both self-reported action and observed action. Furthermore, a significant positive correlation between social engineering actions captured through written scenarios and a phishing experiment was identified. Because data were collected from employees from two different national cultures, an exploration of a potential moderating effect based on national culture was also performed. Based on this analysis we identified that the examined correlations differ between Swedish and US employees. The findings make a methodological contribution to survey studies in the information security field, showing that intention and self-reported behavior using written scenarios can be used as proxies for observed behavior in certain cultural contexts rather than others. Hence, the results support managers operating in a global environment in assessing external behavioral information security threats in their organization.
Keywords: behavioural sciences computing; cultural aspects; human factors; personnel; security of data; social sciences computing; statistical analysis; Sweden; Swedish employees; US employees; USA; employee intention; employee social engineering security actions; external behavioral information security threats; information security field; intention-action correlation; intention-action relationship; national cultures; phishing experiment; self-reported action; self-reported behavior; Context; Correlation; Cultural differences; Information security; Organizations; Resists (ID#: 16-9674)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070237&isnumber=7069647
G. Shikkenawis and S. K. Mitra, “Locality Preserving Discriminant Projection,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-6. doi: 10.1109/ISBA.2015.7126365
Abstract: The face is the most powerful biometric as far as the human recognition system is concerned, which is not the case for machine vision. Face recognition by machine remains incomplete due to adverse, unconstrained environments. Of the several attempts made in the past few decades, subspace-based methods have appeared to be more accurate and robust. In the present proposal, a new subspace-based method is developed. It preserves the local geometry of data points, here face images. In particular, it keeps neighboring points from the same class close to each other and those from different classes far apart in the subspace. The first part can be seen as a variant of locality preserving projection (LPP), and the combination of both parts is referred to as locality preserving discriminant projection (LPDP). The performance of the proposed subspace-based approach is compared with a few other contemporary approaches on some benchmark databases for face recognition. The current method seems to perform significantly better.
Keywords: biometrics (access control); face recognition; geometry; visual databases; LPDP; LPP; benchmark databases; contemporary approach; data point local geometry; face image; human recognition system; locality preserving discriminant projection; subspace based method; Benchmark testing; Databases; Error analysis; Face; Face recognition; Lighting; Training (ID#: 16-9675)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126365&isnumber=7126341
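Methods in the LPP family, including the LPDP variant above, start from a neighborhood affinity matrix that encodes the local geometry to be preserved. The heat-kernel construction below is a generic sketch of that first step (the `sigma` and `knn` parameters and the toy points are illustrative); it is not the paper's full LPDP formulation, which additionally pushes points of different classes apart.

```python
import math

def lpp_weights(points, sigma=1.0, knn=1):
    """Heat-kernel affinity matrix used by LPP-style methods:
    W[i][j] = exp(-||xi - xj||^2 / sigma) if i and j are k-nearest
    neighbours of each other (in either direction), else 0."""
    n = len(points)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    # k-nearest-neighbour index lists, excluding the point itself.
    nn = [sorted(range(n), key=lambda j: dist2(points[i], points[j]))[1:knn + 1]
          for i in range(n)]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j in nn[i] or i in nn[j]:
                W[i][j] = math.exp(-dist2(points[i], points[j]) / sigma)
    return W

# Two tight clusters: affinities are strong within clusters, zero between them.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
W = lpp_weights(pts, sigma=1.0, knn=1)
```

The projection itself is then obtained from a generalized eigenproblem on the graph Laplacian of W, which this sketch omits.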
A. Farooq, J. Isoaho, S. Virtanen and J. Isoaho, “Information Security Awareness in Educational Institution: An Analysis of Students’ Individual Factors,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 352-359. doi: 10.1109/Trustcom.2015.394
Abstract: The purpose of this paper is to study information security awareness (ISA) among university students and further analyze how different individual factors impact it. Through a descriptive survey approach, a questionnaire consisting of 30 items was circulated in our university, resulting in 614 usable responses. Here ISA is considered a combination of knowledge and behavior. Factors such as age, gender, level of education, field of study, nationality, area of living, working experience and ISA training are considered individual factors. The perceived ISA level among the students is also examined. For the overall study, the arithmetic mean and standard deviation are used. For analyzing the effect of different individual factors, Pearson's coefficient of correlation is computed. Gender, living place and information-security-related training have a statistically significant correlation with attained ISA level, whereas factors such as age, nationality, discipline and level of education have a statistically insignificant correlation with attained ISA level. Furthermore, gender and training have a statistically significant correlation with perceived ISA as well as with the dimensions of ISA, that is, knowledge and behavior. Factors such as age and experience have a significant correlation with perceived ISA, whereas living area correlates with knowledge only.
Keywords: age issues; educational administrative data processing; educational institutions; gender issues; human factors; security of data; statistical analysis; training; ISA training factor; age factor; area-of-living factor; education level factor; educational institution; field-of-study factor; gender factor; information assets; information security awareness; nationality factor; statistical significant correlation; students individual factors analysis; working experience factor; Context; Correlation; Information security; Information technology; Training; Age; Behavior; Demographic Factors; Educational Disciplines; Gender; Information Security Awareness; Knowledge; Miscellaneous Security Issues; Security; Threats (ID#: 16-9676)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345302&isnumber=7345233
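The factor analysis in the study above rests on Pearson's coefficient of correlation. As an illustration only (the survey data are not public, so the sample values and variable names below are hypothetical), a minimal sketch of the computation:

```python
import math

def pearson_r(x, y):
    """Pearson's coefficient of correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours of ISA training vs. attained awareness score
training = [0, 2, 4, 6, 8]
awareness = [55, 60, 68, 71, 80]
r = pearson_r(training, awareness)  # strongly positive for this toy data
```

A value of r near +1 or -1 would indicate the "statistically significant correlation" the abstract reports for gender, living place and training (significance itself would additionally require a p-value test against the sample size).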
F. Yao, S. Y. Yerima, B. Kang and S. Sezer, “Event-Driven Implicit Authentication for Mobile Access Control,” Next Generation Mobile Applications, Services and Technologies, 2015 9th International Conference on, Cambridge, 2015, pp. 248-255. doi: 10.1109/NGMAST.2015.47
Abstract: In order to protect user privacy on mobile devices, an event-driven implicit authentication scheme is proposed in this paper. Several methods of utilizing the scheme for recognizing legitimate user behavior are investigated. The investigated methods compute an aggregate score and a threshold in real-time to determine the trust level of the current user using real data derived from user interaction with the device. The proposed scheme is designed to: operate completely in the background, require minimal training period, enable high user recognition rate for implicit authentication, and prompt detection of abnormal activity that can be used to trigger explicitly authenticated access control. In this paper, we investigate threshold computation through standard deviation and EWMA (exponentially weighted moving average) based algorithms. The result of extensive experiments on user data collected over a period of several weeks from an Android phone indicates that our proposed approach is feasible and effective for lightweight real-time implicit authentication on mobile smartphones.
Keywords: authorisation; data privacy; human computer interaction; message authentication; mobile computing; mobile radio; moving average processes; telecommunication security; trusted computing; EWMA; abnormal activity detection; aggregate score; event-driven implicit authentication scheme; explicitly authenticated access control; exponentially weighted moving average based algorithms; legitimate user behavior recognition; mobile access control; mobile devices; standard deviation; threshold computation; trust level; user interaction; user privacy protection; user recognition rate; Aggregates; Authentication; Browsers; Context; History; Mobile handsets; Training; behavior-based authentication; implicit authentication; mobile access control (ID#: 16-9677)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373251&isnumber=7373199
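The scheme above computes an aggregate trust score and a threshold in real time, with EWMA as one of the investigated threshold algorithms. The sketch below illustrates the general EWMA idea only; the function name, parameters and flagging rule are our assumptions, not the paper's algorithm:

```python
import statistics

def detect_anomalies(scores, alpha=0.3, k=1.0):
    """EWMA-based anomaly flags over a stream of per-event trust scores.
    An event is flagged when its score drops more than k spread-units
    below the EWMA of the preceding events. Parameter names and values
    are illustrative assumptions, not taken from the paper."""
    spread = statistics.pstdev(scores) or 1.0  # simple global spread estimate
    ewma = scores[0]
    flags = [False]
    for s in scores[1:]:
        flags.append(s < ewma - k * spread)          # compare against history
        ewma = alpha * s + (1 - alpha) * ewma        # then update the average
    return flags
```

A flagged event would correspond to the "abnormal activity" that triggers explicit re-authentication in the proposed access-control scheme.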
A. K. Lim and C. Thuemmler, “Opportunities and Challenges of Internet-Based Health Interventions in the Future Internet,” Information Technology - New Generations (ITNG), 2015 12th International Conference on, Las Vegas, NV, 2015, pp. 567-573. doi: 10.1109/ITNG.2015.95
Abstract: Internet-based health interventions are behavioral treatments aimed at changing behaviors to promote healthy living and prevent disease and illness. This paper first discusses the benefits and effectiveness of Internet-based health interventions. It then explores the opportunities and challenges of Internet-based health interventions made possible by the Future Internet and emerging technologies. Identifying the psychological and social barriers can help to improve the delivery of healthcare interventions in a number of ways, including assuring privacy and security, building trust and promoting equal access. Addressing these barriers can ultimately lead to greater acceptance of new technologies and improved health outcomes.
Keywords: Internet; health care; human factors; Internet-based health interventions; behavioral treatments; future Internet; psychological barriers; social barriers; technology acceptance; Data mining; Diseases; Mobile communication; Privacy; Psychology; Future Internet; Internet of Everything; Internet of Things; Psychological and Social Barriers; eHealth (ID#: 16-9678)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113533&isnumber=7113432
Z. Sahnoune, E. Aïmeur, G. E. Haddad and R. Sokoudjou, “Watch Your Mobile Payment: An Empirical Study of Privacy Disclosure,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 934-941. doi: 10.1109/Trustcom.2015.467
Abstract: Using a smartphone as a payment device has become a highly attractive feature that is increasingly influencing user acceptance. Electronic wallets, near field communication, and mobile shopping applications are all incentives that push users to adopt m-payment. Hence, the sensitive data that already exist on everyone's smartphone are easily collated with their financial transaction details. In fact, misusing m-payment can be a real privacy threat. The existing privacy issues regarding m-payment are already numerous and can be caused by different factors. We investigate, through an empirical survey-based study, the different factors and their potential correlations and regression values. We identify three factors that directly influence privacy disclosure: the user's privacy concerns, his risk perception, and the appropriateness of the protection measures. These factors are impacted by indirect ones, which are linked to the characteristics of users and the technology, and to the behaviour of institutions and companies. In order to analyse the impact of each factor, we define a new research model for privacy disclosure based on several hypotheses. The study is mainly based on a five-item scale survey and on the modelling of structural equations. In addition to the impact estimations for each factor, our results indicate that privacy disclosure in m-payment is primarily caused by the “protection measure appropriateness,” which, in turn, is impacted by “the m-payment convenience.” We discuss in this paper the research model, the methodology, the findings and their significance.
Keywords: Internet; data privacy; human factors; mobile commerce; near-field communication; regression analysis; risk analysis; smart phones; electronic wallets; financial transaction details; m-payment; mobile payments; mobile shopping applications; near field communication; payment device; privacy disclosure; privacy threat; regression values; risk perception; smartphone; structural equation modelling; technology characteristics; user acceptance; user privacy concerns; Context; Data privacy; Mobile communication; Mobile handsets; Privacy; Security; Software; privacy concerns; privacy perception; privacy policies; structural equation modeling (ID#: 16-9679)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345375&isnumber=7345233
D. Rissacher and D. Galy, “Cardiac Radar for Biometric Identification Using Nearest Neighbour of Continuous Wavelet Transform Peaks,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-6. doi: 10.1109/ISBA.2015.7126356
Abstract: This work explores the use of cardiac data acquired by a 2.4 GHz radar system as a potential biometric identification tool. Monostatic and bistatic systems are used to record data from human subjects over two visits. Cardiac data is extracted from the radar recordings and an ensemble average is computed using ECG as a time reference. The Continuous Wavelet Transform is then computed to provide time-frequency analysis of the average radar cardiac cycle and a nearest neighbor technique is applied to demonstrate that a cardiac radar system has some promise as a biometric identification technology currently producing Rank-1 accuracy of 19% and Rank-5 accuracy of 42% over 26 subjects.
Keywords: biometrics (access control); electrocardiography; medical signal processing; time-frequency analysis; wavelet transforms; ECG; biometric identification; biometric identification tool; cardiac radar system; continuous wavelet transform; continuous wavelet transform peaks; nearest neighbour; radar system; time reference; time-frequency analysis; Accuracy; Continuous wavelet transforms; Electrocardiography; Feature extraction; Radar (ID#: 16-9680)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126356&isnumber=7126341
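The matching step in the cardiac-radar study is a nearest-neighbor comparison over continuous-wavelet-transform feature vectors. A minimal, self-contained sketch of that decision rule (the toy vectors below merely stand in for the CWT coefficients of each subject's averaged cardiac cycle; the names are ours):

```python
import math

def nearest_neighbor(probe, gallery):
    """Return the subject ID whose enrolled feature vector is closest
    (Euclidean distance) to the probe vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(gallery, key=lambda sid: dist(probe, gallery[sid]))

# Hypothetical enrolled templates
gallery = {"subj01": [0.2, 0.9, 0.4], "subj02": [0.8, 0.1, 0.5]}
match = nearest_neighbor([0.25, 0.85, 0.38], gallery)  # closest to subj01
```

Rank-1 accuracy, as reported in the abstract, is simply the fraction of probes for which this closest match is the true subject; Rank-5 counts a hit if the true subject is among the five closest.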
A. Aggarwal and P. Kumaraguru, “What They Do in Shadows: Twitter Underground Follower Market,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 93-100. doi: 10.1109/PST.2015.7232959
Abstract: Internet users and businesses are increasingly using online social networks (OSN) to drive audience traffic and increase their popularity. In order to boost social presence, OSN users need to increase the visibility and reach of their online profile, like - Facebook likes, Twitter followers, Instagram comments and Yelp reviews. For example, an increase in Twitter followers not only improves the audience reach of the user but also boosts the perceived social reputation and popularity. This has led to a scope for an underground market that provides followers, likes, comments, etc. via a network of fraudulent and compromised accounts and various collusion techniques. In this paper, we landscape the underground markets that provide Twitter followers by studying their basic building blocks - merchants, customers and phony followers. We characterize the services provided by merchants to understand their operational structure and market hierarchy. Twitter underground markets can operationalize using a premium monetary scheme or other incentivized freemium schemes. We find that the freemium market has an oligopoly structure with a few merchants being the market leaders. We also show that merchant popularity has no correlation with the quality of service provided by the merchant to its customers. Our findings also shed light on the characteristics and quality of market customers and of the phony followers provided by the underground market. We draw comparisons between legitimate users and phony followers, and identify key features that separate such users. With the help of these differentiating features, we build a supervised learning model to predict suspicious following behaviour with an accuracy of 89.2%.
Keywords: human factors; learning (artificial intelligence); oligopoly; social networking (online); Facebook likes; Instagram comments; OSN users; Twitter followers; Yelp reviews; customers; fraudulent network; incentivized freemium schemes; market hierarchy; market leaders; merchant popularity; oligopoly structure; online profile; online social networks; operational structure; perceived social popularity; perceived social reputation; phony followers; premium monetary scheme; quality of service; social presence; supervised learning model; suspicious following behaviour prediction; underground follower market; Business; Data collection; Facebook; Measurement; Media; Quality of service; Twitter (ID#: 16-9681)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232959&isnumber=7232940
R. Subramanian et al., “Orientation Invariant Gait Matching Algorithm Based on the Kabsch Alignment,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-8. doi: 10.1109/ISBA.2015.7126347
Abstract: Accelerometer and gyroscope sensors in smart phones capture the dynamics of human gait that can be matched to arrive at identity authentication measures of the person carrying the phone. Any such matching method has to take into account the reality that the phone may be placed at uncontrolled orientations with respect to the human body. In this paper, we present a novel orientation invariant gait matching algorithm based on the Kabsch alignment. The algorithm consists of simple, intuitive, yet robust methods for cycle splitting, aligning orientation, and comparing gait signals. We demonstrate the effectiveness of the method using a dataset from 101 subjects, with the phone placed in uncontrolled orientations in the holster and in the pocket, and collected on different days. We find that the orientation invariant gait algorithm results in a significant reduction in error: up to a 9% reduction in equal error rate, from 30.4% to 21.5% when comparing data captured on different days. On the McGill dataset from 20 subjects, which is the other dataset with orientation variation, we find a more pronounced effect; the identification rate increased from 67.5% to 96.5%. On the OU-ISIR data, which has data from 745 subjects, the equal error rates are as low as 6.3%, which is among the best reported in the literature.
Keywords: accelerometers; gait analysis; gyroscopes; image matching; smart phones; Kabsch alignment; McGill dataset; OU-ISIR data; accelerometer; cycle splitting; gait signal comparison; gyroscope sensors; human gait dynamics; identity authentication measures; orientation alignment; orientation invariant gait matching algorithm; Acceleration; Gravity; Gyroscopes; Intelligent sensors; Legged locomotion; Probes; Distribution Statement A: Approved for Public release; Distribution Unlimited (ID#: 16-9682)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126347&isnumber=7126341
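The Kabsch algorithm named in the title computes the least-squares rotation that aligns two point sets, which is what removes the unknown phone orientation before gait signals are compared. A self-contained sketch of the standard Kabsch formulation (not the authors' code; the synthetic data below is illustrative):

```python
import numpy as np

def kabsch_rotation(P, Q):
    """Optimal rotation matrix aligning point set P onto Q (rows are
    3-D samples), via SVD of the covariance matrix, with the usual
    reflection correction. Variable names are ours, not the paper's."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # avoid improper rotations (reflections)
    return Vt.T @ D @ U.T

# Rotate hypothetical accelerometer samples 90 degrees about z, then recover it
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).normal(size=(50, 3))
Q = P @ R_true.T
R_est = kabsch_rotation(P, Q)         # recovers R_true up to numerical error
```

In the gait setting, P and Q would be accelerometer samples from the probe and enrolled recordings; once rotated into a common frame, per-cycle signals can be compared directly.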
S. Intarasothonchun and W. Srimuang, “Improving Performance of Classification Intrusion Detection Model by Weighted Extreme Learning Using Behavior Analysis of the Attack,” 2015 International Computer Science and Engineering Conference (ICSEC), Chiang Mai, 2015, pp. 1-5. doi: 10.1109/ICSEC.2015.7401431
Abstract: This research aimed to develop a classification intrusion detection model using the Weighted ELM presented in [8]. An analysis of 42 attributes identified those related to each attack format, leaving only 13 attributes, which were used in the Weighted ELM working system to classify the various attack formats; the experimental results were compared with the SVM+GA [7] and Weighted ELM [8] techniques. The results showed that the new Weighted ELM was quite accurate in classifying every attack format. The presented method used the RBF kernel activation function with the trade-off constant C set to 2^2 = 4, giving validity values of Normal = 99.21%, DoS = 99.97%, U2R = 99.59%, R2L = 99.04% and Probing Attack = 99.13%, for an average validity value of 99.39%. Compared with the Weighted ELM in [8], the presented method improved on the former, raising the classification of R2L from 93.94% to 99.04% and of Probing Attack from 96.94% to 99.13%, while DoS and U2R showed slightly lower, though comparable, effectiveness.
Keywords: learning (artificial intelligence); pattern classification; security of data; RBF kernel activation function; SVM+GA; behavior analysis; classification intrusion detection model; probing attack; weighted ELM; weighted extreme learning method; High definition video; Intrusion Detection; Trade-off Constant C; Weighted ELM; behavior analysis (ID#: 16-9683)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7401431&isnumber=7401392
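The classifier above uses a Gaussian RBF kernel as its activation function. A minimal sketch of that kernel (the abstract fixes only the trade-off constant C = 2^2 = 4; the gamma value and function name below are illustrative assumptions):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel value between two feature vectors, the kind
    of activation used by the Weighted ELM classifier in the paper.
    gamma is an illustrative choice, not a value from the paper."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

# Identical records give kernel value 1; dissimilar ones decay toward 0
same = rbf_kernel([1.0, 0.0], [1.0, 0.0])
far = rbf_kernel([1.0, 0.0], [4.0, 4.0])
```

In a kernel ELM, these pairwise values over the 13 selected attributes form the kernel matrix, and C regularizes the solve for the output weights.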
M. A. E. Fadl, B. Abbey and K. S. Choi, “Effect of IT Trading Platform on Financial Risk-Taking and Portfolio Performance,” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 3298-3306. doi: 10.1109/HICSS.2015.398
Abstract: As a fast growing area in Finance, Information technology (IT) plays an important role in how traders trade online. Investigating whether online trading has a significant effect on financial returns and risks is central to this inquiry. This study, using perceived usefulness and satisfaction categories, addresses how the IT trading platform affects the trader's trading risk-taking behavior and stock portfolio performance. We examined two unique data sets: 2,726 proprietary online trading accounts and 178 professional investors' field survey. The results revealed that while the perceived usefulness category presented significant differences between the risk-taking groups and significant impact on stock portfolio performance, the satisfaction category showed no significant results.
Keywords: electronic commerce; human factors; investment; risk management; IT trading platform; financial returns; financial risk-taking; information technology; online trading; perceived usefulness category; professional investors; risk-taking groups; satisfaction category; stock portfolio performance; trading risk-taking behavior; Biological system modeling; Computers; Customer satisfaction; Finance; Portfolios; Security (ID#: 16-9684)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070213&isnumber=7069647
J. B. Fernando and K. Morikawa, “Improvement of Human Identification Accuracy by Wavelet of Peak-Aligned ECG,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-6. doi: 10.1109/ISBA.2015.7126358
Abstract: In this paper, a novel method of human identification using electrocardiogram (ECG) is proposed. In the method, while normalizing RR interval, in addition to normalized signal where time interval of P wave, Q wave, R wave, S wave relatively to R wave is unaligned, normalized signal where time interval of those peaks is aligned is also generated. Wavelet transform is then applied to both normalized signals and feature vector is extracted from their wavelet coefficients. ECG data are collected from 10 subjects using a pair of dry electrodes which are held by two fingers. Experiment results show that adding wavelet of peak-aligned ECG improves the classification accuracy, where the maximum accuracy is 100%, 97%, and 90% for data measured in more than 20 seconds, 5 seconds, and 3 seconds respectively.
Keywords: electrocardiography; feature extraction; medical signal processing; signal classification; wavelet transforms; P wave time interval; Q wave time interval; R wave time interval; RR interval normalization; S wave time interval; classification accuracy improvement; dry electrodes; electrocardiogram; feature vector; human identification accuracy; normalized signal; peak-aligned ECG wavelets; unaligned R wave; wavelet coefficients; wavelet transform; Accuracy; Electrocardiography; Electrodes; Feature extraction; Time measurement; Wavelet transforms (ID#: 16-9685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126358&isnumber=7126341
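The key idea in the ECG paper is generating a second normalized signal in which the P, Q, R and S peaks land at fixed positions in every beat. One plausible way to do this is to linearly resample each inter-peak segment to a fixed length; the sketch below illustrates that step only, with names and the approach itself being our assumptions rather than the authors' exact method:

```python
def align_segment(samples, target_len):
    """Linearly resample one inter-peak segment (e.g. Q-to-R) to a fixed
    length so that the peaks fall at the same indices in every beat.
    Illustrative sketch; function and parameter names are ours."""
    n = len(samples)
    if target_len == 1:
        return [samples[0]]
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)   # fractional source index
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

Concatenating the resampled segments yields the peak-aligned beat; the paper then applies the wavelet transform to both the aligned and unaligned signals and concatenates the coefficients into one feature vector.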
Nai-Wei Lo, Chi-Kai Yu and Chao Yang Hsu, “Intelligent Display Auto-Lock Scheme for Mobile Devices,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 48-54. doi: 10.1109/AsiaJCIS.2015.30
Abstract: In recent years, people in modern societies have come to rely heavily on their own intelligent mobile devices, such as smartphones and tablets, to get personal services and improve work efficiency. In consequence, quick and simple authentication mechanisms, along with energy-saving considerations, are generally adopted by these smart handheld devices, such as screen auto-lock schemes. When a smart device activates its screen lock mode to protect user privacy and data security on the device, its screen auto-lock scheme is executed at the same time. The device user can set the length of the time period that controls when the screen lock mode of a smart device is activated. However, a short time period for invoking the screen auto-lock causes inconvenience for device users. How to strike a balance between security and convenience for individual users of smart devices has become an interesting issue. In this paper, an intelligent display (screen) auto-lock scheme is proposed for mobile users. It can dynamically adjust the unlock time period setting of an auto-lock scheme based on knowledge derived from past user behaviors.
Keywords: authorisation; data protection; display devices; human factors; mobile computing; smart phones; authentication mechanisms; data security; energy saving; intelligent display auto-lock scheme; intelligent mobile devices; mobile users; personal services; screen auto-lock schemes; smart handheld devices; tablets; unlock time period; user behaviors; user convenience; user privacy protection; user security; work efficiency improvement; Authentication; IEEE 802.11 Standards; Mathematical model; Smart phones; Time-frequency analysis; Android platform; display auto-lock; smartphone (ID#: 16-9686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153935&isnumber=7153836
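The paper's contribution is adjusting the auto-lock delay from past user behavior. As a rough illustration of the idea only (a simple quantile heuristic over observed idle gaps, not the scheme from the paper; all names and thresholds are our assumptions):

```python
def suggest_lock_timeout(idle_gaps_sec, quantile=0.9, min_t=15, max_t=300):
    """Suggest a screen auto-lock delay (seconds) that covers most of the
    user's observed pauses between interactions, clamped to a safe range.
    Illustrative heuristic, not the paper's algorithm."""
    gaps = sorted(idle_gaps_sec)
    idx = int(quantile * (len(gaps) - 1))   # approximate quantile index
    return max(min_t, min(max_t, gaps[idx]))
```

A delay chosen this way rarely locks mid-use (convenience) while still capping the unattended-screen window (security), which is the trade-off the abstract describes.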
![]() |
Hard Problems: Human Behavior and Security 2015 (Part 1) |
Human behavior creates the most complex of hard problems for the Science of Security community. The research work cited here was presented in 2015.
Y. Yang, N. Vlajic and U. T. Nguyen, “Web Bots that Mimic Human Browsing Behavior on Previously Unvisited Web-Sites: Feasibility Study and Security Implications,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 757-758. doi: 10.1109/CNS.2015.7346921
Abstract: In the past, there have been many attempts at developing accurate models of human-like browsing behavior. However, most of these attempts/models suffer from one of the following drawbacks: they either require that some previous history of actual human browsing on the target web-site be available (which often is not the case); or, they assume that 'think times' and 'page popularities' follow the well-known Poisson and Zipf distributions (an old hypothesis that does not hold well in the modern-day WWW). To our knowledge, our work is the first attempt at developing a model of human-like browsing behavior that requires no prior knowledge or assumption about human behavior on the target site. The model is founded on a more general theory that defines human behavior as an 'interest-driven' process. The preliminary simulation results are very encouraging - web bots built using our model are capable of mimicking real human browsing behavior 1000-fold better compared to bots that deploy a random crawling strategy.
Keywords: Internet; Poisson distribution; Web sites; Web bots; Web-sites; Zipf distribution; human-like browsing behavior; interest-driven process; Computer hacking; Electronic mail; History; Software; Web pages; bot modeling; interest-driven human browsing (ID#: 16-9609)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346921&isnumber=7346791
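The abstract contrasts its interest-driven model with the classic Zipf page-popularity assumption. To make that contrast concrete, here is a sketch of how a naive bot would pick pages under the old Zipf hypothesis, which the paper argues no longer fits the modern web (function names are ours):

```python
import random

def zipf_weights(n_pages, s=1.0):
    """Classic Zipf popularity assumption: the page ranked k gets
    weight 1/k^s, so a few top pages dominate the traffic."""
    return [1.0 / (k ** s) for k in range(1, n_pages + 1)]

def next_page(n_pages, rng=None):
    """Sample the next page a naive bot would visit under the Zipf model."""
    rng = rng or random.Random(42)
    return rng.choices(range(n_pages), weights=zipf_weights(n_pages), k=1)[0]
```

A bot crawling this way (or uniformly at random) ignores page content entirely, which is why the interest-driven model, conditioned on what the simulated user has seen so far, mimics real sessions so much better.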
M. Kilger, “Integrating Human Behavior into the Development of Future Cyberterrorism Scenarios,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 693-700. doi: 10.1109/ARES.2015.105
Abstract: The development of future cyber terrorism scenarios is a key component in building a more comprehensive understanding of cyber threats that are likely to emerge in the near-to mid-term future. While developing concepts of likely new, emerging digital technologies is an important part of this process, this article suggests that understanding the psychological and social forces involved in cyber terrorism is also a key component in the analysis and that the synergy of these two dimensions may produce more accurate and detailed future cyber threat scenarios than either analytical element alone.
Keywords: computer crime; human factors; terrorism; cyber threats; cyberterrorism scenarios; digital technologies; human behavior; psychological force; social force; Computer crime; Computer hacking; Organizations; Predictive models; Psychology; Terrorism; cyberterrorism; motivation; psychological; scenario; social (ID#: 16-9610)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299981&isnumber=7299862
N. Rastogi, “Sentences and Circumplexes: Prediction of Human Behaviour and Human Emotional States in Social Media,” Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, Noida, 2015, pp. 463-465. doi: 10.1109/ICGCIoT.2015.7380508
Abstract: This report explores the usage of circumplexes at the level of the individual: understanding and defining a person on the basis of their emotions and interests, and using circumplexes for the valence of emotions [1]. The circumplex has been redefined for this type of work. The goal is to understand human emotions and to predict them; emotional states and ideas are treated separately throughout the report. The model addresses the security of individuals and the prediction of human behaviour, and considers how it could radically change areas such as mobile applications, artificial intelligence, internet services, video games and other media. By rating people on their emotional state (pleasure or displeasure), a basic understanding of how a person will be affected by or react to certain situations, or of their interest in things, can be predicted more accurately.
Keywords: Internet; behavioural sciences computing; computer games; mobile computing; social networking (online); Internet services; artificial intelligence; emotional state; human behaviour prediction; human emotional states; mobile application; social media; video games; Predictive models; Circumplex; emotional granularity; facial recognition; optical character recognition; social trends; voice recognition (ID#: 16-9611)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380508&isnumber=7380415
O. Banos et al., “Mining Minds: An Innovative Framework for Personalized Health and Wellness Support,” Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2015 9th International Conference on, Istanbul, 2015, pp. 1-8. doi: 10.4108/icst.pervasivehealth.2015.259083
Abstract: The world is witnessing a spectacular shift in the delivery of health and wellness care. The key ingredient of this transformation consists in the use of revolutionary digital technologies to empower people in their self-management as well as to enhance traditional care procedures. While substantial domain-specific contributions have been provided to that end in the recent years, there is a clear lack of platforms that may orchestrate, and intelligently leverage, all the data, information and knowledge generated through these technologies. This work presents Mining Minds, an innovative framework that builds on the core ideas of the digital health and wellness paradigms to enable the provision of personalized healthcare and wellness support. Mining Minds embraces some of the currently most prominent digital technologies, ranging from Big Data and Cloud Computing to Wearables and Internet of Things, and state-of-the-art concepts and methods, such as Context-Awareness, Knowledge Bases or Analytics, among others. This paper aims at thoroughly describing the efficient and rational combination and interoperation of these modern technologies and methods through Mining Minds, while meeting the essential requirements posed by a framework for personalized health and wellness support.
Keywords: Big Data; Internet of Things; cloud computing; data mining; health care; Mining Minds; big data; context-awareness; digital health and wellness paradigms; digital technologies; domain-specific contributions; health and wellness care; knowledge analytics; knowledge bases; personalized health and wellness support; traditional care procedures; wearables; Biomedical monitoring; Data mining; Medical services; Mobile communication; Monitoring; Privacy; Security; digital health; human behavior; quantified-self; user experience (ID#: 16-9612)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7349350&isnumber=7349344
S. S. Yau, A. B. Buduru and V. Nagaraja, “Protecting Critical Cloud Infrastructures with Predictive Capability,” Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 1119-1124. doi: 10.1109/CLOUD.2015.165
Abstract: Emerging trends in cyber system security breaches, including those in critical infrastructures involving cloud systems, such as in applications of military, homeland security, finance, utilities and transportation systems, have shown that attackers have abundant resources, including both human and computing power, to launch attacks. The sophistication and resources used in attacks reflect that the attackers may be supported by large organizations and in some cases by foreign governments. Hence, there is an urgent need to develop intelligent cyber defense approaches to better protect critical cloud infrastructures. In order to have much better protection for critical cloud infrastructures, effective approaches with predictive capability are needed. Much research has been done by applying game theory to generating adversarial models for the predictive defense of critical infrastructures. However, these approaches have serious limitations, some of which are due to the assumptions used in these approaches, such as rationality and Nash equilibrium, which may not be valid for current and emerging cloud infrastructures. Another major limitation of these approaches is that they do not capture probabilistic human behaviors accurately, and hence do not incorporate human behaviors. In order to greatly improve the protection of critical cloud infrastructures, it is necessary to predict potential security breaches on critical cloud infrastructures with accurate system-wide causal relationships and probabilistic human behaviors. In this paper, the challenges and our vision on developing such proactive protection approaches are discussed.
Keywords: cloud computing; critical infrastructures; data protection; inference mechanisms; security of data; Nash equilibrium; adversarial models; critical cloud infrastructures; cyber system security breaches; game theory; intelligent cyber defense approaches; predictive defense; proactive protection approaches; probabilistic human behaviors; probabilistic reasoning; Accuracy; Bayes methods; Game theory; Measurement; Organizations; Probabilistic logic; Security; Critical cloud infrastructures; pro-active protection; security breaches (ID#: 16-9613)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214175&isnumber=7212169
Y. Yang, N. Vlajic and U. T. Nguyen, “Next Generation of Impersonator Bots: Mimicking Human Browsing on Previously Unvisited Sites,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 356-361. doi: 10.1109/CSCloud.2015.93
Abstract: The development of Web bots capable of exhibiting human-like browsing behavior has long been the goal of practitioners on both sides of the security spectrum - malicious hackers as well as security defenders. For malicious hackers such bots are an effective vehicle for bypassing various layers of system/network protection or for obstructing the operation of Intrusion Detection Systems (IDSs). For security defenders, the use of human-like behaving bots is shown to be of great importance in the process of system/network provisioning and testing. In the past, there have been many attempts at developing accurate models of human-like browsing behavior. However, most of these attempts/models suffer from one of the following drawbacks: they either require that some previous history of actual human browsing on the target web-site be available (which often is not the case), or, they assume that 'think times' and 'page popularities' follow the well-known Poisson and Zipf distributions (an old hypothesis that does not hold well in the modern-day WWW). To our knowledge, our work is the first attempt at developing a model of human-like browsing behavior that requires no prior knowledge or assumption about human behavior on the target site. The model is founded on a more general theory that defines human behavior as an 'interest-driven' process. The preliminary simulation results are very encouraging - web bots built using our model are capable of mimicking real human browsing behavior 1000-fold better compared to bots that deploy a random crawling strategy.
Keywords: Internet; Poisson distribution; Web sites; computer crime; invasive software; IDS; Web bots; Web-site; Zipf distribution; human behavior; human browsing behavior; human-like behaving bots; human-like browsing behavior; impersonator bot; intrusion detection system; network protection; next generation; random crawling strategy; security defender; security spectrum-malicious hacker; system protection; unvisited site; Computer hacking; History; Predictive models; Web pages; bot modeling; interest-driven human browsing (ID#: 16-9614)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371507&isnumber=7371418
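The abstract criticizes the classic assumption that 'page popularities' follow a Zipf distribution. As a point of contrast with the authors' interest-driven model, here is a minimal sketch (not from the paper; the page count, exponent, and click volume are invented) of what a bot built on that old assumption would do:

```python
import random
from collections import Counter

def zipf_weights(n_pages, s=1.0):
    # Classic Zipf assumption: popularity of the page at rank r is proportional to 1/r^s
    return [1.0 / (rank ** s) for rank in range((1), n_pages + 1)]

def simulate_session(n_pages, n_clicks, s=1.0, seed=0):
    rng = random.Random(seed)
    weights = zipf_weights(n_pages, s)
    # Each click picks a page independently, weighted by Zipf popularity
    return rng.choices(range(n_pages), weights=weights, k=n_clicks)

visits = Counter(simulate_session(n_pages=50, n_clicks=10_000))
# With s=1, the rank-0 page should be visited roughly twice as often as the rank-1 page
print(visits[0], visits[1])
```

Under this assumption the visit counts fall off steeply with rank, which is exactly the static pattern the paper argues no longer matches real browsing.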
S. Sharma, S. P. Rajeev and P. Devearux, “An Immersive Collaborative Virtual Environment of a University Campus for Performing Virtual Campus Evacuation Drills and Tours for Campus Safety,” Collaboration Technologies and Systems (CTS), 2015 International Conference on, Atlanta, GA, 2015, pp. 84-89. doi: 10.1109/CTS.2015.7210404
Abstract: The use of a collaborative virtual reality environment for training and virtual tours has been increasingly recognized as an alternative to traditional real-life tours of university campuses. Our proposed application shows an immersive collaborative virtual reality environment for performing virtual online campus tours and evacuation drills using Oculus Rift head mounted displays. The immersive collaborative virtual reality environment also offers a unique way of training for emergencies for campus safety. A participant can enter the collaborative virtual reality environment, set up on the cloud, and participate in an evacuation drill or a tour, which leads to considerable cost advantages over large-scale real-life exercises. This paper presents an experimental design approach to gather data on human behavior and emergency response in a university campus environment among a set of players in an immersive virtual reality environment. We present three ways of controlling crowd behavior: by defining rules for computer-simulated agents, by providing controls to the users to navigate in the VR environment as autonomous agents, and by providing controls to the users with a keyboard/joystick along with an immersive VR headset in real time. Our contribution lies in our approach of combining these three methods of behavior in order to perform virtual evacuation drills and virtual tours in a multi-user virtual reality environment for a university campus. Results from this study can be used to measure the effectiveness of current safety, security, and evacuation procedures for campus safety.
Keywords: educational institutions; groupware; helmet mounted displays; multi-agent systems; safety; virtual reality; Oculus Rift head mounted displays; VR environment; autonomous agents; campus safety; computer simulated agents; crowd behavior control; emergency response; experimental design approach; human behavior; immersive VR head set; immersive collaborative virtual reality environment; multiuser virtual reality environment; university campus; virtual campus evacuation drills; virtual campus evacuation tours; virtual online campus tours; Buildings; Computational modeling; Computers; Servers; Solid modeling; Three-dimensional displays; Virtual reality; behavior simulation; collaborative virtual environment; evacuation (ID#: 16-9615)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210404&isnumber=7210375
M. C. Giuroiu and T. Marita, “Gesture Recognition Toolkit Using a Kinect Sensor,” Intelligent Computer Communication and Processing (ICCP), 2015 IEEE International Conference on, Cluj-Napoca, 2015, pp. 317-324. doi: 10.1109/ICCP.2015.7312678
Abstract: Computational modeling of human behavior has become a very important field of computer vision. Gesture recognition allows people to interact with machines in a natural way without the use of dedicated I/O devices. This paper presents a simple system that can recognize dynamic and static gestures using the depth map and the higher-level output (skeleton and facial features) provided by a Kinect sensor. Two approaches are chosen for the recognition task: the Dynamic Time Warping algorithm is used to recognize dynamic gestures, while a Bayesian classifier is used for static gestures/postures. In contrast with some specialized methods presented in the literature, the current approach is very generic and can be used with minimal modification for recognizing a large variety of gestures. As a result, it can be deployed in a multitude of fields, from security (monitoring rooms and sending alarm signals) and medicine (helping people with physical disabilities) to education and so on. The test results show that the system is accurate, easy to use and highly customizable.
Keywords: Bayes methods; computer vision; gesture recognition; human computer interaction image classification; Bayesian classifier; Kinect sensor; dynamic time warping algorithm; gesture recognition toolkit; human behavior; human computer interaction; Face; Gesture recognition; Heuristic algorithms; Joints; Thumb; Yttrium; Kinect; Naïve Bayes classifier; depth map; dynamic time warping; gesture recognition (ID#: 16-9616)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312678&isnumber=7312586
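The dynamic-gesture recognizer above relies on Dynamic Time Warping. A generic textbook DTW sketch (not the authors' code; the 1-D traces below are invented for illustration, standing in for a joint coordinate over time):

```python
def dtw_distance(a, b):
    # Dynamic Time Warping: cost of the cheapest monotonic alignment of two sequences
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a sample of b
                                 D[i][j - 1],      # skip a sample of a
                                 D[i - 1][j - 1])  # match the two samples
    return D[n][m]

# A time-stretched copy of a gesture trace aligns far more cheaply than a different gesture
template  = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
stretched = [0.0, 0.5, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 0.0]
other     = [3.0, 3.0, 3.0, 0.0, 0.0, 0.0, 3.0]
print(dtw_distance(template, stretched), dtw_distance(template, other))
```

This tolerance to variable gesture speed is why DTW is a common baseline for dynamic gestures, where a fixed per-frame comparison would fail.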
J. S. More and C. Lingam, “Reality Mining Based on Social Network Analysis,” Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, Mumbai, 2015, pp. 1-6. doi: 10.1109/ICCICT.2015.7045752
Abstract: Data Mining is the extraction of hidden predictive information from large databases. The process of discovering interesting, useful, nontrivial patterns from large spatial datasets is called spatial data mining; when time is associated with it, it becomes spatio-temporal data mining. The study of spatio-temporal data mining is of great concern for the study of mobile phone sensed data. Reality Mining is defined as the study of human social behavior based on mobile phone sensed data. It is based on the data collected by sensors in mobile phones, security cameras, RFID readers, etc., all of which allow for the measurement of human physical and social activity. In this paper the Netviz, Gephi and Weka tools have been used to convert and analyze Facebook data. Further, analysis of a reality mining dataset is also presented.
Keywords: data mining; feature extraction; mobile handsets; sensor fusion; social networking (online); spatiotemporal phenomena; Facebook; Gephi tools; Netviz tools; RFID readers; Weka tools; hidden predictive information extraction; human physical activity; human social activity; human social behavior; mobile phone sensed data; reality mining; security cameras; social network analysis; spatiotemporal data mining; Computers; Data mining; Educational institutions; Mobile handsets; Spatial databases; Data Mining; Reality Mining; Social Network Analysis (ID#: 16-9617)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045752&isnumber=7045627
A. Ross, “The Human Threat,” Intelligent Rail Infrastructure, Birmingham, 2015, pp. 1-26. doi: 10.1049/ic.2015.0056
Abstract: This presentation focuses on the human behaviour of individuals within an organisation as well as the organisational factors that drive security-related outcomes.
Keywords: organisational aspects; security; QinetiQ; human behaviour; human threat; organisational factors; security-related outcomes (ID#: 16-9618)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7329640&isnumber=7202761
C. D. Frowd et al., “Facial Stereotypes and Perceived Mental Illness,” 2015 Sixth International Conference on Emerging Security Technologies (EST), Braunschweig, 2015, pp. 62-68. doi: 10.1109/EST.2015.25
Abstract: It is well established that we carry stereotypes that impact on human perception and behaviour (e.g. G.W. Allport, “The nature of prejudice”. Reading, MA: Addison-Wesley, 1954). Here, we investigate the possibility that we hold a stereotype for a face indicating that its owner may have a mental illness. A three-stage face-perception experiment suggested the presence of such a stereotype. Participants first rated 200 synthetic male faces from the EvoFIT facial-composite system for perceived mental illness (PMI). These faces were used to create a computer-based rating scale that was used by a second sample of participants to make a set of faces appear mentally ill. There was evidence to suggest that the faces that participants identified using the PMI scale differed along this dimension (although not entirely as expected). In the final stage of the study, another set of synthetic faces was created by artificially increasing and decreasing levels along the scale. Participants were asked to rate these items for PMI and for six criminal types. It was found that participants assigned higher PMI ratings (cf. veridical) for items with inflated PMI (although there was no reliable difference in ratings between veridical faces and faces with decreased PMI). Implications of the findings are discussed.
Keywords: image representation; medical disorders; psychology; EvoFIT facial-composite system; computer-based rating scale; criminal types; facial stereotypes; human behaviour; human perception; inflated PMI scale; perceived mental illness; synthetic faces; synthetic male faces; three-stage face-perception; veridical faces; Face; Law enforcement; Psychology; Security; Shape; Sociology; Statistics; stereotype; victimisation; serious crime; EvoFIT (ID#: 16-9619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7429272&isnumber=7429252
X. Yu, T. Pei, K. Gai and L. Guo, “Analysis on Urban Collective Call Behavior to Earthquake,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1302-1307. doi: 10.1109/HPCC-CSS-ICESS.2015.71
Abstract: Despite recent advances in uncovering the quantitative features of human activities in routine life, human call behavior during earthquakes is still less clear. Using three days of mobile phone records produced by users in the northwest of China, we systematically analyze the characteristics of mobile phone call patterns in response to an earthquake using ratio analysis and the degree distribution method. We find that, as a whole, the earthquake brings about significant growth in the indices of data volume, phone volume, duration, distant call volume, local call volume, etc., and that call duration increases more significantly than the number of calls. From the temporal perspective, we discover that in the first 2 hours after the earthquake took place, people tended to make many more local calls than distant calls. However, unlike the local calls, the large volume of distant calls lasted a whole day, except for 3 hours in the afternoon. More interestingly, although the day the earthquake happened saw the largest data volume and phone volume, only those who contacted fewer than 10 phone numbers dominated that volume. This demonstrates the difference in pattern between business calls and private calls. An in-depth understanding of human behavior in emergencies helps us understand many complex socio-economic phenomena and finds applications in public opinion monitoring, disease control, transportation system design, call center services, and information recommendation.
Keywords: earthquakes; mobile handsets; socio-economic effects; calling center services; complex socioeconomic phenomena; degree distribution method; disease control; earthquake; information recommendation; mobile phone characteristics; pattern difference; public opinion monitoring; ratio analysis; transportation system; urban collective call behavior analysis; Cities and towns; Earthquakes; Embedded systems; Indexes; Mobile communication; Mobile handsets; Probability distribution; Emergency; calling behavior; degree distribution; ratio (ID#: 16-9620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336347&isnumber=7336120
Y. Jung and Y. Yoon, “Behavior Tracking Model in Dynamic Situation Using the Risk Ratio EM,” Information Networking (ICOIN), 2015 International Conference on, Cambodia, 2015, pp. 444-448. doi: 10.1109/ICOIN.2015.7057942
Abstract: Closed Circuit Television (CCTV) systems have become popular in daily life, in settings such as traffic, airports, streets and public places. The common goal of a CCTV system is the prevention of crime and disorder by observing objects. In the future, smart CCTV cameras combined with mobile phones will be used to protect humans from crime and dangerous situations. Intelligent CCTV systems in public places will monitor human behavior in real time and transfer image data to a control tower for security purposes. In this paper, we propose an abnormal behavior tracking model for prediction of abnormal situations using the Expectation Maximization (EM) algorithm combined with the Viterbi algorithm. The tracking model will detect objects in CCTV images in dynamic environments for the prediction of dangerous situations. This tracking system has five main steps: (1) detection of objects and their environment; (2) feature extraction from objects and situations, such as human body posture, weather, and time; (3) location information, such as object trajectory and area safety level; (4) knowledge update and decision making; and (5) prediction of abnormal situations and maximized risk rates.
Keywords: behavioural sciences computing; closed circuit television; expectation-maximisation algorithm; feature extraction; object detection; object recognition; object tracking; CCTV system; Viterbi algorithm; abnormal behavioral tracking model; abnormal situation prediction; closed circuit television; decision making; expectation maximization algorithm; knowledge update; location information; risk rate maximisation; risk ratio EM algorithm; Computational modeling; Decision making; Event detection; Meteorology; Safety; Trajectory; Videos; CCTV; Expectation Maximization (EM); Tracking Abnormal behavior (ID#: 16-9621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057942&isnumber=7057846
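The paper above couples EM with the Viterbi algorithm; the Viterbi half can be sketched generically as follows. The state names, observations, and all probabilities below are invented for illustration and are not taken from the paper:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Most likely hidden-state sequence for an observation sequence (small example, no log-space)
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for s at time t
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy model: does the observed motion look 'normal' or 'abnormal'?
states = ("normal", "abnormal")
start_p = {"normal": 0.9, "abnormal": 0.1}
trans_p = {"normal":   {"normal": 0.8, "abnormal": 0.2},
           "abnormal": {"normal": 0.3, "abnormal": 0.7}}
emit_p = {"normal":   {"walk": 0.7, "run": 0.2, "fall": 0.1},
          "abnormal": {"walk": 0.1, "run": 0.3, "fall": 0.6}}
print(viterbi(["walk", "walk", "fall", "fall"], states, start_p, trans_p, emit_p))
```

In the paper's setting the emission probabilities would come from the EM-estimated model rather than being fixed by hand as here.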
F. H. Khan, M. E. Ali and H. Dev, “A Hierarchical Approach for Identifying User Activity Patterns from Mobile Phone Call Detail Records,” Networking Systems and Security (NSysS), 2015 International Conference on, Dhaka, 2015, pp. 1-6. doi: 10.1109/NSysS.2015.7043535
Abstract: With the increasing use of mobile devices, it is now possible to collect data about the day-to-day activities of a user's personal life. Call Detail Records (CDRs) are a dataset available at large scale, as they are constantly collected by mobile operators, mostly for billing purposes. By examining this data it is possible to analyze the activities of people in urban areas and discover the behavioral patterns of their daily lives. These datasets can be used for many applications that vary from urban and transportation planning to predictive analytics of human behavior. In our research work, we have proposed a hierarchical analytical model in which a CDR dataset is used to find facts about the daily activities of urban users in multiple layers. In our model, only the raw CDR data are used as input in the initial layer, and the outputs from each layer are used as new input, combined with the original CDR data, in the next layer to find more detailed facts, e.g., traffic density in different areas on working days and holidays. So, the output in each layer depends on the results of the previous layers. This model utilized one month of CDR data collected from Dhaka city, one of the most densely populated cities in the world. The main focus of this research is to explore the usability of these types of datasets for innovative applications, such as urban planning and traffic monitoring and prediction, in a fashion more appropriate for densely populated areas of developing countries.
Keywords: mobile handsets; telecommunication network planning; Dhaka city; mobile devices; mobile operator; mobile phone call detail records; traffic monitoring; transportation planning; urban planning; Analytical models; Cities and towns; Data models; Employment Mobile handsets; Poles and towers; Transportation (ID#: 16-9622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043535&isnumber=7042935
S. M. Ho, J. T. Hancock, C. Booth, X. Liu, S. S. Timmarajus and M. Burmester, “Liar, Liar, IM on Fire: Deceptive Language-Action Cues in Spontaneous Online Communication,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 157-159. doi: 10.1109/ISI.2015.7165960
Abstract: With an increasing number of online users, the potential danger of online deception grows accordingly - as does the importance of better understanding human behavior online to mitigate these risks. One critical element in addressing such online threats is identifying intentional deception in spontaneous online communication. For this study, we designed an interactive online game that creates player scenarios to encourage deception. Data was collected and analyzed in October 2014 to identify certain deceptive cues. Players' interactive dialogue was analyzed using linear regression analysis. The results reveal that certain language features are highly significant predictors of deception in synchronous, spontaneous online communication.
Keywords: Internet; computer games; computer mediated communication; interactive systems; regression analysis; CMC; computer mediated communication; intentional deception identification; interactive online game; language-action cue; linear regression analysis; online communication; Computer mediated communication; Computer science; Detectors; Games; Linear regression; Media; Pragmatics; computer-mediated communication; interpersonal deception theory; language-action features; regression analysis
(ID#: 16-9623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165960&isnumber=7165923
I. E. van Vuuren, E. Kritzinger and C. Mueller, “Identifying Gaps in IT Retail Information Security Policy Implementation Processes,” 2015 Second International Conference on Information Security and Cyber Forensics (InfoSec), Cape Town, South Africa, 2015, pp. 126-133. doi: 10.1109/InfoSec.2015.7435517
Abstract: With a considerable amount of support in literature, there is no doubt that the human factor is a major weakness in preventing Information Security (IS) breaches. The retail industry is vulnerable to human inflicted breaches due to the fact that hackers rely on their victims' lack of security awareness, knowledge and understanding, security behavior and the organization's inadequate security measures for protecting itself and its clients. The true level of security in technology and processes relies on the people involved in the use and implementation thereof [1]. Therefore, the implementation of IS requires three elements namely: human factors, organizational aspects and technological controls [2]. All three of these elements have the common feature of human intervention and therefore security gaps are inevitable. Each element also functions as both security control and security vulnerability. The paper addresses these elements and identifies the human aspect of each through current and extant literature which spawns new human-security elements. The purpose of this research is to provide evidence that the IT sector of the South African retail industry is vulnerable to the human factor as a result of the disregard for human-security elements. The research points out that the IT sector of the South African retail industry is lacking trust and does not pay adequate attention to security awareness and awareness regarding security accountability. Furthermore, the IT sector of the South African retail industry is lacking: 1) IS policies, 2) process and procedure documentation for creating visibility, and 3) transparency necessary to promote trust. These findings provide support that the identified gaps, either directly or indirectly, relate to trust, and therefore, might be major contributing factors to the vast number of breaches experienced in the South African retail industry. 
These findings may also provide valuable insight into combating the human factor of IS within the IT sector, irrespective of industry, for organizations which choose to follow an IS model built on the foundation of trust.
Keywords: Collaboration; Companies; Computer hacking; Human factors; Industries; ability; acceptance; accountability; benevolence; collaboration; communication; human factor; integrity; knowledge; management workflows; organizational aspects; policies; procedures; retail industry; security awareness; social engineering; technological controls; training; transparency; trust; trust factors; understanding; visibility (ID#: 16-9624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7435517&isnumber=7435496
D. Lee, C. Liu and J. K. Hedrick, “Interacting Multiple Model-Based Human Motion Prediction for Motion Planning of Companion Robots,” 2015 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), West Lafayette, IN, USA, 2015, pp. 1-7. doi: 10.1109/SSRR.2015.7443013
Abstract: Motion planning of human-companion robots is a challenging problem and its solution has numerous applications. This paper proposes an autonomous motion planning framework for human-companion robots to accompany humans in a socially desirable manner, which takes into account the safety and comfort requirements. An Interacting Multiple Model-Unscented Kalman Filter (IMM-UKF) estimation and prediction approach is developed to estimate human motion states from sensor data and predict human position and speed for a finite horizon. Based on the predicted human states, the robot motion planning is formulated as a model predictive control (MPC) problem. Simulations have demonstrated the superior performance of the IMM-UKF approach and the effectiveness of the MPC planner in facilitating the socially desirable companion behavior.
Keywords: motion estimation; path planning; predictive control; rescue robots; robot vision; IMM-UKF estimation; MPC problem; autonomous motion planning framework; human motion state estimation; human position prediction; human speed prediction; human-companion robots; interacting multiple model-unscented Kalman filter estimation; model predictive control; model-based human motion prediction; predicted human states; prediction approach; robot motion planning; Computational modeling; Hidden Markov models; Mathematical model; Planning; Predictive models; Robot sensing systems (ID#: 16-9625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7443013&isnumber=7442936
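The estimator in the paper above is an Interacting Multiple Model Unscented Kalman Filter; a full IMM-UKF is well beyond a short sketch, but the underlying predict/update cycle it shares with every Kalman-style filter can be illustrated with a scalar filter smoothing noisy position readings. All noise values and measurements below are illustrative assumptions, not the paper's parameters:

```python
def kalman_1d(z_measurements, q=0.01, r=0.5):
    # Minimal 1-D Kalman filter: smooth noisy position readings of a (nearly static) target
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in z_measurements:
        p += q               # predict: variance grows by process noise q
        k = p / (p + r)      # update: Kalman gain balances prediction vs. measurement
        x += k * (z - x)     # pull the estimate toward the measurement
        p *= (1 - k)         # updated variance shrinks
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]   # readings around a true position of ~1.0
est = kalman_1d(noisy)
print(est[-1])  # converges toward the true position
```

The IMM extension runs several such filters in parallel, one per motion model (e.g. walking straight vs. turning), and mixes their estimates by model likelihood.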
A. Brown and M. Abramson, “Twitter Fingerprints as Active Authenticators,” 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, 2015, pp. 58-63. doi: 10.1109/ICDMW.2015.223
Abstract: Leveraging data drawn from the Web, or rather web analytics, has been used to gain business intelligence, increase sales, and optimize websites. Yet beyond the domain of e-commerce that web analytics is typically associated with, authentication based upon user interactions with the Web is also obtainable. Authentication can be achieved because, just as individuals display unique mannerisms in everyday life, users interact with technology in unique manners. Leveraging these unique patterns, or “cognitive fingerprints”, for security purposes can be referred to as active authentication. Active authentication stands to add extra security without added burden, as users are allowed the capability to simply interact with technology in their natural manner. Past research on active authentication has looked at areas such as mouse movement patterns, screen touch patterns on smartphones, and web browsing behavior. Our focus here is web browsing behavior. Specifically, we seek to extend past active authentication research done on Reddit. In this research, we examine the ability of Twitter-specific features to serve as authenticators by examining the behavior of 50 random Twitter users. Leveraging data mining and machine learning techniques, we conduct three levels of analysis: (1) we survey the ability of Twitter-specific behavioral features from a broad perspective to determine the feasibility of Twitter fingerprints as a form of active authentication, (2) we compare aggregated and non-aggregated datasets to determine whether it is better to aggregate user behavior or look at posts individually, and (3) we examine whether certain features are more important for discrimination than others. The first level of analysis suggests that posting behavior on Twitter follows the power law of human activity and that users can be uniquely identified with a fairly decent level of accuracy. Second, we find that aggregating the data significantly improves F-scores. Lastly, our examination suggests that no single feature is more discriminative than the others; rather, what is discriminative for one user may not be for another user.
Keywords: authorisation; data analysis; data mining; learning (artificial intelligence); social networking (online); F-score; Reddit; Twitter fingerprints; Twitter-specific behavioral features; Web analytics; Web browsing behavior; active authentication; cognitive fingerprints; data mining technique; machine learning technique; user behavior aggregation; user interaction; Conferences; Data mining; web analytics (ID#: 16-9626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7395653&isnumber=7395635
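The paper's second finding is that aggregating a user's posts into one behavioral profile improves authentication accuracy. A toy sketch of that idea (the users and per-post features below are hypothetical, and the paper uses standard ML classifiers rather than this simple nearest-centroid rule):

```python
from statistics import mean

# Hypothetical per-post features: (hour of day, hashtag count, post length)
posts = {
    "user_a": [(9, 0, 80), (10, 1, 75), (9, 0, 90)],
    "user_b": [(23, 3, 30), (22, 4, 25), (23, 3, 35)],
}

def profile(feature_rows):
    # Aggregate a user's posts into a single mean-feature "fingerprint"
    return tuple(mean(col) for col in zip(*feature_rows))

def closest_user(post, profiles):
    # Attribute a new post to the user with the nearest fingerprint (Euclidean distance)
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    return min(profiles, key=lambda u: dist(post, profiles[u]))

profiles = {u: profile(rows) for u, rows in posts.items()}
print(closest_user((9, 0, 85), profiles))   # resembles user_a's posting habits
```

Aggregation helps for the reason the sketch suggests: a single post is noisy, while a mean profile averages that noise away before comparison.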
E. Sherif, S. Furnell and N. Clarke, “Awareness, Behaviour and Culture: The ABC in Cultivating Security Compliance,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 90-94. doi: 10.1109/ICITST.2015.7412064
Abstract: A significant volume of security breaches occur as a result of human aspects, and it is consequently important for these to be given attention alongside technical aspects. Researchers have argued that security culture stimulates appropriate employee behavior towards adherence. Therefore, work within organizations should be guided by a culture of security, with the purpose of protecting the organization's assets and steering individuals' behaviors towards better security behavior. Although security-aware individuals can play an important role in protecting organizational assets, the way in which individuals behave with the security controls that are implemented is crucial in protecting such assets. Should the behavior of individuals not be security compliant, it could have an impact on an organization's productivity and the confidentiality of its data. In this paper, key literature relating to security culture in the period 1999-2014 is reviewed. The objective is to examine the role of security awareness and behavior, and how they can play an important role in changing an existing culture to a security culture. Some relevant security culture tools are introduced, and an overall framework for understanding how security awareness and behavior can help change an existing culture into a security culture has been developed.
Keywords: cultural aspects; security of data; ABC; confidentiality; organization productivity; organizational assets; security aware individuals; security awareness; security breaches; security compliance; security compliant; security controls; security culture tools; Computers; Current measurement; Education; Information security; Internet; Organizations; Security awareness; organisational culture; security behaviour; security culture (ID#: 16-9627)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412064&isnumber=7412034
Pratibha and M. Ashraf, “Dominant Behavior Identification of Load Data,” Computing, Communication & Automation (ICCCA), 2015 International Conference on, Noida, 2015, pp. 142-145. doi: 10.1109/CCAA.2015.7148361
Abstract: Web applications are meant to be viewed by human users, and the quality of a web application is our primary concern. An application is said to be a quality application when users do not face any problems while using it. Performance testing is performed to gain knowledge about performance issues (such as response time); it essentially shows the behavior of the application under load. Load testing is a kind of performance testing performed to learn how the application responds to load. It is very important to identify load testing problems to make sure that load testing results are correct. This paper presents a method which mines the execution logs of a web application to show the dominant behavior of the load, i.e., the expected behavior of the load on the application. The method determines dominant behavior on the basis of IP address; from the IP address one can easily find the location of the user, which strengthens the security of the application.
Keywords: Internet; program testing; IP address; Web applications; load data; load testing problems; performance testing; Automation; Conferences; IP networks; Load modeling; Software; Stress; Testing; dominant behavior; logs (ID#: 16-9628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148361&isnumber=7148334
B. Ryu, N. Ranasinghe, W. M. Shen, K. Turck and M. Muccio, “BioAIM: Bio-inspired Autonomous Infrastructure Monitoring,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 780-785. doi: 10.1109/MILCOM.2015.7357539
Abstract: The Bio-inspired Autonomous Infrastructure Monitoring (BioAIM) system detects anomalous behavior during the deployment and maintenance of a wireless communication network formed autonomously by unmanned airborne nodes. A node may experience anomalous or unexpected behavior in the presence of hardware/software faults/failures or external influence (e.g. natural weather phenomena, enemy threats). This system autonomously detects, reasons about (e.g. differentiates an anomaly from natural interference), and alerts a human operator to anomalies at runtime via a communication network formed by the Bio-inspired Artificial Intelligence Reconfiguration (BioAIR) system. In particular, BioAIM learns and builds a prediction model which describes how data from relevant sensors should change when a behavior executes under normal circumstances. Surprises occur when there are discrepancies between what is predicted and what is observed. BioAIM identifies a dynamic set of states from the prediction model and learns a structured model similar to a Markov chain in order to quantify the magnitude of a surprise, or divergence from the norm, using a special similarity metric. While in operation, BioAIM monitors the sensor data by testing the applicable models for each valid behavior at regular time intervals, and informs the operator when a similarity metric deviates beyond the acceptable threshold.
Keywords: Markov processes; autonomous aerial vehicles; fault diagnosis; radio networks; BioAIM; Markov Chain; anomalous behavior; bio-inspired artificial intelligence reconfiguration system; bio-inspired autonomous infrastructure monitoring; natural interference; natural weather phenomena; unmanned airborne nodes; wireless communication network; Biological system modeling; Biosensors; Maintenance engineering; Measurement; Monitoring; Predictive models; Adaptive systems; Cognition; Command and control systems; Communication networks; Cyber Security; Fault detection; Fault tolerant systems; Intelligent control; Unmanned Aerial Vehicles (ID#: 16-9629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357539&isnumber=7357245
G. Maus, “Decoding, Hacking, and Optimizing Societies: Exploring Potential Applications of Human Data Analytics in Sociological Engineering, Both Internally and as Offensive Weapons,” Science and Information Conference (SAI), 2015, London, 2015, pp. 538-547. doi: 10.1109/SAI.2015.7237195
Abstract: Today's unprecedented wealth of data on human activities, augmented by proven, reliable methods of algorithmically extrapolating personal information from limited data and by the means to store and analyze it, opens up new vistas for in-depth understanding of individuals, as well as the potential generation of predictive models for the dynamics of human functions on individual, group, and societal scales. This has already proven to have applications in successfully forecasting behavior, and these techniques are only likely to improve. To the extent that the science can move beyond a correlative understanding of the data to a causal understanding of the factors affecting behavior, it will allow new means of (perhaps covertly and deniably) influencing behavior, possibly through long causal chains that could conceal the influence of the manipulator. This offers an immense variety of applications, but this paper will particularly consider them as tools in governmental control over citizens and as a new form of weaponry.
Keywords: Big Data; computer crime; data analysis; social sciences computing; forecasting behavior; governmental control; human data analytics; human function dynamics; offensive weapons; personal information extrapolation; predictive models; society decoding; society hacking; society optimization; sociological engineering; Accuracy; Facebook; Forecasting; Government; Media; Prediction algorithms; Predictive models; big data; cognitive security; computational sociology; machine learning; privacy; sentiment analysis; surveillance (ID#: 16-9630)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237195&isnumber=7237120
J. J. Mulcahy and S. Huang, “An Autonomic Approach to Extend the Business Value of a Legacy Order Fulfillment System,” Systems Conference (SysCon), 2015 9th Annual IEEE International, Vancouver, BC, 2015, pp. 595-600. doi: 10.1109/SYSCON.2015.7116816
Abstract: In the modern retailing industry, many enterprise resource planning (ERP) systems are considered legacy software systems that have become too expensive to replace and too costly to re-engineer. Countering the need to maintain and extend the business value of these systems is the need to do so in the simplest, cheapest, and least risky manner available. There are a number of approaches used by software engineers to mitigate the negative impact of evolving legacy systems, including leveraging service-oriented architecture to automate manual tasks previously performed by humans. A relatively recent approach in software engineering focuses upon implementing self-managing attributes, or “autonomic” behavior, in software applications and systems of applications in order to reduce or eliminate the need for human monitoring and intervention. Entire systems can be autonomic, or they can be hybrid systems that implement one or more autonomic components to communicate with external systems. In this paper, we describe a commercial development project in which a legacy multi-channel commerce enterprise resource planning system was extended with service-oriented architecture and an autonomic control loop design to communicate with an external third-party security screening provider. The goal was to reduce the cost of the human labor necessary to screen an ever-increasing volume of orders and to reduce the potential for human error in the screening process. The solution automated what was previously an inefficient, incomplete, and potentially error-prone manual process by inserting a new autonomic software component into the existing order fulfillment workflow.
Keywords: enterprise resource planning; service-oriented architecture; software maintenance; ERP systems; autonomic approach; autonomic behavior; autonomic control loop design; autonomic software component; business value; error-prone manual process; human error; human monitoring; hybrid systems; legacy multichannel commerce enterprise resource planning system; legacy order fulfillment system; legacy software systems; order fulfillment workflow; retailing industry; software applications; software engineering; third party security screening provider; Business; Complexity theory; Databases; Manuals; Monitoring; Software systems; autonomic computing; self-adaptive systems; self-managing systems; software evolution; systems interoperability; systems of systems (ID#: 16-9631)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116816&isnumber=7116715
J. Morris-King and H. Cam, “Ecology-Inspired Cyber Risk Model for Propagation of Vulnerability Exploitation in Tactical Edge,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 336-341. doi: 10.1109/MILCOM.2015.7357465
Abstract: A multitude of cyber vulnerabilities on the tactical edge arise from the mix of network infrastructure, physical hardware and software, and individual user-behavior. Because of the inherent complexity of socio-technical systems, most models of tactical cyber assurance omit the non-physical influence propagation between mobile systems and users. This omission leads to a question: how can the flow of influence across a network act as a proxy for assessing the propagation of risk? Our contribution toward solving this problem is to introduce a dynamic, adaptive ecosystem-inspired model of vulnerability exploitation and risk flow over a tactical network. This model is based on ecological characteristics of the tactical edge, where the heterogeneous characteristics and behaviors of human-machine systems enhance or degrade mission risk in the tactical environment. Our approach provides an in-depth analysis of vulnerability exploitation propagation and risk flow using a multi-agent epidemic model which incorporates user-behavior and mobility as components of the system. This user-behavior component is expressed as a time-varying parameter driving a multi-agent system. We validate this model by conducting a synthetic battlefield simulation, where performance results depend mainly on the level of functionality of the assets and services. The composite risk score is shown to be proportional to infection rates from the Standard Epidemic Model.
Keywords: human factors; military communication; mobile ad hoc networks; multi-agent systems; telecommunication computing; telecommunication network reliability; time-varying systems; dynamic adaptive ecosystem-inspired model; ecology-inspired cyber risk model; human-machine systems; mobile systems; mobile users; multiagent epidemic model; nonphysical influence propagation; risk flow; risk propagation; socio-technical system complexity; synthetic battlefield simulation; tactical cyber assurance; tactical edge; tactical network; time-varying parameter; user-behavior; vulnerability exploitation propagation; Biological system modeling; Computational modeling; Computer security; Ecosystems; Risk management; Timing; Unified modeling language; Agent-based simulation; Ecological modeling; Epidemic system; Risk propagation; Tactical edge network (ID#: 16-9632)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357465&isnumber=7357245
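The abstract above reports that the composite risk score is proportional to infection rates from the Standard Epidemic Model. A minimal discrete-time SIR-style agent simulation illustrates that idea; the infection rate `beta`, recovery rate `gamma`, and population size are illustrative assumptions, not parameters from the paper, and the full model additionally incorporates user behavior and mobility.

```python
import random

def simulate_sir(n_agents=200, beta=0.3, gamma=0.1, steps=100, seed=42):
    """Discrete-time SIR epidemic over a fully mixed agent population.

    Returns the fraction of agents ever infected, a simple proxy for
    the kind of propagated-risk measure the paper describes.
    """
    rng = random.Random(seed)
    # 0 = susceptible, 1 = infected, 2 = recovered
    state = [0] * n_agents
    state[0] = 1  # seed one compromised node
    for _ in range(steps):
        infected = sum(1 for s in state if s == 1)
        p_infect = beta * infected / n_agents  # per-step infection pressure
        for i, s in enumerate(state):
            if s == 0 and rng.random() < p_infect:
                state[i] = 1
            elif s == 1 and rng.random() < gamma:
                state[i] = 2
    ever_infected = sum(1 for s in state if s != 0) / n_agents
    return ever_infected

risk = simulate_sir()
```

Raising `beta` (e.g., to model degraded user behavior) increases the fraction of nodes ever compromised, which is the monotone relationship the risk score exploits.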
P. Aruna and R. Kanchana, “Face Image CAPTCHA Generation Using Particle Swarm Optimization Approach,” Engineering and Technology (ICETECH), 2015 IEEE International Conference on, Coimbatore, 2015, pp. 1-5. doi: 10.1109/ICETECH.2015.7275016
Abstract: CAPTCHA is a software mechanism introduced to differentiate humans from robots: it generates a code that can be identified only by humans, not by machines. In the real world, the massive increase in the usage of smart phones, tablets, and other devices with touch-screen functionality poses many online security threats. The traditional CAPTCHA requires keyboard input and is language-dependent, which is not efficient on smart phone devices. A face CAPTCHA instead generates a challenge from a combination of noised real face images and fake images, which humans can tell apart but machines cannot. In existing work, a genetic algorithm is used to select the optimized face images from which a better CAPTCHA can be created. However, this approach suffers from a local convergence problem, in that it can only select the best images within a local region. To overcome this problem, this work proposes a particle swarm optimization method that can generate a global solution. Particle Swarm Optimization (PSO) is a popular bionic algorithm for optimization problems based on the social behavior associated with bird flocking. Experimental tests showed that the proposed methodology improves accuracy and generates a more optimized solution than existing methodologies.
Keywords: face recognition; genetic algorithms; particle swarm optimisation; security of data; PSO; bionic algorithm; face image captcha generation; fake images; genetic algorithm; local convergence problem; particle swarm optimization approach; social behavior; Authentication; CAPTCHAs; Distortion; Face; Feature extraction; Particle swarm optimization; CAPTCHA; Distorted Image; Face Images; Particle Swarm Optimization (ID#: 16-9633)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275016&isnumber=7274993
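The PSO procedure the abstract relies on can be sketched in a few lines. This is a generic minimizer, not the paper's method: the sphere function below is a toy stand-in for the paper's image-fitness objective, and the swarm parameters (`w`, `c1`, `c2`) are conventional defaults, not values from the paper.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=1,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization of f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the paper's image-fitness function: a sphere objective.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x))
```

The social term pulling every particle toward the swarm-wide best is what gives PSO the global character the paper contrasts with the genetic algorithm's local convergence.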
Y. Zhou and D. Evans, “Understanding and Monitoring Embedded Web Scripts,” Security and Privacy (SP), 2015 IEEE Symposium on, San Jose, CA, 2015, pp. 850-865. doi: 10.1109/SP.2015.57
Abstract: Modern web applications make frequent use of third-party scripts, often in ways that allow scripts loaded from external servers to make unrestricted changes to the embedding page and access critical resources including private user information. This paper introduces tools to assist site administrators in understanding, monitoring, and restricting the behavior of third-party scripts embedded in their site. We developed Script Inspector, a modified browser that can intercept, record, and check third-party script accesses to critical resources against security policies, along with a Visualizer tool that allows users to conveniently view recorded script behaviors and candidate policies and a Policy Generator tool that aids script providers and site administrators in writing policies. Site administrators can manually refine these policies with minimal effort to produce policies that effectively and robustly limit the behavior of embedded scripts. Policy Generator is able to generate effective policies for all scripts embedded on 72 out of the 100 test sites with minor human assistance. In this paper, we present the designs of our tools, report on what we've learned about script behaviors using them, and evaluate the value of our approach for website administrators.
Keywords: Internet; data privacy; online front-ends; security of data; Policy Generator; Script Inspector; Visualizer tool; Web application; Web browser; Web script; critical resource access; private user information; security policy; third-party script; Advertising; Browsers; Monitoring; Privacy; Robustness; Security; Visualization; Anomaly Detection; Security and Privacy Policy; Web security and Privacy (ID#: 16-9634)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163064&isnumber=7163005
Y. Sterchi and A. Schwaninger, “A First Simulation on Optimizing EDS for Cabin Baggage Screening Regarding Throughput,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 55-60. doi: 10.1109/CCST.2015.7389657
Abstract: Airport security screening is vital for secure air transportation. Screening of cabin baggage heavily relies on human operators reviewing X-ray images. Explosive detection systems (EDS) developed for cabin baggage screening can be a very valuable addition security-wise. Depending on the EDS machine and settings, false alarm rates increase, which could reduce throughput. A discrete event simulation was used to investigate how different machine settings of EDS, different groups of X-ray screeners, and different durations of alarm resolution with explosives trace detection (ETD) influence throughput of a specific cabin baggage screening process. For the modelling of screening behavior in the context of EDS and for the estimation of model parameters, data was borrowed from a human-machine interaction experiment and a work analysis. In a second step, certain adaptations were tested for their potential to reduce the impact of EDS on throughput. The results imply that moderate increases in the false alarm rate by EDS can be buffered by employing more experienced and trained X-ray screeners. Larger increases of the false alarm rate require a fast alarm resolution and additional resources for the manual search task.
Keywords: X-ray imaging; airports; discrete event simulation; explosive detection; national security; parameter estimation; EDS optimization; ETD; X-ray images; X-ray screeners; airport security screening; alarm resolution durations; cabin baggage screening process; explosive detection systems; explosives trace detection; false alarm rates; human-machine interaction; model parameter estimation; secure air transportation; work analysis; Explosives; Image resolution; Manuals; Security; Throughput; Training; aviation security; explosive detection systems (EDS); human factors; throughput (ID#: 16-9635)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389657&isnumber=7389647
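The throughput effect the study simulates can be illustrated with a much simpler single-lane sketch than the paper's discrete event simulation: every bag gets a primary EDS/X-ray check, and a fraction of bags raise a false alarm requiring slower ETD/manual resolution. All timing values and alarm rates below are illustrative, not the study's parameters.

```python
import random

def screening_throughput(n_bags=1000, primary_s=8.0, far=0.10,
                         resolve_s=60.0, seed=7):
    """Single-lane cabin-baggage screening, processed sequentially.

    primary_s : seconds per primary X-ray/EDS check
    far       : false alarm rate of the EDS machine setting
    resolve_s : seconds to resolve one alarm (ETD / manual search)
    Returns throughput in bags per hour.
    """
    rng = random.Random(seed)
    clock = 0.0
    for _ in range(n_bags):
        clock += primary_s
        if rng.random() < far:          # EDS alarm -> secondary resolution
            clock += resolve_s
    return n_bags / (clock / 3600.0)

low_far = screening_throughput(far=0.05)    # stricter screeners, fewer alarms
high_far = screening_throughput(far=0.25)   # aggressive EDS setting
```

Even this toy model reproduces the study's qualitative finding: throughput is driven by the product of false alarm rate and alarm-resolution time, so a higher-FAR machine setting must be compensated by faster resolution or more resolution capacity.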
E. Kowalczyk and A. Memon, “Extending Manual GUI Testing Beyond Defects by Building Mental Models of Software Behavior,” 2015 30th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW), Lincoln, NE, 2015, pp. 35-41. doi: 10.1109/ASEW.2015.17
Abstract: Manual GUI testing involves providing inputs to the software via its GUI and determining the software's correctness using its outputs, one of them being the GUI itself. Because of its human-in-the-loop nature, GUI testing is known to be a time-consuming activity. In practice, it is done by junior, inexpensive testers to keep costs low at the very tail-end of the software development process. In this paper, we posit that the importance of GUI testing has suffered due to its traditional narrow role -- to detect residual software defects. Because of its human-in-the-loop nature, GUI testing has the potential to provide outputs other than defects and to be used as inputs to several downstream activities, e.g., security analysis. One such output is the mental model that the GUI tester creates during testing, a model that implicitly informs the tester of the software designer's intent. To evaluate our claim, we consider an important question used for security assessment of Android apps: “What permission-sensitive behaviors does this app exhibit?” Our assessment is based on the comparison of 2 mental models of 12 Android apps -- one derived from the app's usage and the other from its public description. We compare these two models with a third, automatically derived model -- the permissions the app seeks from the Android OS. Our results show that the usage-based model provides unique insights into app behavior. This model may be an important outcome of GUI testing, and its consistency with other behavioral information about the app could later be used in software quality assurance activities such as security assessment.
Keywords: Android (operating system); graphical user interfaces; program testing; software quality; Android apps; manual GUI testing; mental models; security assessment; software behavior; software defects; software development process; software quality assurance activities; Androids; Cognitive science; Graphical user interfaces; Humanoid robots; Security; Software; Testing (ID#: 16-9636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7426634&isnumber=7426613
H. Abdul Majid, M. Abdul Majid, M. I. Ibrahim, W. N. S. Wan Manan and M. R. Ramli, “Investigation of Security Awareness on E-Learning System Among Lecturers and Students in Higher Education Institution,” Computer, Communications, and Control Technology (I4CT), 2015 International Conference on, Kuching, 2015, pp. 216-220. doi: 10.1109/I4CT.2015.7219569
Abstract: The advancement of computer and Internet technologies has brought teaching and learning activities to a new dimension. Learners have been virtually moved out of their classrooms into a new learning environment where learning contents and materials are delivered electronically. This new environment, called the e-learning environment, uses the web and other Internet technologies to enhance the teaching and learning experience. The success or failure of any e-learning system depends on how secure the system is; security is very important so that the information contained in the system is not compromised. However, no matter how secure an e-learning system is, security threats always involve the human factor. Humans are identified as the weakest link in information security, and a lack of security awareness, such as password sharing, will compromise the security of an e-learning system. This paper studies the level of information security awareness among e-learning users, particularly students at a Higher Education Institution. The study focuses on evaluating the awareness level, perception, and behavior of e-learning users from International Islamic University Malaysia. The results of this study help the university authority in preparing effective and specific awareness programs in e-learning security for their students.
Keywords: Internet; computer aided instruction; further education; security of data; International Islamic University Malaysia; Internet technologies; e-learning system; higher education; information security; lecturers; security awareness; students; Computers; Electronic learning; Electronic mail; Information security; awareness; behaviors; e-learning; security; students; threats (ID#: 16-9637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219569&isnumber=7219513
T. Aoyama, H. Naruoka, I. Koshijima, W. Machii and K. Seki, “Studying Resilient Cyber Incident Management from Large-Scale Cyber Security Training,” Control Conference (ASCC), 2015 10th Asian, Kota Kinabalu, 2015, pp. 1-4. doi: 10.1109/ASCC.2015.7244713
Abstract: The study of the human contribution to cyber resilience is unexplored terrain in the field of critical infrastructure security. So far, cyber resilience has been discussed as an extension of IT security research, with the current discussion focusing on technical measures and policy preparation to mitigate cyber security risks. This human-factor-based study discusses a methodology for achieving high organizational resiliency through better management. A field observation was conducted during large-scale hands-on cyber security training at ENCS (European Network for Cyber Security, The Hague, NL) to determine management challenges that could occur in a real-world cyber incident. In this paper, the possibility of extending the resilience-engineering framework to assess an organization's behavior in cyber crisis management is discussed.
Keywords: human factors; risk management; security of data; ENCS; European Network for Cyber Security; NL; The Hague; cyber crisis management; cyber incident management; cyber resilience; cyber security risk management; human-factor; large-scale cyber security hands-on training; resilience-engineering framework; Computer security; Games; Monitoring; Organizations; Resilience; Training; critical infrastructure; cyber security; management; resilience engineering (ID#: 16-9638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7244713&isnumber=7244373
C. Tiwari, M. Hanmandlu and S. Vasikarla, “Suspicious Face Detection Based on Eye and other Facial Features Movement Monitoring,” 2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 2015, pp. 1-8. doi: 10.1109/AIPR.2015.7444523
Abstract: Visual surveillance and security applications have never been more important than now, owing to the overwhelming, ever-growing threat of terrorism. To date, large-scale video surveillance systems have mostly worked as passive systems in which videos are simply stored without being monitored; such systems are useful only for post-event investigation. In order to build a system capable of real-time monitoring, we need to develop algorithms that can analyze and understand the scene being monitored. Generally, humans express their intentions explicitly through facial expressions, speech, eye movement, and hand gestures. According to cognitive visiomotor theory, human eye movements are a rich source of information about human intention and behavior. By monitoring the eye movements of a person, we can describe him or her as an abnormal, suspicious person or a normal person. We track the subject's eyes across successive frames of the input video and compute the non-linear entropy of the eye movements. Results of our experiments show that the non-linear eye entropy of an abnormal person is much higher than that of a normal person.
Keywords: feature extraction; object detection; video signal processing; video surveillance; cognitive visiomotor theory; eye feature; facial feature; features movement monitoring; human eye movement; nonlinear entropy; scene analysis; scene understanding; security applications; suspicious face detection; visual surveillance application; Face; Feature extraction; Iris recognition; Monitoring; Nose; Tracking; Visualization (ID#: 16-9639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7444523&isnumber=7444521
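The abstract does not define its "non-linear entropy" measure, so as a plausible stand-in the sketch below computes the Shannon entropy of binned frame-to-frame gaze displacements; the binning scheme and the sample displacement sequences are assumptions for illustration only.

```python
import math
from collections import Counter

def movement_entropy(displacements, n_bins=8):
    """Shannon entropy (bits) of binned eye-movement magnitudes.

    Erratic gaze spreads probability mass across many bins (high
    entropy); steady gaze concentrates it in few bins (low entropy).
    """
    lo, hi = min(displacements), max(displacements)
    width = (hi - lo) / n_bins or 1.0      # guard against a constant signal
    bins = Counter(min(int((d - lo) / width), n_bins - 1)
                   for d in displacements)
    total = len(displacements)
    return -sum((c / total) * math.log2(c / total) for c in bins.values())

# Hypothetical per-frame displacement magnitudes for two subjects.
steady = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1]
erratic = [0.1, 5.0, 2.3, 9.7, 0.4, 7.1, 3.3, 8.8]
```

Thresholding such an entropy score over a sliding window is one simple way the paper's "abnormal vs. normal" decision could be operationalized.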
S. C. Wriessnegger, D. Hackhofer and G. R. Müller-Putz, “Classification of Unconscious Like/Dislike Decisions: First Results Towards a Novel Application for BCI Technology,” Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, Milan, 2015, pp. 2331-2334. doi: 10.1109/EMBC.2015.7318860
Abstract: More and more applications for BCI technology are emerging that are not restricted to communication or control, such as gaming, rehabilitation, Neuro-IS research, neuro-economics, and security. In this context, a so-called passive BCI will be used: a system that derives its outputs from arbitrary brain activity in order to enrich a human-machine interaction with implicit information on the actual user state. Concretely, EEG-based BCI technology enables the use of signals related to attention, intentions, and mental state, without relying on indirect measures based on overt behavior or other physiological signals, which is an important point in, e.g., neuromarketing research. The scope of this pilot EEG study was to detect like/dislike decisions on car stimuli purely by means of ERP analysis; concretely, to determine user preferences concerning different car designs by implementing an offline BCI based on shrinkage LDA classification. Although classification failed in the majority of participants, the elicited early (sub)conscious ERP components reflect user preferences for cars. In a broader sense, this study should pave the way towards a “product design BCI” suitable for neuromarketing research.
Keywords: bioelectric potentials; brain-computer interfaces; electroencephalography; human computer interaction; neurophysiology; signal classification; EEG-based BCI technology; ERP analysis; arbitrary brain activity; brain-computer interface; car stimuli; human-machine interaction; linear discriminant analysis; mental state; neuromarketing research; offline BCI; passive BCI; shrinkage LDA classification; unconscious like-dislike decision classification; Automobiles; Brain; Brain-computer interfaces; Electrodes; Electroencephalography; Neuroscience; Physiology (ID#: 16-9640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7318860&isnumber=7318236
Y. Liu, M. Ficocelli and G. Nejat, “A Supervisory Control Method for Multi-Robot Task Allocation in Urban Search and Rescue,” 2015 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), West Lafayette, IN, USA, 2015, pp. 1-6. doi: 10.1109/SSRR.2015.7443000
Abstract: This paper presents the development of a unique supervisory control architecture for effective task allocation of a heterogeneous multi-robot team in urban search and rescue (USAR) applications. In the proposed approach, the USAR tasks of exploring large unknown cluttered environments and searching for victims are allocated to different robots in the heterogeneous team based on their capabilities. A single human operator is only needed to supervise the team and share tasks with the robots in order to maximize the use of trained operators. Furthermore, the proposed supervisory controller determines the team behavior when faced with robot failures during task execution. Extensive simulated experiments were conducted in USAR-like environments to investigate the performance of the proposed supervisory control method. The results demonstrated that the proposed approach is effective for multi-robot control in USAR applications, and is robust to varying scene scenarios and increasing team size.
Keywords: multi-robot systems; rescue robots; USAR applications; heterogeneous multirobot team; heterogeneous team; multirobot control; multirobot task allocation; supervisory control architecture; supervisory control method; supervisory controller; task execution; team behavior; urban search and rescue applications; Automata; Mathematical model; Resource management; Robot kinematics; Robot sensing systems; Supervisory control (ID#: 16-9641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7443000&isnumber=7442936
G. He, C. Tan, D. Yu and X. Wu, “A Real-Time Network Traffic Anomaly Detection System Based on Storm,” Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2015 7th International Conference on, Hangzhou, 2015, pp. 153-156. doi: 10.1109/IHMSC.2015.152
Abstract: In recent years, with more and more people shopping, chatting, and watching video online, the Internet is playing an increasingly important role in humans' daily lives. Because the Internet is so close to our lives, it contains a great deal of personal information whose divulgence would cause much trouble or even losses. It is therefore necessary and urgent to find an efficient way to detect abnormal network behavior. In this paper, we present a new detection method based on compound sessions. In contrast to previous methods, our approach is based on a cloud computing platform and a cluster system, using the Hadoop Distributed File System (HDFS) for analysis and Twitter Storm to make real-time network anomaly detection possible.
Keywords: IP networks; cloud computing; computer network security; distributed databases; social networking (online); HDFS; Hadoop distributed file system; Twitter Storm; abnormal network behavior detection; cloud computing platform; cluster system; compound session; personal information; real-time network traffic anomaly detection system; Downlink; Fasteners; Monitoring; Real-time systems; Storms; Telecommunication traffic; Uplink; Compound Session; Hadoop platform; enterprise user's behavior; host analytics (ID#: 16-9642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334673&isnumber=7334628
N. Kuntze, C. Rudolph, G. B. Brisbois, M. Boggess, B. Endicott-Popovsky and S. Leivesley, “Security vs. Safety: Why Do People Die Despite Good Safety?,” Integrated Communication, Navigation, and Surveillance Conference (ICNS), 2015, Herndon, VA, 2015, pp. A4-1-A4-10. doi: 10.1109/ICNSURV.2015.7121213
Abstract: This paper will show in detail the differences between safety and security. An argument is made for new system design requirements based on a threat sustainable system (TSS), drawing on threat scanning, flexibility, command and control, systems of systems, human factors, and population dependencies. Principles of sustainability used in historical design processes are considered alongside the complex changes of technology and emerging threat actors. The paper recognizes that technologies and development methods for safety do not work for security: safety carries the notion of one- or two-event protection, but cyber-attacks are multi-event situations. The paper also recognizes the behavior of interconnected systems and modern systems' requirements for national sustainability. System security principles for the sustainability of critical systems are considered in relation to failure, security architecture, quality of service, authentication and trust, and the communication of failure to operators. Design principles for operators are discussed along with recognition of human-factors failures. These principles are then applied as the basis for recommended changes in systems design, with system control dominating the hierarchy of design decisions but with harmonization of safety requirements up to the level of sustaining security. These new approaches are discussed as the basis for future research on adaptive flexible systems that can sustain attacks and the uncertainty of fast-changing technology.
Keywords: national security; protection; safety systems; security of data; sustainable development; authentication; cyber attacks; failure; national sustainability; protection; safety; system security principles; threat scanning; threat sustainable system; trust; Buildings; Control systems; Safety; Software; Terrorism; Transportation (ID#: 16-9643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7121213&isnumber=7121207
A. A. Khalifa, M. A. Hassan, T. A. Khalid and H. Hamdoun, “Comparison Between Mixed Binary Classification and Voting Technique for Active User Authentication Using Mouse Dynamics,” Computing, Control, Networking, Electronics and Embedded Systems Engineering (ICCNEEE), 2015 International Conference on, Khartoum, 2015, pp. 281-286. doi: 10.1109/ICCNEEE.2015.7381378
Abstract: The rapid proliferation of computing processing power has facilitated a rise in the adoption of computers in various aspects of human lives, from education, shopping, and other everyday activities to critical applications in finance, banking and, recently, degree-awarding online education. Several approaches to user authentication based on Behavioral Biometrics (BB) have been suggested in order to identify a unique signature/footprint, improving matching accuracy for genuine users and flagging abnormal behaviors from intruders. In this paper we present a comparison between two classification algorithms for identifying users' behavior using mouse dynamics. The algorithms are based on a support vector machine (SVM) classifier, allowing for direct comparison between different authentication-based metrics. The voting technique shows a low False Acceptance Rate (FAR) and a noticeably small learning time, making it more suitable for incorporation within different authentication applications.
Keywords: behavioural sciences computing; government data processing; learning (artificial intelligence); mouse controllers (computers); pattern classification; security of data; support vector machines; FAR; SVM; active user authentication; behavioral biometrics; false acceptance rate; learning time; mixed binary classification; mouse dynamics; support vector machine; voting technique; Artificial neural networks; Biometrics (access control); active authentication; machine learning; mouse dynamics; pattern recognition; support vector machines (ID#: 16-9644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381378&isnumber=7381351
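The comparison the paper makes between a single binary decision and voting over per-action decisions can be sketched with a tiny, self-contained linear SVM. Everything here is illustrative: the Pegasos-style sub-gradient trainer stands in for the paper's SVM implementation, and the (speed, curvature) mouse features and data are hypothetical.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.

    data: list of 2-D feature tuples; labels: +1 (genuine) / -1 (intruder).
    """
    rng = random.Random(seed)
    w, b, t = [0.0, 0.0], 0.0, 0
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            x, y = data[i], labels[i]
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            w = [(1 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1:                  # hinge-loss sub-gradient step
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    """Per-action binary decision."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

def vote(w, b, session):
    """Majority vote over the per-action decisions of one session."""
    return 1 if sum(predict(w, b, x) for x in session) >= 0 else -1

# Hypothetical (speed, curvature) mouse features per action.
genuine = [(1.0, 0.2), (1.1, 0.3), (0.9, 0.25), (1.05, 0.15)]
intruder = [(0.2, 0.9), (0.3, 1.1), (0.25, 0.95), (0.1, 1.0)]
w, b = train_linear_svm(genuine + intruder, [1] * 4 + [-1] * 4)
```

Voting trades latency (several actions must accrue) for robustness: one noisy action cannot flip the session decision, which is consistent with the lower FAR the paper reports for the voting technique.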
M. Mehra and D. Pandey, “Event Triggered Malware: A New Challenge to Sandboxing,” 2015 Annual IEEE India Conference (INDICON), New Delhi, India, 2015, pp. 1-6. doi: 10.1109/INDICON.2015.7443327
Abstract: Over the years, cyber attacks have turned more sophisticated, directed, and lethal. In recent times, attackers have found new means to bypass advanced and sophisticated methods like sandboxing. Sandboxes emulate and analyze behavior and network activity in an isolated environment, and forensic investigations are performed by combining static analysis with sandbox analysis. The limitation of sandboxing is simulating Human Computer Interaction (HCI), and this is exploited by malware writers for advanced threat models; malware analysis using sandboxing is no longer considered a robust technique. This paper aims to evaluate the effectiveness of sandboxing and the evasion techniques used by malware to evade it. For this analysis we have used Trojan Upclicker, which uses HCI for its injection and execution. Malware analysis was performed on sandboxes like Malwr, Anubis, and a commercial sandbox, based on parameters like files created or modified, registry changes, running processes, memory mapping, network connections to outside domains, signatures, and operating system changes. While Anubis failed to find any irregularity in the malware sample, Malwr was able to diagnose it as malware, and the commercial off-the-shelf sandbox gave comprehensive, detailed results. We conclude that though sandboxing is a better and less complex way of analyzing samples, it still cannot be considered the definitive technique for malware analysis. Nefarious individuals are cognizant of this shortcoming of sandboxes and are developing ever more evasive malware. Efforts need to be put into making these sandboxes simulate HCI events more efficiently.
Keywords: human computer interaction; invasive software; Anubis; HCI; Malwr; Trojan Upclicker; cyber attacks; event triggered malware; forensic investigations; human computer interaction; malware analysis; sandbox analysis; sandboxing method; static analysis; Browsers; Malware; Monitoring; Operating systems; Organizations; Security; Virtual machining; anubis; cuckoo; dynamic analysis; malwr; sandbox (ID#: 16-9645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7443327&isnumber=7443105
Z. Qu, T. Lu, X. Liu, Q. Wu and M. Wang, “A New Method for Human Action Recognition: Discrete HMM with Improved LBG Algorithm,” 2015 IEEE 9th International Conference on Anti-counterfeiting, Security, and Identification (ASID), Xiamen, 2015, pp. 109-113. doi: 10.1109/ICASID.2015.7405672
Abstract: The Hidden Markov Model (HMM) and Vector Quantization (VQ) algorithms are widely used in the field of speech recognition. The contribution of this paper is to introduce these two algorithms into human action recognition, using them to recognize actions in continuous multi-frame video. A Simulated Annealing algorithm and an empty-cavity processing algorithm improve the vector quantization step and yield a globally optimal codebook. The recognition results of the new algorithm are much better than those of the original and traditional algorithms, and the new method also enables the identification of abnormal behavior.
Keywords: gesture recognition; hidden Markov models; simulated annealing; vector quantisation; video coding; HMM algorithm; LBG algorithm; VQ algorithm; discrete HMM; empty cavity processing algorithm; global optimal codebook; hidden Markov model algorithm; human action recognition; multiframes video; simulated annealing algorithm; speech recognition; vector quantization algorithm; HMM; LBG; action recognition; codebook; empty cavity split; simulated annealing; vector quantitation (ID#: 16-9646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405672&isnumber=7405648
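The LBG step that the authors improve is standard vector quantization. As background, here is a minimal NumPy sketch of baseline LBG codebook generation (split every codeword, then refine with Lloyd iterations); this is illustrative only, not the paper's simulated-annealing or empty-cavity variant:

```python
import numpy as np

def lbg_codebook(data, size, eps=0.01, n_iter=20):
    """Grow a VQ codebook to `size` codewords by LBG splitting + Lloyd refinement."""
    codebook = data.mean(axis=0, keepdims=True)   # start from the global centroid
    while len(codebook) < size:
        # split: perturb every codeword into a (1+eps, 1-eps) pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):                   # Lloyd iterations
            dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            for k in range(len(codebook)):
                if np.any(labels == k):           # leave empty cells unchanged
                    codebook[k] = data[labels == k].mean(axis=0)
    return codebook
```

The paper's improvements target exactly the weaknesses visible here: splitting can produce empty cells, and Lloyd refinement only reaches a local optimum, which simulated annealing is meant to escape.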
A. M. Kuruvilla and S. Varghese, “A Detection System to Counter Identity Deception in Social Media Applications,” Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, Nagercoil, 2015, pp. 1-5. doi: 10.1109/ICCPCT.2015.7159321
Abstract: Given the current landscape of the internet, with its plethora of social networking sites and collaborative websites such as Wikipedia, concern about malicious users keeping multiple accounts is of prime importance. Most collaborative sites allow users to easily create an account and start accessing content. In social media services such as collaborative projects, a single user often creates many accounts under different names not long after a block has been applied; a blocked person who creates multiple accounts is called a sockpuppet. Current mechanisms for detecting deception are based on human deception detection (e.g., speech or text). Although these methods have high detection accuracy, they cannot be applied to databases with large volumes of data and are therefore computationally inefficient. There is an efficient method for detecting identity deception that uses both nonverbal behavior (e.g., user activity or movement) and verbal behavior (facial expression, text) in the social media environment. These methods achieve high detection accuracy, and examination and close monitoring indicate they can be applied to any social media environment.
Keywords: groupware; social networking (online); Internet; Wikipedia; collaborative Web sites; collaborative project; collaborative sites; human deception detection; identity deception detection; nonverbal behavior; post examination; social media applications; social media services; social networking sites; sockpuppet; Accuracy; Collaboration; Electronic publishing; Encyclopedias; Media; Deception; accuracy; nonverbal and verbal behavior; security (ID#: 16-9647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159321&isnumber=7159156
H. Y. Shahir, U. Glasser, A. Y. Shahir and H. Wehn, “Maritime Situation Analysis Framework: Vessel Interaction Classification and Anomaly Detection,” Big Data (Big Data), 2015 IEEE International Conference on, Santa Clara, CA, 2015, pp. 1279-1289. doi: 10.1109/BigData.2015.7363883
Abstract: Maritime domain awareness is critical for protecting sea lanes, ports, harbors, offshore structures like oil and gas rigs and other types of critical infrastructure against common threats and illegal activities. Typical examples range from smuggling of drugs and weapons, human trafficking and piracy all the way to terror attacks. Limited surveillance resources constrain maritime domain awareness and compromise full security coverage at all times. This situation calls for innovative intelligent systems for interactive situation analysis to assist marine authorities and security personal in their routine surveillance operations. In this article, we propose a novel situation analysis approach to analyze marine traffic data and differentiate various scenarios of vessel engagement for the purpose of detecting anomalies of interest for marine vessels that operate over some period of time in relative proximity to each other. We consider such scenarios as probabilistic processes and analyze complex vessel trajectories using machine learning to model common patterns. Specifically, we represent patterns as left-to-right Hidden Markov Models and classify them using Support Vector Machines. To differentiate suspicious activities from unobjectionable behavior, we explore fusion of data and information, including kinematic features, geospatial features, contextual information and maritime domain knowledge. Our experimental evaluation shows the effectiveness of the proposed approach using comprehensive real-world vessel tracking data from coastal waters of North America.
Keywords: data analysis; hidden Markov models; learning (artificial intelligence); marine engineering; marine vehicles; pattern classification; probability; security; support vector machines; surveillance; traffic engineering computing; anomaly detection; complex vessel trajectory analysis; contextual information; data fusion; geospatial features; information fusion; innovative intelligent systems; interactive situation analysis; kinematic features; left-to-right hidden Markov models; machine learning; marine traffic data analysis; maritime domain awareness; maritime domain knowledge; maritime situation analysis framework; pattern classification; pattern representation; probabilistic processes; routine surveillance operations; security; support vector machines; vessel engagement; vessel interaction classification; vessel tracking data; Geospatial analysis; Hidden Markov models; Kinematics; Security; Surveillance; Time series analysis; Trajectory; Anomaly Detection; Big Data; Critical Infrastructure Protection; Intelligent Systems; Machine Learning; Maritime Domain Awareness (ID#: 16-9648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363883&isnumber=7363706
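The trajectory patterns in this entry are modeled as left-to-right Hidden Markov Models and scored against observation sequences. A minimal NumPy sketch of the scaled forward algorithm used for such scoring (toy two-state model with illustrative probabilities, not the paper's maritime features):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete observation
    sequence, given initial probs pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

# Left-to-right topology: states only move forward (upper-triangular A)
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.0, 1.0]])
B = np.array([[0.9, 0.1],   # state 0 mostly emits symbol 0
              [0.1, 0.9]])  # state 1 mostly emits symbol 1
```

In a classification pipeline like the paper's, each behavior pattern gets its own HMM, and the per-model log-likelihoods of a trajectory become features for a downstream classifier such as an SVM.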
![]() |
Hard Problems: Security Metrics 2015 |
Measurement is at the core of science. The development of accurate metrics is a major element for achieving a true Science of Security. It is also one of the hard problems to solve. The research cited here was presented in 2014 and 2015.
R. Slayton, “Measuring Risk: Computer Security Metrics, Automation, and Learning,” in IEEE Annals of the History of Computing, vol. 37, no. 2, pp. 32-45, Apr.-June 2015. doi: 10.1109/MAHC.2015.30
Abstract: Risk management is widely seen as the basis for cybersecurity in contemporary organizations, but practitioners continue to dispute its value. This article analyzes debate over computer security risk management in the 1970s and 1980s United States, using this debate to enhance our understanding of the value of computer security metrics more generally. Regulators placed a high value on risk analysis and measurement because of their association with objectivity, control, and efficiency. However, practitioners disputed the value of risk analysis, questioning the final measurement of risk. The author argues that computer security risk management was most valuable not because it provided an accurate measure of risk, but because the process of accounting for risks could contribute to organizational learning. Unfortunately, however, organizations were sorely tempted to go through the motions of risk management without engaging in the more difficult process of learning.
Keywords: risk management; security of data; automation; computer security risk management; cybersecurity; organizational learning; risk analysis; risk measurement; Computer security; Government policies; History; Measurement; Risk management; computer security; history of computing; measurement; metrics; policies; risk assessment (ID#: 16-9687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116460&isnumber=7116418
C. Vellaithurai, A. Srivastava, S. Zonouz and R. Berthier, “CPIndex: Cyber-Physical Vulnerability Assessment for Power-Grid Infrastructures,” in IEEE Transactions on Smart Grid, vol. 6, no. 2, pp. 566-575, March 2015. doi: 10.1109/TSG.2014.2372315
Abstract: To protect complex power-grid control networks, power operators need efficient security assessment techniques that take into account both cyber side and the power side of the cyber-physical critical infrastructures. In this paper, we present CPINDEX, a security-oriented stochastic risk management technique that calculates cyber-physical security indices to measure the security level of the underlying cyber-physical setting. CPINDEX installs appropriate cyber-side instrumentation probes on individual host systems to dynamically capture and profile low-level system activities such as interprocess communications among operating system assets. CPINDEX uses the generated logs along with the topological information about the power network configuration to build stochastic Bayesian network models of the whole cyber-physical infrastructure and update them dynamically based on the current state of the underlying power system. Finally, CPINDEX implements belief propagation algorithms on the created stochastic models combined with a novel graph-theoretic power system indexing algorithm to calculate the cyber-physical index, i.e., to measure the security-level of the system's current cyber-physical state. The results of our experiments with actual attacks against a real-world power control network shows that CPINDEX, within few seconds, can efficiently compute the numerical indices during the attack that indicate the progressing malicious attack correctly.
Keywords: Bayes methods; graph theory; power engineering computing; power grids; power system control; power system security; risk management; stochastic processes; CPIndex; cyber-physical critical infrastructures; cyber-physical security indices; cyber-physical vulnerability assessment; cyber-side instrumentation probes; graph-theoretic power system indexing algorithm; interprocess communications; numerical indices; operating system assets; power network configuration; power operators; power-grid Infrastructures; power-grid control networks; security assessment techniques; security-oriented stochastic risk management technique; stochastic Bayesian network models; Generators; Indexes; Power measurement; Security; Smart grids; Cyber-physical security metrics; cyber-physical systems; intrusion detection systems; situational awareness (ID#: 16-9688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979242&isnumber=7042857
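CPINDEX's stochastic Bayesian network models are far richer than can be shown here, but the core belief-update idea (probe observations shifting the probability that an asset is compromised) can be sketched with a toy exact-inference example; all probabilities below are illustrative assumptions, not values from the paper:

```python
def posterior_compromise(evidence, prior=0.05, p_true=0.8, p_false=0.1):
    """Exact posterior P(host compromised | binary probe observations).

    Assumes probes are conditionally independent given the compromise state:
    each observation is 1 (suspicious) with probability p_true if the host is
    compromised, p_false otherwise. All numbers here are illustrative.
    """
    like = {1: prior, 0: 1.0 - prior}
    for obs in evidence:
        for h in (1, 0):
            p_suspicious = p_true if h else p_false
            like[h] *= p_suspicious if obs else 1.0 - p_suspicious
    return like[1] / (like[1] + like[0])
```

A security index in this spirit would aggregate such posteriors across assets; CPINDEX additionally uses belief propagation and a graph-theoretic power system indexing algorithm, neither of which this sketch attempts.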
H. Holm, K. Shahzad, M. Buschle and M. Ekstedt, “P2CySeMoL: Predictive, Probabilistic Cyber Security Modeling Language,” in IEEE Transactions on Dependable and Secure Computing, vol. 12, no. 6, pp. 626-639, Nov.-Dec. 1 2015. doi: 10.1109/TDSC.2014.2382574
Abstract: This paper presents the Predictive, Probabilistic Cyber Security Modeling Language (P2CySeMoL), an attack graph tool that can be used to estimate the cyber security of enterprise architectures. P2CySeMoL includes theory on how attacks and defenses relate quantitatively; thus, users must only model their assets and how these are connected in order to enable calculations. The performance of P2CySeMoL enables quick calculations of large object models. It has been validated on both a component level and a system level using literature, domain experts, surveys, observations, experiments and case studies.
Keywords: estimation theory; formal languages; graph theory; probability; security of data; software architecture; P2CySeMoL; attack graph tool; cyber security estimation; enterprise architecture; predictive probabilistic cyber security modeling language; Computational modeling; Computer architecture; Computer security; Data models; Predictive models; Probabilistic logic; attack graphs; risk management; security metrics (ID#: 16-9689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990572&isnumber=7322332
C. Liu, N. Yang, J. Yuan and R. Malaney, “Location-Based Secure Transmission for Wiretap Channels,” in IEEE Journal on Selected Areas in Communications, vol. 33, no. 7, pp. 1458-1470, July 2015. doi: 10.1109/JSAC.2015.2430211
Abstract: Location information has been shown to be useful for a wide variety of applications in wireless networks, while its role in physical layer security has so far drawn little attention. In this work, we propose a new location-based secure transmission scheme for wiretap channels, where the accurate locations of the sources, destinations and any other authorized transceivers are known, but only an estimate of the eavesdropper's location is available. We outline how such an estimate of the eavesdropper's location can still allow for quantitative assessment of key security metrics. To provide focus, we describe how optimization of the effective secrecy throughput of a relay wiretap channel is obtained in our scheme, and investigate in detail the impact of the location uncertainty on the system performance. The work reported here provides insights into the design of new location-based physical layer security schemes in which the only information available on an eavesdropper is a noisy estimate of her location.
Keywords: optimisation; radio networks; radio transceivers; telecommunication security; wireless channels; authorized transceivers; eavesdropper location; key security metrics; location based secure transmission; location based secure transmission scheme; location information; physical layer security; quantitative assessment; wireless networks; wiretap channels; Physical layer; Relays; Security; Signal to noise ratio; Synchronization; Location; artificial noise; relay networks; wiretap channel (ID#: 16-9690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102687&isnumber=7128777
L. Wang, K. J. Kim, T. Q. Duong, M. Elkashlan and H. V. Poor, “Security Enhancement of Cooperative Single Carrier Systems,” in IEEE Transactions on Information Forensics and Security, vol. 10, no. 1, pp. 90-103, Jan. 2015. doi: 10.1109/TIFS.2014.2360437
Abstract: In this paper, the impact of multiple active eavesdroppers on cooperative single carrier systems with multiple relays and multiple destinations is examined. To achieve the secrecy diversity gains in the form of opportunistic selection, a two-stage scheme is proposed for joint relay and destination selection, in which, after the selection of the relay with the minimum effective maximum signal-to-noise ratio (SNR) to a cluster of eavesdroppers, the destination that has the maximum SNR from the chosen relay is selected. To accurately assess the secrecy performance, exact and asymptotic expressions are obtained in closed form for several security metrics, including the secrecy outage probability, probability of nonzero secrecy rate, and ergodic secrecy rate in frequency selective fading. Based on the asymptotic analysis, key design parameters, such as secrecy diversity gain, secrecy array gain, secrecy multiplexing gain, and power cost, are characterized, from which new insights are drawn. In addition, it is concluded that secrecy performance limits occur when the average received power at the eavesdropper is proportional to the counterpart at the destination. In particular, for the secrecy outage probability, it is confirmed that the secrecy diversity gain collapses to zero with outage floor, whereas for the ergodic secrecy rate, it is confirmed that its slope collapses to zero with capacity ceiling.
Keywords: cooperative communication; probability; relay networks (telecommunication); telecommunication network reliability; telecommunication security; SNR; asymptotic analysis; asymptotic expression; capacity ceiling; cooperative single carrier system security enhancement; ergodic secrecy rate; frequency selective fading; multiple active eavesdropper; multiple relay selection; opportunistic selection; outage probability; secrecy diversity gain; signal-to-noise ratio; two-stage scheme; Diversity methods; Fading; Receivers; Relays; Security; Signal to noise ratio; Wireless communication; Cooperative transmission; physical layer security; secrecy ergodic rate; secrecy outage probability; single carrier transmission (ID#: 16-9691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6910233&isnumber=6973056
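Several of the metrics above, the secrecy outage probability in particular, are easy to estimate numerically. A Monte Carlo sketch for a single Rayleigh-faded link with illustrative SNRs (not the paper's multi-relay, multi-destination setup), checked against the known single-link closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
gamma_d, gamma_e, rate = 10.0, 2.0, 1.0   # mean SNRs (linear) and target secrecy rate

# Rayleigh fading: instantaneous SNR is exponentially distributed around its mean
snr_d = rng.exponential(gamma_d, n)
snr_e = rng.exponential(gamma_e, n)

# Secrecy capacity of each fading realization, clipped at zero
c_s = np.maximum(0.0, np.log2(1 + snr_d) - np.log2(1 + snr_e))
p_out = np.mean(c_s < rate)               # secrecy outage probability estimate

# Closed form for this single-link Rayleigh case, used as a sanity check
closed = 1 - (gamma_d / (gamma_d + 2**rate * gamma_e)) * np.exp(-(2**rate - 1) / gamma_d)
```

The closed form follows from integrating P(snr_d ≥ 2^R(1 + snr_e) − 1) over the exponential distribution of snr_e; the papers above derive analogous (but more involved) expressions for relay selection and frequency-selective fading.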
R. Bulbul, P. Sapkota, C. W. Ten, L. Wang and A. Ginter, “Intrusion Evaluation of Communication Network Architectures for Power Substations,” in IEEE Transactions on Power Delivery, vol. 30, no. 3, pp. 1372-1382, June 2015. doi: 10.1109/TPWRD.2015.2409887
Abstract: Electronic elements of a substation control system have been recognized as critical cyberassets due to the increased complexity of the automation system that is further integrated with physical facilities. Since this can be executed by unauthorized users, the security investment of cybersystems remains one of the most important factors for substation planning and maintenance. As a result of these integrated systems, intrusion attacks can impact operations. This work systematically investigates the intrusion resilience of the ten architectures between a substation network and others. In this paper, two network architectures comparing computer-based boundary protection and firewall-dedicated virtual local-area networks are detailed, that is, architectures one and ten. A comparison on the remaining eight architecture models was performed. Mean time to compromise is used to determine the system operational period. Simulation cases have been set up with the metrics based on different levels of attackers' strength. These results as well as sensitivity analysis show that implementing certain architectures would enhance substation network security.
Keywords: firewalls; investment; local area networks; maintenance engineering; power system planning; safety systems; substation automation; substation protection; automation system; communication network architectures; computer-based boundary protection; cybersystems; electronic elements; firewall-dedicated virtual local-area networks; intrusion attacks; intrusion evaluation; intrusion resilience; power substations; security investment; sensitivity analysis; substation control system; substation maintenance; substation network security; substation planning; unauthorized users; Computer architecture; Modems; Protocols; Security; Servers; Substations; Tin; Cyberinfrastructure; electronic intrusion; network security planning; power substation (ID#: 16-9692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054545&isnumber=7110680
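The ranking metric in this entry is mean time to compromise. Under the common simplifying assumption that each defense layer's compromise time is exponentially distributed (an assumption of this sketch, not necessarily the paper's model), MTTC for a chain of layers can be estimated by simulation or computed as the sum of the per-layer means:

```python
import random

random.seed(0)

def mttc(layer_rates, trials=20_000):
    """Monte Carlo mean time to compromise a chain of defense layers,
    assuming each layer's compromise time is exponential with the given rate."""
    total = 0.0
    for _ in range(trials):
        total += sum(random.expovariate(r) for r in layer_rates)
    return total / trials

# A stronger architecture has smaller rates, i.e. longer expected compromise times
weak = mttc([1 / 5.0, 1 / 20.0])      # expected 5 + 20 = 25 time units
strong = mttc([1 / 50.0, 1 / 200.0])  # expected 50 + 200 = 250 time units
```

Comparing such estimates across candidate architectures, as the paper does across its ten substation network designs, turns "which boundary protection is better?" into a quantitative question.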
P. L. Yu, G. Verma and B. M. Sadler, “Wireless Physical Layer Authentication via Fingerprint Embedding,” in IEEE Communications Magazine, vol. 53, no. 6, pp. 48-53, June 2015. doi: 10.1109/MCOM.2015.7120016
Abstract: Authentication is a fundamental requirement for secure communications. In this article, we describe a general framework for fingerprint embedding at the physical layer in order to provide message authentication that is secure and bandwidth-efficient. Rather than depending on channel or device characteristics that are outside of our control, deliberate fingerprint embedding for message authentication enables control over performance trade-offs by design. Furthermore, low-power fingerprint designs enhance security by making the authentication tags less accessible to adversaries. We define metrics for communications and authentication performance, and discuss the trade-offs in system design. Results from our wireless software-defined radio experiments validate the theory and demonstrate the low complexity, practicality, and enhanced security of the approach.
Keywords: fingerprint identification; message authentication; telecommunication security; fingerprint embedding; low-power fingerprint designs; secure communications; wireless physical layer authentication; wireless software-defined radio; Authentication; Bit error rate; Fingerprint recognition; Network security; Physical layer; Receivers; Signal to noise ratio; Wireless networks (ID#: 16-9693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120016&isnumber=7120004
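The fingerprint-embedding idea can be illustrated with a toy baseband simulation: superimpose a low-power keyed tag on the message and authenticate by correlation at the receiver. The power split, noise level, and threshold below are assumptions for illustration, not the article's design:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16_384
message = rng.choice([-1.0, 1.0], n)   # unit-power message symbols
tag = rng.choice([-1.0, 1.0], n)       # keyed fingerprint, known to legitimate receivers
rho = 0.1                              # small fraction of amplitude given to the tag

tx = np.sqrt(1 - rho**2) * message + rho * tag   # superimposed signal, total power ~1
rx = tx + rng.normal(0.0, 0.5, n)                # AWGN channel

# An aware receiver correlates against the expected tag; an untagged signal
# (or an adversary without the key) produces only noise in this statistic
stat_tagged = rx @ tag / n
stat_untagged = (message + rng.normal(0.0, 0.5, n)) @ tag / n
threshold = 0.05                       # roughly halfway between E[stat] = rho and 0
```

Because rho is small, the tag costs little message SNR and is hard for an adversary to extract, which is the low-power security property the article emphasizes.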
T. Qin, X. Guan, C. Wang and Z. Liu, “MUCM: Multilevel User Cluster Mining Based on Behavior Profiles for Network Monitoring,” in IEEE Systems Journal, vol. 9, no. 4, pp. 1322-1333, Dec. 2015. doi: 10.1109/JSYST.2014.2350019
Abstract: Mastering user's behavior character is important for efficient network management and security monitoring. In this paper, we develop a novel framework named as multilevel user cluster mining (MUCM) to measure user's behavior similarity under different network prefix levels. Focusing on aggregated traffic behavior under different network prefixes cannot only reduce the number of traffic flows but also reveal detailed patterns for a group of users sharing similar behaviors. First, we employ the bidirectional flow and bipartite graphs to model network traffic characteristics in large-scale networks. Four traffic features are then extracted to characterize the user's behavior profiles. Second, an efficient method with adjustable weight factors is employed to calculate the user's behavior similarity, and entropy gain is applied to select the weight factor adaptively. Using the behavior similarity metrics, a simple clustering algorithm based on κ-means is employed to perform user clustering based on behavior profiles. Finally, we examine the applications of behavior clustering in profiling network traffic patterns and detecting anomalous behaviors. The efficiency of our methods is verified with extensive experiments using actual traffic traces collected from the northwest region center of China Education and Research Network (CERNET), and the cluster results can be used for flow control and traffic security monitoring.
Keywords: complex networks; computer network management; computer network security; data mining; graph theory; pattern clustering; CERNET; China Education and Research Network; MUCM; behavior clustering; behavior profiles; behavior similarity metrics; bidirectional flow; bipartite graphs; large-scale networks; multilevel user cluster mining; network management; network monitoring; network prefix levels; network traffic characteristics; north-west region center; profiling network traffic patterns; traffic security monitoring; user behavior character; user behavior profiles; user behavior similarity; weight factor; Communities; Feature extraction; IP networks; Monitoring; Ports (Computers); Protocols; Security; Behavior profiles; different prefix levels; regional flow model; user clustering (ID#: 16-9694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892980&isnumber=7332994
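The final clustering step in MUCM is k-means over behavior-profile features. A minimal NumPy sketch with made-up flow features (plain Euclidean k-means with farthest-point initialization, without the paper's entropy-weighted similarity):

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain k-means (Lloyd's algorithm) with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])          # next center = farthest point
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy behavior profiles: [flows/hour, mean bytes, distinct ports, distinct peers]
rng = np.random.default_rng(1)
web_users = rng.normal([40, 900, 4, 12], [5, 50, 1, 2], (100, 4))
scanners = rng.normal([600, 60, 150, 300], [30, 10, 10, 20], (100, 4))
X = np.vstack([web_users, scanners])
labels, centers = kmeans(X, 2)
```

In the paper's setting the feature vectors are aggregated per network prefix and the similarity is entropy-weighted, but the clustering mechanics are the same: users that land in the same cluster share a traffic behavior profile.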
T. Lv, H. Gao and S. Yang, “Secrecy Transmit Beamforming for Heterogeneous Networks,” in IEEE Journal on Selected Areas in Communications, vol. 33, no. 6, pp. 1154-1170, June 2015. doi: 10.1109/JSAC.2015.2416984
Abstract: In this paper, we pioneer the study of physical-layer security in heterogeneous networks (HetNets). We investigate secure communications in a two-tier downlink HetNet, which comprises one macrocell and several femtocells. Each cell has multiple users and an eavesdropper attempts to wiretap the intended macrocell user. First, we consider an orthogonal spectrum allocation strategy to eliminate co-channel interference, and propose the secrecy transmit beamforming only operating in the macrocell (STB-OM) as a partial solution for secure communication in HetNet. Next, we consider a secrecy-oriented non-orthogonal spectrum allocation strategy and propose two cooperative STBs which rely on the collaboration amongst the macrocell base station (MBS) and the adjacent femtocell base stations (FBSs). Our first cooperative STB is the STB sequentially operating in the macrocell and femtocells (STB-SMF), where the cooperative FBSs individually design their STB matrices and then feed their performance metrics to the MBS for guiding the STB in the macrocell. Aiming to improve the performance of STB-SMF, we further propose the STB jointly designed in the macrocell and femtocells (STB-JMF), where all cooperative FBSs feed channel state information to the MBS for designing the joint STB. Unlike conventional STBs conceived for broadcasting or interference channels, the three proposed STB schemes all entail relatively sophisticated optimizations due to QoS constraints of the legitimate users. To efficiently use these STB schemes, the original optimization problems are reformulated and convex optimization techniques, such as second-order cone programming and semidefinite programming, are invoked to obtain the optimal solutions. Numerical results demonstrate that the proposed STB schemes are highly effective in improving the secrecy rate performance of HetNet.
Keywords: array signal processing; cochannel interference; convex programming; cooperative communication; femtocellular radio; interference suppression; matrix algebra; radio spectrum management; telecommunication security; wireless channels; FBS; MBS; QoS constraint; STB-OM; Secrecy Transmit Beamforming; broadcasting; channel state information; cochannel interference elimination; convex optimization technique; cooperative STB; femtocell base station; heterogeneous network; macrocell base station; optimization problem; orthogonal spectrum allocation strategy; physical-layer security; second-order cone programming; secrecy-oriented nonorthogonal spectrum allocation strategy; secure communication; semidefinite programming; two-tier downlink HetNet; Array signal processing; Femtocells; Interference; Macrocell networks; Quality of service; Resource management; Vectors; Beamforming; femtocell; nonconvex optimization; semidefinite programming (SDP) (ID#: 16-9695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070667&isnumber=7108084
M. Mozaffari-Kermani, S. Sur-Kolay, A. Raghunathan and N. K. Jha, “Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare,” in IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 6, pp. 1893-1905, Nov. 2015. doi: 10.1109/JBHI.2014.2344095
Abstract: Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a diagnosis may have life-threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine-learning algorithm and even abandon the entire system but also such a false positive classification may cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
Keywords: health care; learning (artificial intelligence); medical computing; pattern classification; security of data; application domains; arbitrary errors; biomedicine; critical decisions; false diagnosis prompt users; false positive classification; health-related applications; healthcare; life-threatening consequences; machine-learning algorithms; malicious data; patient distress; security breach; systematic poisoning attacks; targeted errors; training datasets; Machine learning algorithms; Malware; Security; Training; Healthcare; machine learning; poisoning attacks; security (ID#: 16-9696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868201&isnumber=7317613
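The poisoning mechanism this entry describes, mislabeled training points steering a learner toward targeted errors, can be demonstrated on synthetic data. The sketch below uses a nearest-centroid classifier as a simple stand-in for the paper's six ML algorithms, with invented Gaussian classes; it is not the authors' attack procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for a two-class healthcare dataset (2-D feature space)
X0 = rng.normal([0.0, 0.0], 1.0, (200, 2))
X1 = rng.normal([4.0, 4.0], 1.0, (200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

def fit_predict(Xtr, ytr, Xte):
    """Nearest-centroid classifier: assign each point to the closer class mean."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    closer_to_c1 = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return closer_to_c1.astype(int)

acc_clean = (fit_predict(X, y, X) == y).mean()

# Poisoning: inject class-0-labeled points deep inside class 1's region,
# dragging the class-0 centroid toward class 1 and inducing targeted errors
poison = rng.normal([4.0, 4.0], 0.5, (300, 2))
Xp = np.vstack([X, poison])
yp = np.concatenate([y, np.zeros(300, dtype=int)])
acc_poisoned = (fit_predict(Xp, yp, X) == y).mean()

# Countermeasure sketch, in the spirit of the paper's defenses: flag a run
# whose accuracy on held-out clean data deviates from the clean baseline
suspicious = (acc_clean - acc_poisoned) > 0.02
```

The accuracy-deviation flag mirrors the paper's countermeasure of tracking and detecting deviations in accuracy metrics; real defenses would also examine class-conditional error rates and the training data itself.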
J. M. Chang, P. C. Tsou, I. Woungang, H. C. Chao and C. F. Lai, “Defending Against Collaborative Attacks by Malicious Nodes in MANETs: A Cooperative Bait Detection Approach,” in IEEE Systems Journal, vol. 9, no. 1, pp. 65-75, March 2015. doi: 10.1109/JSYST.2013.2296197
Abstract: In mobile ad hoc networks (MANETs), a primary requirement for the establishment of communication among nodes is that nodes should cooperate with each other. In the presence of malevolent nodes, this requirement may lead to serious security concerns; for instance, such nodes may disrupt the routing process. In this context, preventing or detecting malicious nodes launching grayhole or collaborative blackhole attacks is a challenge. This paper attempts to resolve this issue by designing a dynamic source routing (DSR)-based routing mechanism, which is referred to as the cooperative bait detection scheme (CBDS), that integrates the advantages of both proactive and reactive defense architectures. Our CBDS method implements a reverse tracing technique to help in achieving the stated goal. Simulation results are provided, showing that in the presence of malicious-node attacks, the CBDS outperforms the DSR, 2ACK, and best-effort fault-tolerant routing (BFTR) protocols (chosen as benchmarks) in terms of packet delivery ratio and routing overhead (chosen as performance metrics).
Keywords: cooperative communication; mobile ad hoc networks; routing protocols; telecommunication security; BFTR protocol; CBDS; DSR; MANET; best-effort fault-tolerant routing protocol; collaborative blackhole attack; cooperative bait detection approach; dynamic source routing; grayhole attack; malevolent nodes; malicious nodes; packet delivery ratio; reverse tracing; routing overhead; routing process; serious security concerns; Ad hoc networks; Collaboration; Delays; Mobile computing; Routing; Security; Cooperative bait detection scheme (CBDS); collaborative bait detection; collaborative blackhole attacks; detection mechanism; dynamic source routing (DSR); grayhole attacks; malicious node; mobile ad hoc network (MANET) (ID#: 16-9697)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6708418&isnumber=7053977
A. Rabbachin, A. Conti and M. Z. Win, “Wireless Network Intrinsic Secrecy,” in IEEE/ACM Transactions on Networking, vol. 23, no. 1, pp. 56-69, Feb. 2015. doi: 10.1109/TNET.2013.2297339
Abstract: Wireless secrecy is essential for communication confidentiality, health privacy, public safety, information superiority, and economic advantage in the modern information society. Contemporary security systems are based on cryptographic primitives and can be complemented by techniques that exploit the intrinsic properties of a wireless environment. This paper develops a foundation for design and analysis of wireless networks with secrecy provided by intrinsic properties such as node spatial distribution, wireless propagation medium, and aggregate network interference. We further propose strategies that mitigate eavesdropping capabilities, and we quantify their benefits in terms of network secrecy metrics. This research provides insights into the essence of wireless network intrinsic secrecy and offers a new perspective on the role of network interference in communication confidentiality.
Keywords: cryptography; interference (signal); radio networks; telecommunication security; aggregate network interference; communication confidentiality; contemporary security system; cryptographic primitive; eavesdropping mitigation; network secrecy metric; node spatial distribution; wireless network intrinsic secrecy; wireless propagation medium; Indexes; Interference; Measurement; Receivers; Transmitters; Wireless networks; Network secrecy; fading channels; interference exploitation; stochastic geometry; wireless networks (ID#: 16-9698)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6732960&isnumber=7041254
C. Liang and F. R. Yu, “Wireless Network Virtualization: A Survey, Some Research Issues and Challenges,” in IEEE Communications Surveys & Tutorials, vol. 17, no. 1, pp. 358-380, Firstquarter 2015. doi: 10.1109/COMST.2014.2352118
Abstract: Since wireless network virtualization enables abstraction and sharing of infrastructure and radio spectrum resources, the overall expenses of wireless network deployment and operation can be reduced significantly. Moreover, wireless network virtualization can provide easier migration to newer products or technologies by isolating part of the network. Despite the potential vision of wireless network virtualization, several significant research challenges remain to be addressed before widespread deployment of wireless network virtualization, including isolation, control signaling, resource discovery and allocation, mobility management, network management and operation, and security as well as non-technical issues such as governance regulations, etc. In this paper, we provide a brief survey on some of the works that have already been done to achieve wireless network virtualization, and discuss some research issues and challenges. We identify several important aspects of wireless network virtualization: overview, motivations, framework, performance metrics, enabling technologies, and challenges. Finally, we explore some broader perspectives in realizing wireless network virtualization.
Keywords: mobility management (mobile radio); radio networks; telecommunication computing; virtualisation; control signaling; governance regulations; mobility management; network management; nontechnical issues; performance metrics; radio spectrum resources; resource allocation; resource discovery; wireless network deployment; wireless network virtualization; Business; Indium phosphide; Mobile communication; Virtual private networks; Virtualization; Wireless networks; Wireless network virtualization; abstraction and sharing; cloud computing; cognitive radio and networks; isolation (ID#: 16-9699)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6887287&isnumber=7061782
R. Azarderakhsh and M. Mozaffari-Kermani, “High-Performance Two-Dimensional Finite Field Multiplication and Exponentiation for Cryptographic Applications,” in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 34, no. 10, pp. 1569-1576, Oct. 2015. doi: 10.1109/TCAD.2015.2424928
Abstract: Finite field arithmetic operations have been traditionally used in applications ranging from error control coding to cryptographic computations. Among these computations are normal basis multiplications and exponentiations, which are attractive because squaring (and subsequent powering by two) of elements can be obtained with no hardware complexity. In this paper, we present 2-D decomposition systolic-oriented algorithms to develop systolic structures for digit-level Gaussian normal basis multiplication and exponentiation over GF(2m). The proposed high-performance architectures are suitable for a number of applications, e.g., architectures for the elliptic curve Diffie-Hellman key agreement scheme in cryptography. Benchmarks of efficiency, performance, and implementation metrics on a 65-nm application-specific integrated circuit platform confirm that the multiplication and exponentiation architectures presented in this paper are suitable for high-speed applications, including cryptography.
Keywords: Galois fields; parallel architectures; public key cryptography; 2D decomposition systolic-oriented algorithms; application-specific integrated circuit platform; cryptographic applications; cryptographic computations; digit-level Gaussian normal basis exponentiation; digit-level Gaussian normal basis multiplication; elliptic curve Diffie-Hellman key agreement scheme; error control coding; exponentiation architectures; finite field arithmetic operations; hardware complexity; high-performance architectures; high-performance structures; high-performance two-dimensional finite field exponentiation; high-performance two-dimensional finite field multiplication; multiplication architectures; systolic structures; Arrays; Complexity theory; Cryptography; Gaussian processes; Hardware; Logic gates; Gaussian normal basis; Gaussian normal basis (GNB); finite field; security; systolic architecture (ID#: 16-9700)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7090958&isnumber=7271134
L. Lu and Y. Liu, “Safeguard: User Reauthentication on Smartphones via Behavioral Biometrics,” in IEEE Transactions on Computational Social Systems, vol. 2, no. 3, pp. 53-64, Sept. 2015. doi: 10.1109/TCSS.2016.2517648
Abstract: With the emergence of smartphones as an essential part of daily life, the demand for user reauthentication has increased manifold. The effective and widely practiced biometric schemes are based upon the principle of “who you are,” which utilizes inherent and unique characteristics of the user. In this context, behavioral biometrics such as sliding dynamics and pressure intensity make use of on-screen sliding movements to infer the user's patterns. In this paper, we present Safeguard, an accurate and efficient smartphone user reauthentication (verification) system based upon on-screen finger movements. The computation and processing are performed at the back end, transparent to the users. The key feature of the proposed system lies in fine-grained on-screen biometric metrics, i.e., sliding dynamics and pressure intensity, which are unique to each user under diverse scenarios. We first implement our scheme through five machine learning approaches and finally select the support vector machine (SVM)-based approach due to its high accuracy. We further analyze Safeguard to be robust against adversary imitation. We validate the efficacy of our approach through implementation on an off-the-shelf smartphone followed by practical evaluation under different scenarios. We process a set of more than 50 000 effective samples derived from a raw dataset of over 10 000 slides collected from each of the 60 volunteers over a period of one month. The experimental results show that Safeguard can verify a user with 0.03% false acceptance rate (FAR) and 0.05% false rejection rate (FRR) within 0.3 s with 15 to 20 slides by the user. The FRR of our system adequately meets the European Standard for Access Control Systems, whereas the FAR differs by 0.029%. Our future work aims to integrate multitouch sliding movements into the existing scheme.
Keywords: formal verification; inference mechanisms; learning (artificial intelligence); message authentication; mobile computing; smart phones; support vector machines; SVM; Safeguard; adversary imitation; behavioral biometric; machine learning; on-screen sliding movement; smart phone; support vector machine; user pattern inference; user reauthentication system; user verification system; Authentication; Biometrics (access control); Machine learning; Security; Smart phones; Support vector machines; Behavioral biometrics; reauthentication; security; smartphone (ID#: 16-9701)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7414441&isnumber=7419212
J. Lu, G. Wang, W. Deng and K. Jia, “Reconstruction-Based Metric Learning for Unconstrained Face Verification,” in IEEE Transactions on Information Forensics and Security, vol. 10, no. 1, pp. 79-89, Jan. 2015. doi: 10.1109/TIFS.2014.2363792
Abstract: In this paper, we propose a reconstruction-based metric learning method to learn a discriminative distance metric for unconstrained face verification. Unlike conventional metric learning methods, which only consider the label information of training samples and ignore the reconstruction residual information in the learning procedure, we apply a reconstruction criterion to learn a discriminative distance metric. For each training example, the distance metric is learned by enforcing a margin between the intraclass sparse reconstruction residual and the interclass sparse reconstruction residual, so that the reconstruction residual of training samples can be effectively exploited to compute the between-class and within-class variations. To better use multiple features for distance metric learning, we propose a reconstruction-based multimetric learning method to collaboratively learn multiple distance metrics, one for each feature descriptor, to remove uncorrelated information for recognition. We evaluate our proposed methods on the Labelled Faces in the Wild (LFW) and YouTube face data sets, and our experimental results clearly show the superiority of our methods over both previous metric learning methods and several state-of-the-art unconstrained face verification methods.
Keywords: face recognition; image reconstruction; learning (artificial intelligence); LFW data set; Labelled Faces in the Wild data set; YouTube face data sets; discriminative distance metric; distance metric learning; feature descriptor; interclass sparse reconstruction residual; reconstruction-based multimetric learning method; unconstrained face verification; Face; Feature extraction; Image reconstruction; Learning systems; Measurement; Training; Face recognition; metric learning; reconstruction-based learning (ID#: 16-9702)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6926840&isnumber=6973056
A. Gharanjik, M. R. Bhavani Shankar, P.-D. Arapoglou and B. Ottersten, “Multiple Gateway Transmit Diversity in Q/V Band Feeder Links,” in IEEE Transactions on Communications, vol. 63, no. 3, pp. 916-926, March 2015. doi: 10.1109/TCOMM.2014.2385703
Abstract: Design of high-bandwidth and reliable feeder links is central to provisioning new services on the user link of a multibeam satellite communication system. Toward this, utilization of the Q/V band and exploitation of multiple gateways (GWs) as a transmit diversity measure for overcoming severe propagation effects are being considered. In this context, this contribution deals with the design of a feeder link comprising N+P GWs (N active and P redundant GWs). Toward provisioning the desired availability, a novel switching scheme is analyzed and practical aspects such as prediction-based switching and switching rate are discussed. Unlike most relevant works, a dynamic rain attenuation model is used to analytically derive the average outage probability in the fundamental 1 + 1 GW case. Building on this result, an analysis for the N+P scenario leading to a quantification of the end-to-end performance is provided. This analysis aids system sizing by illustrating the interplay between the number of active and redundant GWs on the chosen metrics: average outage and average switching rate.
Keywords: electromagnetic wave attenuation; probability; satellite communication; Q/V band feeder links; attenuation model; average outage probability; multibeam satellite communication system; multiple gateway transmit diversity; prediction-based switching; reliable feeder links; Attenuation; Correlation; Logic gates; Rain; Satellites; Signal to noise ratio; Switches; N+P scheme; Q/V band; feeder link; gateway diversity; rain attenuation (ID#: 16-9703)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6995937&isnumber=7060754
Y. Xu, Z. Y. Dong, C. Xiao, R. Zhang and K. P. Wong, “Optimal Placement of Static Compensators for Multi-Objective Voltage Stability Enhancement of Power Systems,” in IET Generation, Transmission & Distribution, vol. 9, no. 15, pp. 2144-2151, 19 November 2015. doi: 10.1049/iet-gtd.2015.0070
Abstract: Static compensators (STATCOMs) are able to provide rapid and dynamic reactive power support within a power system for voltage stability enhancement. While most previous research focuses on either a static or a dynamic (short-term) voltage stability criterion alone, this study proposes a multi-objective programming (MOP) model to simultaneously minimise (i) investment cost, (ii) unacceptable transient voltage performance, and (iii) proximity to steady-state voltage collapse. The model aims to find Pareto optimal solutions for flexible and multi-objective decision-making. To account for multiple contingencies and their probabilities, corresponding risk-based metrics are proposed based on respective voltage stability measures. Given the two different voltage stability criteria, a strategy based on the Pareto frontier is designed to identify critical contingencies and candidate buses for STATCOM connection. Finally, to solve the MOP model, an improved decomposition-based multi-objective evolutionary algorithm is developed. The proposed model and algorithm are demonstrated on the New England 39-bus test system, and compared with state-of-the-art solution algorithms.
Keywords: Pareto optimisation; cost reduction; evolutionary computation; power system dynamic stability; power system economics; power system reliability; power system security; power system transient stability; probability; risk management; stability criteria; static VAr compensators; voltage regulators; MOP model; New England 39-bus test system; Pareto optimal solutions; decomposition-based multiobjective evolutionary algorithm; dynamic reactive power support; investment cost minimisation; multiobjective decision-making; multiobjective programming model; multiobjective voltage stability enhancement; multiple contingencies; optimal static compensators placement; power system; proximity minimisation; risk-based metrics; steady-state voltage collapse; unacceptable transient voltage performance minimisation; voltage stability criteria; voltage stability measures (ID#: 16-9704)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328457&isnumber=7328433
M. M. E. A. Mahmoud, J. Mišić, K. Akkaya and X. Shen, “Investigating Public-Key Certificate Revocation in Smart Grid,” in IEEE Internet of Things Journal, vol. 2, no. 6, pp. 490-503, Dec. 2015. doi: 10.1109/JIOT.2015.2408597
Abstract: The public key cryptography (PKC) is essential for securing many applications in smart grid. For the secure use of the PKC, certificate revocation schemes tailored to smart grid applications should be adopted. However, little work has been done to study certificate revocation in smart grid. In this paper, we first explain different motivations that necessitate revoking certificates in smart grid. We also identify the applications that can be secured by PKC and thus need certificate revocation. Then, we explain existing certificate revocation schemes and define several metrics to assess them. Based on this assessment, we identify the applications that are proper for each scheme and discuss how the schemes can be modified to fully satisfy the requirements of their potential applications. Finally, we study certificate revocation in pseudonymous public key infrastructure (PPKI), where a large number of certified public/private keys are assigned for each node to preserve privacy. We target vehicles-to-grid communications as a potential application. Certificate revocation in this application is a challenge because of the large number of certificates. We discuss an efficient certificate revocation scheme for PPKI, named compressed certificate revocation lists (CRLs). Our analytical results demonstrate that one revocation scheme cannot satisfy the overhead/security requirements of all smart grid applications. Rather, different schemes should be employed for different applications. Moreover, we used simulations to measure the overhead of the schemes.
Keywords: public key cryptography; smart power grids; PKC; PPKI; pseudonymous public key infrastructure; public key cryptography; public-key certificate revocation; public-private keys; several metrics; smart grid; smart grid applications; vehicles-to-grid communications; Electricity; Measurement; Privacy; Public key; Smart grids; Substations; Certificate revocation schemes; public key cryptography (PKC); smart grid communication security (ID#: 16-9705)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054434&isnumber=7331244
S. Houshmand, S. Aggarwal and R. Flood, “Next Gen PCFG Password Cracking,” in IEEE Transactions on Information Forensics and Security, vol. 10, no. 8, pp. 1776-1791, Aug. 2015. doi: 10.1109/TIFS.2015.2428671
Abstract: Passwords continue to remain an important authentication technique. The probabilistic context-free grammar-based password cracking system of Weir et al. was an important addition to dictionary-based password cracking approaches. In this paper, we show how to substantially improve upon this system by systematically adding keyboard patterns and multiword patterns (two or more words in the alphabetic part of a password) to the context-free grammars used in the probabilistic password cracking. Our results on cracking multiple data sets show that by learning these new classes of patterns, we can achieve up to 22% improvement over the original system. In this paper, we also define metrics to help analyze and improve attack dictionaries. Using our approach to improving the dictionary, we achieve an additional improvement of ~33% by increasing the coverage of a standard attack dictionary. Combining both approaches, we can achieve a 55% improvement over the previous system. Our tests were done over fairly long password guessing sessions (up to 85 billion) and thus show the uniform effectiveness of our techniques for long cracking sessions.
Keywords: context-free grammars; security of data; authentication technique; dictionary based password cracking approaches; keyboard patterns; multiword patterns; next Gen PCFG password cracking; password cracking system; probabilistic context-free grammar; probabilistic password cracking; Dictionaries; Grammar; Keyboards; Probabilistic logic; Shape; Smoothing methods; Training; Authentication; Dictionaries; Keyboard patterns; Multiwords; Password cracking; Probabilistic grammars; authentication; dictionaries; multiwords; password cracking; probabilistic grammars (ID#: 16-9706)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098389&isnumber=7127092
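The Weir-style PCFG approach referenced in the abstract expands probabilistic base structures (e.g., a letter segment followed by a digit segment) into concrete guesses, emitted in decreasing probability order. The toy grammar below is purely illustrative, with made-up probabilities rather than anything trained on real password data:

```python
from itertools import product

# Toy probabilistic context-free grammar: each base structure and each
# terminal carries a probability; all values here are illustrative.
base_structures = {("L", "D"): 0.6, ("D", "L"): 0.4}
terminals = {
    "L": {"password": 0.5, "monkey": 0.3, "dragon": 0.2},
    "D": {"123": 0.7, "1": 0.3},
}

def guesses(base_structures, terminals):
    """Return (guess, probability) pairs, highest probability first."""
    out = []
    for structure, p_s in base_structures.items():
        pools = [list(terminals[sym].items()) for sym in structure]
        for combo in product(*pools):
            guess = "".join(tok for tok, _ in combo)
            p = p_s
            for _, p_t in combo:
                p *= p_t  # probability of structure times its terminals
            out.append((guess, p))
    return sorted(out, key=lambda gp: -gp[1])

ranked = guesses(base_structures, terminals)
# Highest-probability guess is "password123" (0.6 * 0.5 * 0.7 = 0.21).
```

A real cracker learns the grammar from leaked password corpora and enumerates guesses lazily rather than materializing the whole list, but the ordering principle is the same.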
H. Alves, C. H. M. de Lima, P. H. J. Nardelli, R. D. Souza and M. Latva-aho, “On the Secrecy of Interference-Limited Networks Under Composite Fading Channels,” in IEEE Signal Processing Letters, vol. 22, no. 9, pp. 1306-1310, Sept. 2015. doi: 10.1109/LSP.2015.2398514
Abstract: This letter deals with the secrecy capacity of the radio channel in the interference-limited regime. We assume that interferers are uniformly scattered over the network area according to a Poisson point process and the channel model consists of path-loss, log-normal shadowing, and Nakagami-m fading. Both the probability of non-zero secrecy capacity and the secrecy outage probability are then derived in closed-form expressions using tools of stochastic geometry and higher-order statistics. Our numerical results show how the secrecy metrics are affected by the disposition of the desired receiver, the eavesdropper, and the legitimate transmitter.
Keywords: Nakagami channels; fading channels; log normal distribution; radio receivers; radio transmitters; stochastic processes; telecommunication security; Nakagami-m fading; channel model; composite fading channels; eavesdropper; higher-order statistics; interference-limited networks secrecy; interference-limited regime; nonzero secrecy capacity; path-loss log-normal shadowing; point Poisson process; radio channel; receiver; secrecy capacity; secrecy outage probability; stochastic geometry; transmitter; Capacity planning; Fading; Receivers; Security; Transmitters; Wireless networks; Composite channel (ID#: 16-9707)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027802&isnumber=7017605
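The two secrecy metrics named in the abstract build on the standard wiretap-channel definitions. As a reminder (these are the textbook definitions, not the letter's derived closed-form expressions), with instantaneous SINRs $\gamma_M$ at the legitimate receiver and $\gamma_E$ at the eavesdropper:

```latex
% Secrecy capacity of the wiretap channel
C_s = \left[\log_2(1+\gamma_M) - \log_2(1+\gamma_E)\right]^{+}

% Probability of non-zero secrecy capacity
P(C_s > 0) = P(\gamma_M > \gamma_E)

% Secrecy outage probability at target secrecy rate R_s
P_{\mathrm{out}}(R_s) = P(C_s < R_s)
```

The letter's contribution is evaluating these probabilities when $\gamma_M$ and $\gamma_E$ are shaped by path loss, log-normal shadowing, Nakagami-m fading, and Poisson-distributed interferers.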
T. Anwar and M. Abulaish, “Ranking Radically Influential Web Forum Users,” in IEEE Transactions on Information Forensics and Security, vol. 10, no. 6, pp. 1289-1298, June 2015. doi: 10.1109/TIFS.2015.2407313
Abstract: The growing popularity of online social media is leading to its widespread use among the online community for various purposes. In the recent past, it has been found that the web is also being used as a tool by radical or extremist groups and users to practice several kinds of mischievous acts with concealed agendas and promote ideologies in a sophisticated manner. Some of the web forums are predominantly being used for open discussions on critical issues influenced by radical thoughts. The influential users dominate and influence the newly joined innocent users through their radical thoughts. This paper presents an application of collocation theory to identify radically influential users in web forums. The radicalness of a user is captured by a measure based on the degree of match of the commented posts with a threat list. Eleven different collocation metrics are formulated to identify the association among users, and they are finally embedded in a customized PageRank algorithm to generate a ranked list of radically influential users. The experiments are conducted on a standard data set provided for a challenge at the ISI-KDD'12 workshop to find radical and infectious threads, members, postings, ideas, and ideologies. Experimental results show that our proposed method outperforms the existing UserRank algorithm. We also found that the collocation theory is more effective for such a ranking problem than the textual and temporal similarity-based measures studied earlier.
Keywords: Internet; data mining; social networking (online); ISI-KDD'12 workshop; UserRank algorithm; collocation metrics; collocation theory; customized PageRank algorithm; infectious threads; online community; online social media; radical threads; radically influential Web forum user ranking; temporal similarity-based measures; textual similarity-based measures; user radicalness; Blogs; Communities; Equations; Mathematical model; Measurement; Media; Message systems; Radical user identification; Security informatics; Social media analysis; Users collocation analysis; radical user identification; security informatics; users collocation analysis (ID#: 16-9708)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050292&isnumber=7084215
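The ranking step described in the abstract embeds collocation scores into a PageRank-style iteration. The sketch below shows only plain PageRank by power iteration over a small hypothetical user graph; the paper's customized variant additionally weights edges by its eleven collocation metrics, which uniform weights stand in for here:

```python
def pagerank(links, d=0.85, iters=100):
    """PageRank by power iteration over a directed graph given as
    {node: [nodes it points to]}. Uniform edge weights; the paper's
    variant would weight edges by user-collocation scores."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}  # teleportation term
        for u, outs in links.items():
            if outs:
                share = rank[u] / len(outs)
                for v in outs:
                    new[v] += d * share
            else:  # dangling node: spread its rank evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

# Hypothetical forum: users B, C, D are all associated with (point to) A.
r = pagerank({"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A"]})
# "A" accumulates the most association mass and ranks highest.
```

A ranked list of influential users then falls out of sorting `r` by score.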
F. Zhou, W. Huang, Y. Zhao, Y. Shi, X. Liang and X. Fan, “ENTVis: A Visual Analytic Tool for Entropy-Based Network Traffic Anomaly Detection,” in IEEE Computer Graphics and Applications, vol. 35, no. 6, pp. 42-50, Nov.-Dec. 2015. doi: 10.1109/MCG.2015.97
Abstract: Entropy-based traffic metrics have received substantial attention in network traffic anomaly detection because entropy can provide fine-grained metrics of traffic distribution characteristics. However, some practical issues--such as ambiguity, lack of detailed distribution information, and a large number of false positives--affect the application of entropy-based traffic anomaly detection. In this work, we introduce a visual analytic tool called ENTVis to help users understand entropy-based traffic metrics and achieve accurate traffic anomaly detection. ENTVis provides three coordinated views and rich interactions to support a coherent visual analysis on multiple perspectives: the timeline group view for perceiving situations and finding hints of anomalies, the Radviz view for clustering similar anomalies in a period, and the matrix view for understanding traffic distributions and diagnosing anomalies in detail. Several case studies have been performed to verify the usability and effectiveness of our method. A further evaluation was conducted via expert review.
Keywords: data visualisation; entropy; pattern clustering; security of data; ENTVis; anomaly clustering; entropy-based network traffic anomaly detection; traffic distribution characteristic; visual analytic tool; Data visualization; Entropy; Human computer interaction; IP networks; Ports (Computers); Telecommunication traffic; Visual analytics; anomaly detection; computer graphics; cybersecurity; entropy; visual analytics (ID#: 16-9709)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7274260&isnumber=7331155
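The entropy-based traffic metrics that ENTVis visualizes are typically normalized Shannon entropies of per-time-bin feature distributions (source/destination addresses and ports). A minimal sketch of such a metric, with illustrative example data:

```python
from collections import Counter
from math import log2

def normalized_entropy(values):
    """Normalized Shannon entropy of a traffic feature distribution,
    e.g. destination ports observed in one time bin. Values near 1 mean
    the feature is dispersed, near 0 concentrated; abrupt shifts hint at
    anomalies such as scans (dispersed ports) or DDoS (one destination)."""
    counts = Counter(values)
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h / log2(len(counts)) if len(counts) > 1 else 0.0

# A port scan touches many distinct ports: entropy approaches 1.
scan = normalized_entropy(range(1000))
# Ordinary web traffic concentrates on a few ports: entropy stays low.
normal = normalized_entropy([80] * 90 + [443] * 10)
```

The "ambiguity" issue the abstract mentions arises because a single entropy number summarizes away the distribution's shape, which is exactly what ENTVis's coordinated views restore.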
S. Yu, S. Guo and I. Stojmenovic, “Fool Me If You Can: Mimicking Attacks and Anti-Attacks in Cyberspace,” in IEEE Transactions on Computers, vol. 64, no. 1, pp. 139-151, Jan. 2015. doi: 10.1109/TC.2013.191
Abstract: Botnets have become major engines for malicious activities in cyberspace nowadays. To sustain their botnets and disguise their malicious actions, botnet owners are mimicking legitimate cyber behavior to fly under the radar. This poses a critical challenge in anomaly detection. In this paper, we use web browsing on popular web sites as an example to tackle this problem. First of all, we establish a semi-Markov model for browsing behavior. Based on this model, we find that it is impossible to detect mimicking attacks based on statistics if the number of active bots of the attacking botnet is sufficiently large (no less than the number of active legitimate users). However, we also find it is hard for botnet owners to satisfy the condition to carry out a mimicking attack most of the time. With this new finding, we conclude that mimicking attacks can be discriminated from genuine flash crowds using second order statistical metrics. We define a new fine correntropy metric and show its effectiveness compared to others. Our real world data set experiments and simulations confirm our theoretical claims. Furthermore, the findings can be widely applied to similar situations in other research fields.
Keywords: Markov processes; Web sites; computer network security; entropy; invasive software; online front-ends; statistical analysis; Web browsing behavior; active bots; active legitimate users; anomaly detection; antiattack mimicking; attack mimicking; attacking botnet; botnet owners; correntropy metrics; cyberspace; flash crowds; legitimate cyber behavior mimicking; malicious actions; malicious activities; second-order statistical metrics; semiMarkov model; Ash; IP networks; Internet; Mathematical model; Measurement; Web pages; Mimicking; detection; flash crowd attack; second order metrics (ID#: 16-9710)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6601602&isnumber=6980145
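Correntropy, the second-order statistic the abstract relies on, is commonly estimated as the sample mean of a Gaussian kernel applied to the difference of two series. A minimal sketch (the kernel bandwidth and the example series are illustrative assumptions, not the paper's settings):

```python
from math import exp, pi, sqrt

def correntropy(x, y, sigma=1.0):
    """Sample estimate of correntropy V(X, Y) = E[G_sigma(X - Y)] with a
    Gaussian kernel. Similar series score high; dissimilar ones low,
    which is the property used to separate mimicking botnet traffic
    from genuine flash crowds. sigma is an assumed kernel bandwidth."""
    def g(d):
        return exp(-d * d / (2 * sigma * sigma)) / (sqrt(2 * pi) * sigma)
    return sum(g(a - b) for a, b in zip(x, y)) / len(x)

# Identical series maximize the estimate; a distant series scores near 0.
same = correntropy([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
diff = correntropy([1.0, 2.0, 3.0], [9.0, 9.0, 9.0])
```

In the detection setting, the two series would be the observed request process and a reference model of legitimate browsing behavior.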
J. Luna, N. Suri, M. Iorga and A. Karmel, “Leveraging the Potential of Cloud Security Service-Level Agreements Through Standards,” in IEEE Cloud Computing, vol. 2, no. 3, pp. 32-40, May-June 2015. doi: 10.1109/MCC.2015.52
Abstract: Despite the undisputed advantages of cloud computing, customers, in particular small and medium enterprises (SMEs), still need meaningful understanding of the security and risk-management changes that the cloud entails so they can assess whether this new computing paradigm meets their security requirements. This article presents a fresh view on this problem by surveying and analyzing, from the standardization and risk assessment perspective, the specification of security in cloud service-level agreements (secSLA) as a promising approach to empower customers in assessing and understanding cloud security. Apart from analyzing the proposed risk-based approach and surveying the relevant landscape, this article presents a real-world scenario to support the creation and adoption of secSLAs as enablers for negotiating, assessing, and monitoring the achieved security levels in cloud services.
Keywords: cloud computing; contracts; risk management; security of data; small-to-medium enterprises; software standards; standardisation; SME; secSLA; security in cloud service-level agreement; security requirement; small and medium enterprise; standardization; Cloud computing; Computer security; Interoperability; Measurement; Monitoring; NIST; SLA; cloud; metrics; security assessment; standards (ID#: 16-9711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158967&isnumber=7158963
G. Han, J. Jiang, L. Shu and M. Guizani, “An Attack-Resistant Trust Model Based on Multidimensional Trust Metrics in Underwater Acoustic Sensor Network,” in IEEE Transactions on Mobile Computing, vol. 14, no. 12, pp. 2447-2459, Dec. 1 2015. doi: 10.1109/TMC.2015.2402120
Abstract: Underwater acoustic sensor networks (UASNs) have been widely used in many applications where a variable number of sensor nodes collaborate with each other to perform monitoring tasks. A trust model plays an important role in realizing collaborations of sensor nodes. Although many trust models have been proposed for terrestrial wireless sensor networks (TWSNs) in recent years, it is not feasible to directly use these trust models in UASNs due to unreliable underwater communication channel and mobile network environment. To achieve accurate and energy efficient trust evaluation in UASNs, an attack-resistant trust model based on multidimensional trust metrics (ARTMM) is proposed in this paper. The ARTMM mainly consists of three types of trust metrics, which are link trust, data trust, and node trust. During the process of trust calculation, unreliability of communication channel and mobility of underwater environment are carefully analyzed. Simulation results demonstrate that the proposed trust model is quite suitable for mobile underwater environment. In addition, the performance of the ARTMM is clearly better than that of conventional trust models in terms of both evaluation accuracy and energy consumption.
Keywords: mobility management (mobile radio); telecommunication network reliability; telecommunication security; underwater acoustic communication; wireless channels; wireless sensor networks; ARTMM; TWSN; UASN; attack-resistant trust model; data trust; energy efficient trust evaluation; link trust; mobile network environment; monitoring tasks; multidimensional trust metrics; node trust; sensor nodes; terrestrial wireless sensor networks; trust calculation process; underwater acoustic sensor network; unreliable underwater communication channel; Bit error rate; Computational modeling; Mathematical model; Packet loss; Predictive models; Multidimensional trust Metrics; Trust Model; Underwater Acoustic Sensor Networks; Underwater acoustic sensor networks; trust model (ID#: 16-9712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7038144&isnumber=7314996
M. Dark and J. Mirkovic, “Evaluation Theory and Practice Applied to Cybersecurity Education,” in IEEE Security & Privacy,
vol. 13, no. 2, pp. 75-80, Mar.-Apr. 2015. doi: 10.1109/MSP.2015.27
Abstract: As more institutions, organizations, schools, and programs launch cybersecurity education programs in an attempt to meet needs that are emerging in a rapidly changing environment, evaluation will be important to ensure that programs are having the desired impact.
Keywords: educational institutions; security of data; cybersecurity education programs; cybersecurity environment; evaluation theory; schools; Computer security; Design methodology; Game theory; Performance evaluation; Program logic; Reliability; cybersecurity; evaluation design; formative evaluation; measurement; metrics; program logic; reliability; summative evaluation; validity (ID#: 16-9713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085972&isnumber=7085640
![]() |
Location Privacy in Wireless Networks 2015 |
Privacy services on mobile devices are a major issue in cybersecurity. For the Science of Security community, the problem relates to resiliency, metrics, human behavior, and compositionality. The work cited here was presented in 2015.
S. Farhang, Y. Hayel and Q. Zhu, “PHY-Layer Location Privacy-Preserving Access Point Selection Mechanism in Next-Generation Wireless Networks,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015,
pp. 263-271. doi: 10.1109/CNS.2015.7346836
Abstract: The deployment of small-cell base stations in 5G wireless networks is an emerging technology to meet the increasing demand for high data rates from a growing number of heterogeneous devices. The standard algorithms designed for physical layer communications exhibit security and privacy vulnerabilities. As a 5G network consists of increasingly small cells to improve throughput, knowledge of which cell a mobile user communicates with can easily reveal valuable information about the user's location. This paper investigates the location privacy of access point selection algorithms in 5G mobile networks, and we show that the stable matching of mobile users to access points at the physical layer reveals information about users' locations and preferences. Location privacy has traditionally been treated at the application or network layer; an investigation at the physical layer has been missing. In this work, we first establish a matching game model to capture the preferences of mobile users and base stations using physical layer system parameters, and then investigate the location privacy of the associated Gale-Shapley algorithm. We develop a differential privacy framework for physical layer location privacy issues, and design decentralized differentially private algorithms to guarantee privacy to a large number of users in the heterogeneous 5G network. Numerical experiments and case studies corroborate the results.
Keywords: 5G mobile communication; cellular radio; game theory; mobility management (mobile radio); next generation networks; 5G wireless networks; PHY-layer location privacy; access point selection algorithms; access point selection mechanism; application layer; associated Gale-Shapley algorithm; decentralized differential private algorithms; heterogeneous 5G network; heterogeneous devices; matching game model; mobile user; network layer; next-generation wireless networks; physical layer communications; small-cell base stations; Algorithm design and analysis; Bismuth; Physical layer; Privacy; Wireless networks (ID#: 16-11113)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346836&isnumber=7346791
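The stable matching whose leakage this paper analyzes is the classic Gale-Shapley deferred-acceptance algorithm. As a rough illustration of the mechanism (the toy preference lists, function name, and one-to-one matching below are illustrative assumptions, not the paper's model), a minimal sketch in Python:

```python
def gale_shapley(user_prefs, ap_prefs):
    """Deferred acceptance: users propose to access points (APs) in
    preference order; each AP keeps its most-preferred proposer so far."""
    # rank[ap][user]: position of user in that AP's list (lower is better)
    rank = {ap: {u: i for i, u in enumerate(p)} for ap, p in ap_prefs.items()}
    free = list(user_prefs)                    # users not yet matched
    nxt = {u: 0 for u in user_prefs}           # next AP each user proposes to
    match = {}                                 # ap -> user
    while free:
        u = free.pop()
        ap = user_prefs[u][nxt[u]]
        nxt[u] += 1
        if ap not in match:
            match[ap] = u                      # AP was unmatched: accept
        elif rank[ap][u] < rank[ap][match[ap]]:
            free.append(match[ap])             # AP trades up; old user freed
            match[ap] = u
        else:
            free.append(u)                     # AP rejects the proposal
    return {user: ap for ap, user in match.items()}
```

The resulting user-to-AP assignment is stable, and, as the abstract notes, observing which small-cell AP a user ends up matched to is precisely what can reveal that user's location.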
J. R. Ward and M. Younis, “Base Station Anonymity Distributed Self-Assessment in Wireless Sensor Networks,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 103-108. doi: 10.1109/ISI.2015.7165947
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, the sensors act as data sources and forward information generated by event triggers to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN with the least amount of effort. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to identify the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Previous work has proposed anonymity-boosting techniques to improve the BS's anonymity posture, but all require some amount of overhead such as increased energy consumption, increased latency, or decreased throughput. If the BS understood its own anonymity posture, then it could evaluate whether the benefits of employing an anti-traffic analysis technique are worth the associated overhead. In this paper we propose two distributed approaches to allow a BS to assess its own anonymity and correspondingly employ anonymity-boosting techniques only when needed. Our approaches allow a WSN to increase its anonymity on demand, based on real-time measurements, and therefore conserve resources. The simulation results confirm the effectiveness of our approaches.
Keywords: security of data; wireless sensor networks; WSN; anonymity-boosting techniques; anti-traffic analysis technique; base station; base station anonymity distributed self-assessment; conventional security mechanisms; improved BS anonymity; Current measurement; Energy consumption; Entropy; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-11114)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165947&isnumber=7165923
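A common way to quantify the anonymity posture the BS is meant to self-assess is the normalized Shannon entropy of the adversary's belief over candidate BS nodes. The sketch below uses that generic metric as an assumption; it is not necessarily the paper's evidence-theory-based measure:

```python
import math

def anonymity_entropy(belief):
    """Normalized entropy of an adversary's belief distribution over which
    node is the base station: 1.0 = fully anonymous, 0.0 = BS exposed."""
    n = len(belief)
    if n <= 1:
        return 0.0
    h = -sum(p * math.log2(p) for p in belief if p > 0)
    return h / math.log2(n)   # divide by max entropy to normalize to [0, 1]
```

A BS could trigger an anonymity-boosting technique only when this score drops below a threshold, which is the on-demand behavior the paper argues conserves resources.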
M. Dong, K. Ota and A. Liu, “Preserving Source-Location Privacy Through Redundant Fog Loop for Wireless Sensor Networks,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015,
pp. 1835-1842. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.274
Abstract: A redundant fog loop-based scheme is proposed to preserve source node-location privacy and achieve energy efficiency through two important mechanisms in wireless sensor networks (WSNs). The first mechanism creates fogs with loop paths. The second creates fogs in the real source node region as well as many interference fogs in other regions of the network. In addition, the fogs change dynamically, and the communication among fogs also forms a loop path. The simulation results show that for medium-scale networks, our scheme can improve privacy security 8-fold compared to the phantom routing scheme, while energy efficiency is improved 4-fold.
Keywords: data privacy; energy conservation; telecommunication power management; telecommunication security; wireless sensor networks; energy efficiency; medium-scale network; privacy security improvement; redundant fog loop-based scheme; source-location privacy preservation; wireless sensor network; Energy consumption; Phantoms; Position measurement; Privacy; Protocols; Routing; Wireless sensor networks; performance optimization; redundant fog loop; source-location privacy (ID#: 16-11115)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363320&isnumber=7362962
L. Lightfoot and J. Ren, “R-STaR Destination-Location Privacy Schemes in Wireless Sensor Networks,” Communications (ICC), 2015 IEEE International Conference on, London, 2015, pp. 7335-7340. doi: 10.1109/ICC.2015.7249498
Abstract: Wireless sensor networks (WSNs) can provide the world with a technology for real-time event monitoring for both military and civilian applications. One of the primary concerns that hinder the successful deployment of wireless sensor networks is providing adequate location privacy. Many protocols have been proposed to provide location privacy, but most are based on public-key cryptosystems, while others are either energy inefficient or have certain security flaws. In this paper, after analyzing the security weaknesses of existing schemes, we propose an architecture that addresses the security flaws of destination location privacy in WSNs based on an energy-aware two-phase routing protocol. We call this scheme the R-STaR routing protocol. In the first routing phase of R-STaR routing, the source node transmits the message to a randomly selected intermediate node located in a pre-determined region surrounding the source node, which we call the R-STaR area. In the second routing phase, the message is routed to the destination node using a shortest-path mix with fake message injections. We show that R-STaR routing provides an exceptional balance between security and energy consumption in comparison to existing well-known schemes.
Keywords: electronic messaging; public key cryptography; routing protocols; telecommunication power management; wireless sensor networks; R-STaR destination-location privacy scheme; R-STaR routing protocol; WSNs; civilian applications; energy consumption; fake message injection; message transmission; military applications; public key cryptosystem; real-time event monitoring; security flaw; shortest path mix; wireless sensor network; Monitoring; Privacy; Routing; Routing protocols; Security; Trajectory; Wireless sensor networks (ID#: 16-11116)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249498&isnumber=7248285
Y. Bangash, L. Zeng and D. Feng, “MimiBS: Mimicking Base-Station to Provide Location Privacy Protection in Wireless Sensor Networks,” Networking, Architecture and Storage (NAS), 2015 IEEE International Conference on, Boston, MA, 2015,
pp. 158-166. doi: 10.1109/NAS.2015.7255210
Abstract: Base station (BS) location privacy has been widely studied and researched in different applications such as field monitoring, agriculture, industry and the military. The purpose is to hide the location of the BS, in any form, from outside or inside attackers. Hundreds of thousands of sensor nodes deployed over an area bring many new challenges regarding routing, forwarding, scaling and security. Different approaches exist to provide location privacy in wireless sensor networks (WSNs); prior work provides some form of BS location privacy that either differs from or strengthens earlier schemes. We propose a new algorithm, MimiBS (“Mimicking Base-Station”). All the deployed aggregator nodes (ANs) in the field will look like the BS; even if the attacker discovers an AN, he cannot distinguish the real BS from an AN. We evaluate the proposed algorithm under different schemes: routing with and without fake packets, and with and without energy considerations. These different parameters show an improvement over previous work on the same problem.
Keywords: wireless sensor networks; MimiBS; WSN; aggregator nodes; agriculture; base station location privacy; field monitoring; industry; location privacy protection; military; mimicking base-station; outside/inside attacker; sensor nodes; Algorithm design and analysis; Base stations; Energy states; Privacy; Routing; Security; Wireless sensor networks; Base station; aggregator node; location privacy; wireless sensor network (ID#: 16-11117)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7255210&isnumber=7255186
S. Alsemairi and M. Younis, “Adaptive Packet-Combining to Counter Traffic Analysis in Wireless Sensor Networks,” Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, Dubrovnik, 2015, pp. 337-342. doi: 10.1109/IWCMC.2015.7289106
Abstract: Wireless Sensor Networks (WSNs) have become an attractive choice for many applications that operate in hostile settings. The operation model of a WSN makes it possible for an adversary to determine the location of the base-station (BS) in the network by intercepting transmissions and employing traffic analysis techniques such as Evidence Theory. By locating the BS, the adversary can then target it with denial-of-service attacks. This paper promotes a novel strategy for countering such an attack by adaptively combining packet payloads. The idea is to trade off packet delivery latency for BS location anonymity. Basically, a node on a data route will delay the forwarding of a packet until one or more additional packets arrive, and the payloads are then combined into a single packet. Such an approach decreases the amount of evidence that an adversary can collect and makes the traffic analysis inconclusive in implicating the BS position. Given the data delivery delay that will be imposed, the proposed technique is to be applied adaptively, when the BS anonymity needs a boost. The simulation results confirm the effectiveness of the proposed technique.
Keywords: packet radio networks; telecommunication security; telecommunication traffic; wireless sensor networks; BS location anonymity; WSN; adaptive packet-combining; counter traffic analysis; data delivery delay; denial-of-service attacks; evidence theory; packet delivery latency; Cryptography; Delays; Payloads; Routing; Topology; Wireless sensor networks; Anonymity; Location Privacy; Security; Traffic Analysis; Wireless Sensor Network (ID#: 16-11118)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289106&isnumber=7288920
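The delay-and-combine idea can be sketched as a relay that buffers payloads until either enough accumulate or a deadline expires; the combine_count and max_delay parameters below are hypothetical stand-ins for whatever adaptive policy the paper uses:

```python
class CombiningRelay:
    """Buffers payloads and forwards them as one combined packet, trading
    delivery latency for fewer transmissions an adversary can observe."""

    def __init__(self, combine_count=2, max_delay=3):
        self.combine_count = combine_count  # payloads per combined packet
        self.max_delay = max_delay          # max ticks a payload may wait
        self.buffer = []                    # list of (arrival_tick, payload)
        self.sent = []                      # combined packets transmitted

    def receive(self, tick, payload):
        self.buffer.append((tick, payload))
        self._maybe_flush(tick)

    def _maybe_flush(self, tick):
        oldest = self.buffer[0][0]
        if len(self.buffer) >= self.combine_count or tick - oldest >= self.max_delay:
            self.sent.append([p for _, p in self.buffer])
            self.buffer.clear()
```

Each combined transmission replaces several individual ones, which is what reduces the evidence available to a traffic analyst.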
J. R. Ward and M. Younis, “A Cross-Layer Defense Scheme for Countering Traffic Analysis Attacks in Wireless Sensor Networks,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 972-977. doi: 10.1109/MILCOM.2015.7357571
Abstract: In most Wireless Sensor Network (WSN) applications the sensors forward their readings to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary's attack. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to locate the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Published anonymity-boosting techniques mainly focus on a single layer of the communication protocol stack and assume that changes in the protocol operation will not be detectable. In fact, existing single-layer techniques may not be able to protect the network if the adversary could guess what anonymity measure is being applied by identifying which layer is being exploited. In this paper we propose combining physical-layer and network-layer techniques to boost the network resilience to anonymity attacks. Our cross-layer approach avoids the shortcomings of the individual single-layer schemes and allows a WSN to effectively mask its behavior and simultaneously misdirect the adversary's attention away from the BS's location. We confirm the effectiveness of our cross-layer anti-traffic analysis measure using simulation.
Keywords: cryptographic protocols; telecommunication security; telecommunication traffic; wireless sensor networks; WSN; anonymity-boosting techniques; authentication; base station; central sink; communication protocol; cross-layer defense scheme; encryption; network-layer techniques; physical-layer techniques; single-layer techniques; traffic analysis attacks; traffic analysis techniques; Array signal processing; Computer security; Measurement; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-11119)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357571&isnumber=7357245
M. Bradbury, M. Leeke and A. Jhumka, “A Dynamic Fake Source Algorithm for Source Location Privacy in Wireless Sensor Networks,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 531-538. doi: 10.1109/Trustcom.2015.416
Abstract: Wireless sensor networks (WSNs) are commonly used in asset monitoring applications, where it is often desirable for the location of the asset being monitored to be kept private. The source location privacy (SLP) problem involves protecting the location of a WSN source node from an attacker who is attempting to locate it. Among the most promising approaches to the SLP problem is the use of fake sources, with much existing research demonstrating their efficacy. Despite the effectiveness of the approach, the most effective algorithms providing SLP require network and situational knowledge that makes their deployment impractical in many contexts. In this paper, we develop a novel dynamic fake sources-based algorithm for SLP. We show that the algorithm provides state-of-the-art levels of location privacy under practical operational assumptions.
Keywords: data privacy; telecommunication security; wireless sensor networks; SLP problem; WSN source node; asset monitoring applications; dynamic fake source algorithm; location protection; source location privacy problem; Context; Heuristic algorithms; Monitoring; Position measurement; Privacy; Temperature sensors; Wireless sensor networks; Dynamic; Sensor Networks; Source Location Privacy (ID#: 16-11120)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345324&isnumber=7345233
T. A. Sarjerao and A. Trivedi, “Physical Layer Secrecy Solution for Passive Wireless Sensor Networks,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-6. doi: 10.1109/CCCS.2015.7374120
Abstract: The backscatter communication system has tremendous potential in commercial applications, yet very little work has been done to study its benefits. Backscatter communication is the backbone of many low-cost, low-power distributed wireless systems. Data transmission between nodes in a wireless communication system always carries the risk of third-party interception, which leads to privacy and security breaches. In this paper, the physical layer security of a backscatter wireless system is studied for the multiple-eavesdropper, single-tag, single-reader case. Unique characteristics of the channel are used to secure signal transmission. In order to degrade the eavesdropper's reception of the signal, a noise injection scheme is proposed. The advantages of this approach are discussed for various cases while evaluating the impact of key factors, such as antenna gain and the eavesdropper's location, on the secrecy of the transmission. Analytical results indicate that, if properly employed, the noise injection scheme improves the performance of the backscatter wireless system.
Keywords: telecommunication security; wireless sensor networks; antenna gain; backscatter communication system; backscatter wireless system; distributed wireless systems; noise injection; passive wireless sensor networks; physical layer secrecy solution; physical layer security; wireless communication system; Backscatter; Physical layer; Receivers; Security; Signal to noise ratio; Wireless communication; Wireless sensor networks; Backscatter communication system; artificial noise injection; physical layer secrecy (ID#: 16-11121)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374120&isnumber=7374113
C. Gu, M. Bradbury, A. Jhumka and M. Leeke, “Assessing the Performance of Phantom Routing on Source Location Privacy in Wireless Sensor Networks,” Dependable Computing (PRDC), 2015 IEEE 21st Pacific Rim International Symposium on, Zhangjiajie, 2015, pp. 99-108. doi: 10.1109/PRDC.2015.9
Abstract: As wireless sensor networks (WSNs) have been applied across a spectrum of application domains, the problem of source location privacy (SLP) has emerged as a significant issue, particularly in safety-critical situations. In seminal work on SLP, phantom routing was proposed as an approach to addressing the issue. However, results presented in support of phantom routing have not included considerations for practical network configurations, omitting simulations and analyses with larger network sizes. This paper addresses this shortcoming by conducting an in-depth investigation of phantom routing under various network configurations. The results presented demonstrate that previous work in phantom routing does not generalise well to different network configurations. Specifically, under certain configurations, it is shown that the afforded SLP is reduced by a factor of up to 75.
Keywords: telecommunication network routing; wireless sensor networks; phantom routing; source location privacy; Context; Monitoring; Phantoms; Position measurement; Privacy; Routing; Wireless sensor networks; Multiple Sources; Phantom Routing; Sensor networks; Source Location Privacy (ID#: 16-11122)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371853&isnumber=7371833
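The baseline behavior being benchmarked, a random walk followed by routing to the sink, can be sketched on a 2-D grid; the grid topology, greedy second phase, and fixed seed below are simplifying assumptions, not the configurations evaluated in the paper:

```python
import random

def phantom_route(source, sink, walk_hops, seed=1):
    """Phantom routing sketch: phase 1 is a random walk of walk_hops steps
    to a phantom node; phase 2 routes greedily from there to the sink."""
    rng = random.Random(seed)
    path = [source]
    x, y = source
    for _ in range(walk_hops):                       # phase 1: random walk
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    while (x, y) != sink:                            # phase 2: greedy to sink
        if x != sink[0]:
            x += 1 if sink[0] > x else -1
        else:
            y += 1 if sink[1] > y else -1
        path.append((x, y))
    return path
```

Because phase 2 always starts from the phantom node, an attacker back-tracing traffic hop by hop converges on the phantom rather than the source; the paper's finding is that how much privacy this actually buys varies sharply with network configuration.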
X. Wang, L. Dong, C. Xu and P. Li, “Location Privacy Protecting Based on Anonymous Technology in Wireless Sensor Networks,” Parallel Architectures, Algorithms and Programming (PAAP), 2015 Seventh International Symposium on, Nanjing, 2015, pp. 229-235. doi: 10.1109/PAAP.2015.50
Abstract: A wireless sensor network is a type of information-sharing network in which an attacker can monitor the network traffic or trace the transmission of packets to infer the position of a target node, where the target node mainly refers to the source node or the aggregation node. First, we discuss a privacy protection method based on anonymous locations to prevent location privacy problems. Then, we suggest distributing at least n anonymous nodes near the target node and selecting, via the routing protocol, one of the fake nodes to replace the real one in carrying out the data communication. Finally, to improve node security and increase the difficulty of attacker tracking, we build the anonymous group from the routing tree generated by the Collection Tree Protocol (CTP) and verify it by simulation. Experiments show that the proposed treatment significantly increases the attackers' difficulty.
Keywords: data privacy; routing protocols; telecommunication network topology; telecommunication security; telecommunication traffic; trees (mathematics); wireless sensor networks; CTP; aggregation node; anonymous technology; collection tree protocol; information sharing network; location privacy protection method; network traffic; packet transmission; routing protocol; routing tree selection; source node; target node; Base stations; Data privacy; Monitoring; Privacy; Routing; Security; Wireless sensor networks; Collection Tree Protocol; Location Privacy; Wireless Sensor Network (ID#: 16-11123)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387330&isnumber=7387279
N. Pavitha and S. N. Shelke, “Tactics for Providing Location Privacy Against Global Adversaries in Wireless Sensor Networks,” Computer, Communication and Control (IC4), 2015 International Conference on, Indore, 2015, pp. 1-5. doi: 10.1109/IC4.2015.7375686
Abstract: The unprotected surroundings of a sensor network make it relatively easy for an adversary to eavesdrop on the network. Even though there is a wide range of protocols for providing content privacy, the contextual information remains exposed, and an adversary can use it to attack the source node or the sink node. Existing methods for location privacy protect the network only against a local eavesdropper. Against this backdrop, intervallic gathering and source imitation methods are proposed to provide source-node location privacy, and sink imitation and backbone flooding methods are proposed to protect sink location privacy. These proposed methods provide location privacy against global adversaries.
Keywords: telecommunication security; wireless sensor networks; backbone flooding methods; eavesdropper; global adversaries; providing location privacy; sensor network; sink imitation; sink location privacy; sink node; source node; Base stations; Floods; Phantoms; Privacy; Routing; Security; Wireless sensor networks; intervallic gathering; location privacy; sink imitation; source imitation; wireless sensor network (ID#: 16-11124)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375686&isnumber=7374772
A. Abuzneid, T. Sobh and M. Faezipour, “Temporal Privacy Scheme for End-to-End Location Privacy in Wireless Sensor Networks,” Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, Visakhapatnam, 2015, pp. 1-6. doi: 10.1109/EESCO.2015.7253969
Abstract: A wireless sensor network (WSN) is built of hosts called sensors, which can sense a phenomenon such as motion, temperature, or humidity and represent what they sense in data format. Providing an efficient end-to-end privacy solution is a challenging task due to the open nature of the WSN. The key properties needed for end-to-end location privacy are anonymity, observability, capture likelihood and safety period. In addition, attaining temporal privacy is crucial. We extend this work to provide a solution against global adversaries. We present a network model that is protected against passive/active and local/multi-local/global attacks. This work provides a solution for temporal privacy to attain end-to-end anonymity and location privacy.
Keywords: data privacy; telecommunication security; telecommunication traffic; wireless sensor networks; WSN; active attack; anonymity scheme; capture likelihood scheme; data format; end-to-end location privacy; global attack; local attack; multilocal attack; observability scheme; passive attack; safety period scheme; temporal privacy scheme; traffic rate privacy; wireless sensor networks; Correlation; Delays; Monitoring; Privacy; Protocols; Routing; Wireless sensor networks; WSN; sink privacy; source location privacy; temporal privacy; traffic rate privacy (ID#: 16-11125)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253969&isnumber=7253613
N. Baroutis and M. Younis, “Using Fake Sinks and Deceptive Relays to Boost Base-Station Anonymity in Wireless Sensor Network,” Local Computer Networks (LCN), 2015 IEEE 40th Conference on, Clearwater Beach, FL, 2015, pp. 109-116. doi: 10.1109/LCN.2015.7366289
Abstract: In applications of Wireless Sensor Networks (WSNs), the base-station (BS) acts as a sink for all data traffic. The continuous flow of packets toward the BS enables the adversary to analyze the traffic and uncover the BS position. In this paper we present a technique to counter such an attack by morphing the traffic pattern in the WSN. Our approach introduces multiple fake sinks and deceptive relays so that nodes other than the BS are implicated as the destination of all data traffic. Since the problem of optimal fake sink placement is NP-hard, we employ a heuristic to determine the most suitable fake sink count and placement for a network. Dynamic load-balancing trees are formed to identify relay nodes and adapt the topology to route packets to the fake (and real) sinks while extending the network lifetime. The simulation results confirm the effectiveness of the proposed technique.
Keywords: relay networks (telecommunication); telecommunication network routing; telecommunication network topology; telecommunication traffic; trees (mathematics); wireless sensor networks; NP-hard; WSN; base-station anonymity boost; data traffic pattern morphing; deceptive relay; dynamic load-balancing tree; optimal fake sink placement; packet routing; wireless sensor network; Frequency selective surfaces; Network topology; Relays; Routing; Topology; Traffic control; Wireless sensor networks; Traffic analysis; anonymity; location privacy; sensor networks (ID#: 16-11126)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366289&isnumber=7366232
M. Chaudhari and S. Dharawath, “Toward a Statistical Framework for Source Anonymity in Sensor Network Using Quantitative Measures,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-5. doi: 10.1109/ICIIECS.2015.7193169
Abstract: In some sensor network applications, the location and privacy of certain events must remain anonymous and undetected even when the network traffic is analyzed. In this paper, a framework for modeling, investigating and evaluating the sensor network is suggested and results are charted. The suggested two-fold structure introduces the notion of “interval indistinguishability,” which gives a quantitative measure of anonymity in a sensor network, and secondly maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. The system is made energy efficient by enhancing the available techniques for choosing the cluster head, and the energy efficiency of the sensor network is charted.
Keywords: statistical analysis; telecommunication security; telecommunication traffic; wireless sensor networks; binary hypothesis checking; network traffic; nuisance parameters; quantitative evaluation; quantitative measurement; sensor network; source anonymity; statistical framework; statistical problem; wireless sensor network; Conferences; Energy efficiency; Privacy; Protocols; Technological innovation; Wireless sensor networks; Binary Hypothesis; Interval Indistinguishability; Wireless Sensor Network; residual energy (ID#: 16-11127)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193169&isnumber=7192777
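The binary-hypothesis-testing view of source anonymity can be sketched as a log-likelihood-ratio test on observed inter-packet intervals; the exponential traffic models, rates, and function names below are illustrative assumptions rather than the paper's formulation, which also handles nuisance parameters:

```python
import math

def log_likelihood_ratio(intervals, rate_h0, rate_h1):
    """Log-likelihood ratio of exponential inter-packet intervals under
    H0 (dummy traffic only) versus H1 (a real event is being reported)."""
    ll0 = sum(math.log(rate_h0) - rate_h0 * t for t in intervals)
    ll1 = sum(math.log(rate_h1) - rate_h1 * t for t in intervals)
    return ll1 - ll0

def adversary_decides_event(intervals, rate_h0=1.0, rate_h1=2.0, threshold=0.0):
    """An adversary declares an event when the ratio exceeds the threshold;
    the source is indistinguishable when this decision is no better than a
    coin flip regardless of whether an event actually occurred."""
    return log_likelihood_ratio(intervals, rate_h0, rate_h1) > threshold
```

In this framing, a transmission schedule provides anonymity when the interval distributions under the two hypotheses are statistically indistinguishable to such a test.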
P. Kumar, J. P. Singh, P. Vishnoi and M. P. Singh, “Source Location Privacy Using Multiple-Phantom Nodes in WSN,” TENCON 2015 - 2015 IEEE Region 10 Conference, Macao, 2015, pp. 1-6. doi: 10.1109/TENCON.2015.7372969
Abstract: The ever increasing integration of sensor-driven applications into our lives has made sensor privacy an important issue. The location information of sensor nodes has to be hidden from the adversary for the sake of privacy. An adversary may trace traffic and try to figure out the location of the source node. This work attempts to improve source location privacy by using two phantom nodes, random selection of neighbors, and a random walk up to the phantom nodes. Two phantom nodes are selected for each source node in such a way that no two phantom nodes of the same triplet are co-linear with the sink. The proposed protocol can keep the adversary confused within the sensor network, as it generates different paths for different packets from the same source. Here, we distract the adversary by creating alternate paths. This minimizes the hit ratio, thereby maximizing privacy. Analysis shows that this protocol achieves more privacy and a greater safety period than the single-phantom routing protocol. Flooding techniques and dummy packets are not used in the working phase, for the sake of energy efficiency and to avoid network congestion.
Keywords: data privacy; routing protocols; wireless sensor networks; WSN; energy efficiency; locational information; multiple-phantom nodes; network congestion; sensor-driven application; single phantom routing protocol; source location privacy; Analytical models; Communication system security; Floods; Wireless communication; Wireless sensor networks; Context Privacy; Phantom Node; Random Walk; Source Location Privacy; Wireless Sensor Networks (ID#: 16-11128)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372969&isnumber=7372693
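The triplet constraint described in the abstract above (no two phantom nodes of the same source may be collinear with the sink) reduces to a standard collinearity test on node coordinates. The following Python sketch illustrates that check; the function names, sample coordinates, and tolerance are assumptions for illustration, not taken from the paper.

```python
# Sketch: accept a pair of candidate phantom nodes only if they are not
# collinear with the sink (coordinates and tolerance are illustrative).

def collinear(p, q, r, tol=1e-9):
    """Return True if 2-D points p, q, r are (nearly) collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    # Twice the signed area of triangle p-q-r; zero means collinear.
    area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return abs(area2) < tol

def valid_phantom_pair(phantom_a, phantom_b, sink):
    """A pair is valid when the two phantoms and the sink do not lie on one line."""
    return not collinear(phantom_a, phantom_b, sink)

sink = (0.0, 0.0)
print(valid_phantom_pair((3.0, 1.0), (1.0, 4.0), sink))  # True: not collinear
print(valid_phantom_pair((1.0, 2.0), (2.0, 4.0), sink))  # False: on a line through the sink
```

A real selection procedure would repeat this test while drawing random phantom candidates until a valid pair is found.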
A. Abuzneid, T. Sobh and M. Faezipour, “An Enhanced Communication Protocol for Anonymity and Location Privacy in WSN,” Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE, New Orleans, LA, 2015, pp. 91-96. doi: 10.1109/WCNCW.2015.7122535
Abstract: Wireless sensor networks (WSNs) consist of many sensors working as hosts. These sensors can sense a phenomenon and represent it in the form of data. There are many applications for WSNs, such as object tracking and monitoring, where the objects need protection. Providing an efficient location privacy solution is challenging to achieve due to the exposed nature of the WSN. The communication protocol needs to provide location privacy measured by anonymity, observability, capture-likelihood and safety period. We extend this work to allow for countermeasures against semi-global and global adversaries. We present a network model that is protected against sophisticated passive and active attacks by local, semi-global, and global adversaries.
Keywords: protocols; telecommunication security; wireless sensor networks; WSN; active attacks; anonymity; capture-likelihood; communication protocol enhancement; global adversaries; local adversaries; location privacy; object tracking; observability; passive attacks; safety period; semiglobal adversaries; Conferences; Energy efficiency; Internet of things; Nickel; Privacy; Silicon; Wireless sensor networks; contextual privacy; privacy; sink privacy; source location privacy (ID#: 16-11129)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122535&isnumber=7122513
A. Butean, A. David, C. Buduleci and A. Daian, “Auxilum Medicine: A Cloud Based Platform for Real-Time Monitoring Medical Devices,” Control Systems and Computer Science (CSCS), 2015 20th International Conference on, Bucharest, 2015, pp. 874-879. doi: 10.1109/CSCS.2015.135
Abstract: Nowadays, time is a very valuable resource and can make the difference between life and death. Knowing this, we decided to deal with one of the most important aspects of contemporary medicine: EMS (emergency medical services) response time. Modern systems that encourage intelligent communication methods between medical devices and doctors are a must in ubiquitous health care environments. Auxilum Medicine fosters a triple-win situation in the relationship between medical institutions, doctors and patients. Emergency patients should be treated with utmost care because their life is hanging by a thread if nobody is present to take immediate action. We present a platform which enables doctors to simultaneously monitor a large number of patients in different physical locations. By receiving real-time notifications, medical history and prevention alarms directly on any network-connected device (mobile phones, tablets, desktops, notebooks, smart watches, etc.), the medical staff can act promptly, exactly when and where it is needed, in order to save human lives. Our solution's architecture allows gathering data from any medical signal processing unit and sending it straight to the cloud using encrypted communication protocols. What makes Auxilum Medicine unique is its cloud integration with the hospital departments' structure, awareness of different medical staff roles and capabilities, attention to data privacy, updates sent to patients' relatives, as well as a modern, responsive, adaptive user interface. As part of our experiment, aimed at testing our platform's capabilities, we built a biomedical wireless wearable sensor device that provides real-time parameters (temperature and heart rate). Such a system enables real-time monitoring of medical equipment using cloud services and permanently keeps alive the link between doctors and their patients, drastically improving the EMS response time.
Keywords: cloud computing; health care; medical computing; EMS response time; adaptive user interface; auxilum medicine; biomedical wireless sensor wearable device; cloud based platform; cloud integration; contemporary medicine; emergency medical services; emergency patients; heart rate parameter; intelligent communication methods; privacy data interest; realtime medical device monitoring; temperature parameter; ubiquitous health care environment; Medical diagnostic imaging; Medical services; Real-time systems; Sensors; Wireless communication; Wireless sensor networks; diagnostic systems; ehealth; medical devices; wireless sensors (ID#: 16-11130)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168529&isnumber=7168393
C. Lyu, A. Pande, X. Wang, J. Zhu, D. Gu and P. Mohapatra, “CLIP: Continuous Location Integrity and Provenance for Mobile Phones,” Mobile Ad Hoc and Sensor Systems (MASS), 2015 IEEE 12th International Conference on, Dallas, TX, 2015, pp. 172-180. doi: 10.1109/MASS.2015.33
Abstract: Many location-based services require a mobile user to continuously prove his location. In the absence of a secure mechanism, malicious users may lie about their locations to get these services. A mobility trace, a sequence of past mobility points, provides evidence for the user's locations. In this paper, we propose a Continuous Location Integrity and Provenance (CLIP) scheme to provide authentication for mobility traces and protect users' privacy. CLIP uses a low-power inertial accelerometer sensor with a light-weight entropy-based commitment mechanism and is able to authenticate the user's mobility trace without any cost of trusted hardware. CLIP maintains the user's privacy, allowing the user to submit a portion of his mobility trace with which the commitment can also be verified. Wireless Access Points (APs) or co-located mobile devices are used to generate the location proofs. We also propose a light-weight spatial-temporal trust model to detect fake location proofs from collusion attacks. The prototype implementation on Android demonstrates that CLIP requires low computational and storage resources. Our extensive simulations show that the spatial-temporal trust model can achieve high (> 0.9) detection accuracy against collusion attacks.
Keywords: data privacy; mobile computing; mobile handsets; radio access networks; AP; CLIP; computational resources; continuous location integrity and provenance; light-weight entropy-based commitment mechanism; location-based services; low-power inertial accelerometer sensor; mobile phones; mobility trace; storage resources; user privacy; wireless access points; Communication system security; Mobile communication; Mobile handsets; Privacy; Security; Wireless communication; Wireless sensor networks (ID#: 16-11131)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366930&isnumber=7366897
D. Wu, J. Du, D. Zhu and S. Wang, “A Simple RFID-Based Architecture for Privacy Preservation,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 1224-1229. doi: 10.1109/Trustcom.2015.509
Abstract: With the rapid development of the Internet of Things all over the world, it is very promising to investigate one of its main issues, i.e., localization. However, it has also brought a challenge to privacy; both source privacy and location privacy are addressed in this paper. We first review some approaches that deal with source privacy and location privacy, respectively. Then a simple radio frequency identification (RFID) based architecture is proposed in this paper to preserve the privacy of the target object. This architecture can effectively hide the presence of the target object against adversaries. Meanwhile, the location information of the target object can also be preserved by simply transferring the ID information, rather than the location information. Compared with other approaches, the proposed architecture is convenient to implement and protects the privacy without high computational complexity or additional supplements. Finally, a privacy analysis is presented to demonstrate the performance of the proposed architecture in terms of source and location privacy preservation.
Keywords: data privacy; radiofrequency identification; ID information; Internet of Things; location information; location privacy; privacy preservation; radiofrequency identification; simple RFID-based architecture; source privacy; target object; Computer architecture; Monitoring; Privacy; RFID tags; Wireless communication; Wireless sensor networks; RFID; localization; privacy (ID#: 16-11132)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345417&isnumber=7345233
A. Basiri, P. Peltola, P. Figueiredo e Silva, E. S. Lohan, T. Moore and C. Hill, “Indoor Positioning Technology Assessment Using Analytic Hierarchy Process for Pedestrian Navigation Services,” Localization and GNSS (ICL-GNSS), 2015 International Conference on, Gothenburg, 2015, pp. 1-6. doi: 10.1109/ICL-GNSS.2015.7217157
Abstract: Indoor positioning is one of the biggest challenges of many Location Based Services (LBS), especially if the target users are pedestrians, who spend most of their time in roofed areas such as houses, offices, airports, shopping centres and in general indoors. Providing pedestrians with accurate, reliable, cheap, low power consuming and continuously available positional data inside the buildings (i.e. indoors) where GNSS signals are not usually available is difficult. Several positioning technologies can be applied as stand-alone indoor positioning technologies. They include Wireless Local Area Networks (WLAN), Bluetooth Low Energy (BLE), Ultra-Wideband (UWB), Radio Frequency Identification (RFID), Tactile Floor (TF), Ultra Sound (US) and High Sensitivity GNSS (HSGNSS). This paper evaluates the practicality and fitness-to-the-purpose of pedestrian navigation for these stand-alone positioning technologies to identify the best one for the purpose of indoor pedestrian navigation. In this regard, the most important criteria defining a suitable positioning service for pedestrian navigation are identified and prioritised. They include accuracy, availability, cost, power consumption and privacy. Each technology is evaluated according to each criterion using Analytic Hierarchy Process (AHP) and finally the combination of all weighted criteria and technologies are processed to identify the most suitable solution.
Keywords: indoor navigation; indoor radio; satellite navigation; Bluetooth low energy; GNSS; RFID; WLAN; airports; analytic hierarchy process; buildings; houses; indoor pedestrian navigation; indoor positioning; location based services; offices; power consumption; radio frequency identification; shopping centres; tactile floor; ultra sound; wireless local area networks; Accuracy; Floors; Global Positioning System; Power demand; Privacy; Wireless LAN; Analytic Hierarchy Process (AHP); Indoor Positioning; Pedestrian Navigation (ID#: 16-11133)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7217157&isnumber=7217133
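The AHP weighting step the paper applies (pairwise comparison of criteria such as accuracy, availability, cost, power consumption and privacy, followed by derivation of priority weights) can be sketched with the common row geometric mean approximation of the principal eigenvector. The comparison values below are hypothetical, not the paper's data.

```python
# Sketch of AHP priority-weight derivation via the row geometric mean method.
# The pairwise comparison matrix below is an illustrative assumption.

def ahp_weights(matrix):
    """Approximate the AHP priority vector of a pairwise comparison matrix."""
    n = len(matrix)
    gmeans = []
    for row in matrix:
        prod = 1.0
        for v in row:
            prod *= v
        gmeans.append(prod ** (1.0 / n))   # geometric mean of each row
    total = sum(gmeans)
    return [g / total for g in gmeans]     # normalize so weights sum to 1

# Hypothetical 3x3 comparison of accuracy, cost, power (Saaty 1-9 scale):
# accuracy is moderately more important than cost, strongly more than power.
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])
```

Each candidate technology would then be scored per criterion the same way, and the weighted scores combined to rank the technologies.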
M. Guo, N. Pissinou and S. S. Iyengar, “Pseudonym-Based Anonymity Zone Generation for Mobile Service with Strong Adversary Model,” Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, Las Vegas, NV, 2015, pp. 335-340. doi: 10.1109/CCNC.2015.7157998
Abstract: The popularity of location-aware mobile devices and the advances of wireless networking have seriously pushed location-based services into the IT market. However, moving users need to report their coordinates to an application service provider to utilize interested services that may compromise user privacy. In this paper, we propose an online personalized scheme for generating anonymity zones to protect users with mobile devices while on the move. We also introduce a strong adversary model, which can conduct inference attacks in the system. Our design combines a geometric transformation algorithm with a dynamic pseudonyms-changing mechanism and user-controlled personalized dummy generation to achieve strong trajectory privacy preservation. Our proposal does not involve any trusted third-party and will not affect the existing LBS system architecture. Simulations are performed to show the effectiveness and efficiency of our approach.
Keywords: authorisation; data privacy; mobile computing; IT market; LBS system architecture; anonymity zone generation; application service provider; dynamic pseudonyms-changing mechanism; geometric transformation algorithm; inference attacks; location-aware mobile devices; location-based services; mobile devices; mobile service; online personalized scheme; pseudonym-based anonymity zone generation; strong-adversary model; strong-trajectory privacy preservation; user data protection; user privacy; user-controlled personalized dummy generation; wireless networking; Computational modeling; Privacy; Quality of service; Anonymity Zone; Design; Geometric; Location-based Services; Pseudonyms; Trajectory Privacy Protection (ID#: 16-11134)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157998&isnumber=7157933
S. Imran, R. V. Karthick and P. Visu, “DD-SARP: Dynamic Data Secure Anonymous Routing Protocol for MANETs in Attacking Environments,” Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2015 International Conference on, Chennai, 2015, pp. 39-46. doi: 10.1109/ICSTM.2015.7225388
Abstract: The most important application of MANETs is to maintain anonymous communication in attacking environments. Though many anonymous protocols for secure routing have been proposed, the proposed solutions happen to be vulnerable at some point. Service rejection (DoS) attacks and timing attacks make both the system and the protocol vulnerable. This paper studies and discusses the various existing protocols and how efficient they are in an attacking environment. The protocols include ALARM: Anonymous Location-Aided Routing in Suspicious MANET; ARM: Anonymous Routing Protocol for Mobile Ad Hoc Networks; Privacy-Preserving Location-Based On-Demand Routing in MANETs; AO2P: Ad Hoc on-Demand Position-Based Private Routing Protocol; and Anonymous Connections. In this paper we propose a new concept by combining two proposed protocols based on geographical location: ALERT, which is based mainly on node-to-node hop encryption and bursty traffic, and Greedy Perimeter Stateless Routing (GPSR), a geographical location based protocol for wireless networks that uses the router's position and a packet's destination to forward packets. It follows a greedy method of forwarding using information about the immediate neighboring router in the network. Simulation results demonstrate the efficiency of the proposed DD-SARP protocol, with improved performance compared to the existing protocols.
Keywords: mobile ad hoc networks; routing protocols; telecommunication security; ALARM; ALERT; AO2P; Ad Hoc on-Demand Position-Based Private Routing Protocol, Anonymous Connections; Anonymous Location-Aided Routing in Suspicious MANET; Anonymous Routing Protocol for Mobile Ad Hoc Networks; DD-SARP; DoS; GPSR; Greedy Perimeter Stateless Routing; anonymous communication; attacking environments; bursty traffic; dynamic data secure anonymous routing protocol; geographical location; neighboring router; node-to-node hop encryption; packet destination; packet forwarding; privacy-preserving location-based on-demand routing; router position; secure routing; service rejection attacks; timing attacks; Ad hoc networks; Encryption; Mobile computing; Public key; Routing; Routing protocols; Mobile adhoc network; adversarial; anonymous; privacy (ID#: 16-11135)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225388&isnumber=7225373
R. Ganvir and V. Mahalle, “An Overview of Secure Friend Matching in Mobile Social Networks,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-4. doi: 10.1109/ICIIECS.2015.7193084
Abstract: Mobile users constitute one of the biggest areas of growth in the online market, so it is no surprise that mobile social networks have become more and more popular. Mobile social networks have also become quite as sophisticated as non-mobile powerhouses like Facebook. Positioning technologies, such as wireless localization techniques and the Global Positioning System, give rise to location-aware social networks. They allow mobile users to connect and converse with each other within a local physical proximity, based on criteria such as similar interests and hobbies. But the location data posted to social networks are a revealing source, too. Hence friend matching has become the sensitive part of mobile social networks. It is a real challenge for developers to preserve the privacy of users' private information. This paper briefly looks into the generalized friend matching process in mobile social networks and also gives an overview of the various privacy-preserving friend matching schemes that have already been established.
Keywords: Global Positioning System; data privacy; mobile computing; security of data; social networking (online); telecommunication security; Facebook; local physical proximity; location-aware social networks; mobile social networks; non mobile powerhouses; online market; positioning technologies; privacy preserving friend matching schemes; secure friend matching; users private information privacy preservation; wireless localization techniques; Frequency modulation; Mobile communication; Mobile computing; Privacy; Protocols; Security; Social network services; Friend Matching; Mobile Social Networks; Privacy Preserving (ID#: 16-11136)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193084&isnumber=7192777
A. K. Tyagi and N. Sreenath, “Location Privacy Preserving Techniques for Location Based Services over Road Networks,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 1319-1326. doi: 10.1109/ICCSP.2015.7322723
Abstract: With the rapid development of wireless and mobile technologies, privacy of the personal location information of vehicular ad-hoc network (VANET) users in location-based services (LBSs) is becoming an increasingly important issue. While LBSs provide enhanced functionalities, they open up new vulnerabilities that can be exploited to cause security and privacy breaches. During communication in LBSs, individuals (vehicle users) face privacy risks (for example location privacy, identity privacy, data privacy, etc.) when providing personal location data to potentially untrusted LBSs. However, as vehicle users with mobile (or wireless) devices are highly autonomous and heterogeneous, it is challenging to design generic location privacy protection techniques with a desired level of protection. Location privacy is an important issue in vehicular networks since knowledge of a vehicle's location can result in leakage of sensitive information. This paper focuses on and discusses both potential location privacy threats and preserving mechanisms in LBSs over road networks. The research proposed in this paper carries significant intellectual merit and potential broader impacts: (a) it investigates the impact of inferential attacks (for example inference attacks, position co-relation attacks, transition attacks and timing attacks) on LBSs for VANET users, and proves the vulnerability of using long-term pseudonyms (or other approaches like silent periods, random encryption periods, etc.) for camouflaging users' real identities; (b) an effective and extensible location privacy architecture that combines a mix zone model with other approaches to protect location privacy is discussed; (c) the paper addresses the location privacy preservation problems in detail from a novel angle and provides a solid foundation for future research on protecting users' location information.
Keywords: data privacy; mobile computing; risk management; road traffic; security of data; telecommunication security; vehicular ad hoc networks; VANET; extensible location privacy architecture; identity privacy; inference attack; intellectual merits; location privacy preserving techniques; location privacy threats; location-based services; long-term pseudonyms; mix zone model; mobile technologies; personal location information; position corelation attack; privacy breach; privacy risks; road networks; security breach; timing attack; transition attack; vehicle ad-hoc network; wireless technologies; Communication system security; Mobile communication; Mobile computing; Navigation; Privacy; Vehicles; Wireless communication; Location privacy; Location-Based Service; Mix zones; Mobile networks; Path confusion; Pseudonyms; k-anonymity (ID#: 16-11137)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322723&isnumber=7322423
B. Huang, Y. Feng, X. Li and Q. Huang, “An Angle-Based Directed Random Walk Privacy Enhanced Routing Protocol for WSNs,” 2015 International Conference on Information and Communications Technologies (ICT 2015), Xi'an, 2015, pp. 1-5. doi: 10.1049/cp.2015.0229
Abstract: Owing to the vulnerability of the wireless transmission medium, wireless sensor networks (WSNs) face severe privacy problems. To protect the source location privacy, this paper proposes a novel Angle-Based Directed Random Walk (ABDRW) routing protocol, which makes the selection of the phantom source more flexible and its distribution relatively uniform. What's more, this approach can generate more distinct phantom sources that are far away from the real source, and thus enhances the source location privacy protection. Compared with several existing typical methods such as sector-based directed random walk, hop-based directed random walk and RRIN, our proposed ABDRW protocol can reach higher source location privacy protection through flexible selection and a more uniform distribution of phantom sources.
Keywords: Phantom source; privacy enhanced routing; source location privacy (ID#: 16-11138)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7426027&isnumber=7425988
M. Grissa, A. Yavuz and B. Hamdaoui, “LPOS: Location Privacy for Optimal Sensing in Cognitive Radio Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417611
Abstract: Cognitive Radio Networks (CRNs) enable opportunistic access to the licensed channel resources by allowing unlicensed users to exploit vacant channel opportunities. One effective technique through which unlicensed users, often referred to as Secondary Users (SUs), determine whether a channel is vacant is cooperative spectrum sensing. Despite its effectiveness in enabling CRN access, cooperative sensing suffers from location privacy threats, merely because the sensing reports that need to be exchanged among the SUs to perform the sensing task are highly correlated with the SUs' locations. In this paper, we develop a new Location Privacy for Optimal Sensing (LPOS) scheme that preserves the location privacy of SUs while achieving optimal sensing performance through voting-based sensing. In addition, LPOS is the only alternative among existing CRN location privacy preserving schemes (to the best of our knowledge) that ensures high privacy, achieves fault tolerance, and is robust against the highly dynamic and wireless nature of CRNs.
Keywords: cognitive radio; telecommunication security; wireless channels; CRN; LPOS scheme; SU; channel opportunities; cognitive radio networks; cooperative spectrum sensing; licensed channel resources; location privacy for optimal sensing; location privacy threats; secondary users; Encryption; Fault tolerance; Fault tolerant systems; Privacy; Sensors (ID#: 16-11139)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417611&isnumber=7416057
U. Rajput, F. Abbas, H. Eun, R. Hussain and H. Oh, “A Two Level Privacy Preserving Pseudonymous Authentication Protocol for VANET,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 643-650. doi: 10.1109/WiMOB.2015.7348023
Abstract: Vehicular ad hoc networks (VANETs) are gaining significant popularity due to their role in improving traffic efficiency and safety. However, communication in a VANET needs to be secure as well as authenticated. The vehicles in the VANET not only broadcast traffic messages, known as beacons, but also broadcast safety-critical messages such as the electronic emergency brake light (EEBL). Due to the openness of the network, malicious vehicles can join the network and broadcast bogus messages that could result in accidents. On one hand, a vehicle needs to be authenticated, while on the other hand, its private data such as location and identity information must be prevented from misuse. In this paper, we propose an efficient pseudonymous authentication protocol with conditional privacy preservation to enhance the security of VANET. Most of the current protocols either utilize pseudonym-based approaches with a certificate revocation list (CRL), which causes significant communication and storage overhead, or group-signature-based approaches, which are computationally expensive. Another inherent disadvantage is having to place full trust in certification authorities, as these entities hold complete user profiles. We present a new protocol that only requires honest-but-curious behavior from the certification authority. We utilize a mechanism that provides a user with two levels of pseudonyms, named base pseudonyms and short-time pseudonyms, to achieve conditional privacy. However, in case of revocation, there is no need to maintain a revocation list of pseudonyms. The inherent mechanism assures the receiver of a message of the authenticity of the pseudonym. At the end of the paper, we analyze our protocol by giving the communication cost as well as various attack scenarios to show that our approach is efficient and robust.
Keywords: cryptographic protocols; telecommunication security; vehicular ad hoc networks; CRL; EEBL; VANET; certificate revocation list; certification authority; communication cost; communicational overhead; conditional privacy preservation; electronic emergency brake light; group signature based approach; honest-but-curious behavior; safety critical message broadcasting; storage overhead; traffic message broadcasting; two level privacy preserving pseudonymous authentication protocol; vehicular ad hoc network; Authentication; Cryptography; Privacy; Protocols; Vehicles; Vehicular ad hoc networks; Vehicular ad hoc networks (VANET); authentication; conditional privacy; pseudonyms (ID#: 16-11140)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348023&isnumber=7347915
M. Maier, L. Schauer and F. Dorfmeister, “ProbeTags: Privacy-Preserving Proximity Detection Using Wi-Fi Management Frames,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 756-763. doi: 10.1109/WiMOB.2015.7348038
Abstract: Since the beginning of the ubiquitous computing era, context-aware applications have been envisioned and pursued, with location and especially proximity information being one of the primary building blocks. To date, there is still a lack of feasible solutions to perform proximity tests between mobile entities in a privacy-preserving manner, i.e., one that does not disclose one's location in case the other party is not in proximity. In this paper, we present our novel approach based on location tags built from surrounding Wi-Fi signals originating only from mobile devices. Since the set of mobile devices at a given location changes over time, this approach ensures the user's privacy when performing proximity tests. To improve the robustness of similarity calculations, we introduce a novel extension of the commonly used cosine similarity measure to allow for weighing its components while preserving the signal strength semantics. Our system is evaluated extensively in various settings, ranging from office scenarios to crowded mass events. The results show that our system allows for robust short-range proximity detection while preserving the participants' privacy.
Keywords: computer network management; computer network security; data privacy; mobile computing; wireless LAN; ProbeTags; Wi-Fi management frames; Wi-Fi signals; context-aware applications; cosine similarity measure; location tags; mobile devices; mobile entities; privacy-preserving proximity detection; proximity tests; signal strength semantics; similarity calculation robustness improvement; ubiquitous computing era; Euclidean distance; IEEE 802.11 Standard; Mobile communication; Mobile computing; Mobile handsets; Privacy; Wireless communication; 802.11; location-based services; proximity detection (ID#: 16-11141)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348038&isnumber=7347915
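The weighted extension of cosine similarity that the ProbeTags abstract mentions could be realized along the following lines; the per-component weighting scheme and the sample signal values are illustrative assumptions, not the authors' exact formula.

```python
import math

def weighted_cosine(a, b, w):
    """Cosine similarity of vectors a and b with per-component weights w."""
    num = sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
    den = (math.sqrt(sum(wi * ai * ai for wi, ai in zip(w, a)))
           * math.sqrt(sum(wi * bi * bi for wi, bi in zip(w, b))))
    return num / den if den else 0.0

# Hypothetical location tags: one signal-strength value per observed mobile
# device, with a lower weight on an unstable component.
tag_a = [0.9, 0.4, 0.0, 0.7]
tag_b = [0.8, 0.5, 0.1, 0.6]
weights = [1.0, 1.0, 0.5, 1.0]
print(round(weighted_cosine(tag_a, tag_b, weights), 3))
```

With uniform weights this reduces to the standard cosine measure, which preserves the signal strength semantics the abstract emphasizes.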
M. Ben Brahim, E. Ben Hamida, F. Filali and N. Hamdi, “Performance Impact of Security on Cooperative Awareness in Dense Urban Vehicular Networks,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 268-274. doi: 10.1109/WiMOB.2015.7347971
Abstract: Cooperative Intelligent Transport Systems (C-ITS) communication technology is expected to be the near-future pioneer in the traffic management and road-safety control area by provisioning timely, accurate and location-aware information. The data generated by connected vehicles may be privacy-sensitive and could be hacked by intrusive receivers. In order to prevent malicious sources from injecting untrusted data content, relevant ITS standards include security processes and protocols that deal with the potential architecture-imposed security vulnerabilities. In this paper we study the impact of these processes on time-sensitive and safety-related applications. In this regard, we deeply investigate the ITS architecture integrating the security components and evaluate its performance through extensive simulations, for sparse to dense networks of vehicles, in terms of delay and packet delivery ratio. We consider this work an important step towards understanding the tradeoff between security and communication efficiency in V2X networks.
Keywords: cooperative communication; cryptography; intelligent transportation systems; protocols; vehicular ad hoc networks; C-ITS communication technology; V2X networks; cooperative awareness; cooperative intelligent transport system; dense urban vehicular networks; location-aware information; road-safety control area; traffic management; Computer aided manufacturing; Computer architecture; Protocols; Safety; Security; Standards; Wireless communication; Cooperative Awareness; Elliptic Curve Digital Signature Algorithm; Safety Applications; Security; V2X Communications (ID#: 16-11142)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7347971&isnumber=7347915
K. Sharma and B. K. Chaurasia, “Trust Based Location Finding Mechanism in VANET Using DST,” Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, Gwalior, 2015, pp. 763-766. doi: 10.1109/CSNT.2015.160
Abstract: In the near future, vehicular ad-hoc networks (VANETs) will help to improve traffic safety and efficiency. Unfortunately, a VANET faces a set of challenges in security, privacy and the detection of misbehaving vehicles. In addition, there is a need to recognize false messages among received messages in VANETs while moving on the road. In this work, the application of the Dempster-Shafer theorem (DST) for computing trust in the VANET environment for location finding is presented. Trust-based location finding in VANETs is necessary to deter the broadcast of selfish or malicious messages and also to enable other vehicles to filter out such messages. Results show that the proposed scheme is viable for the VANET environment.
Keywords: inference mechanisms; traffic engineering computing; trusted computing; vehicular ad hoc networks; DST; Dempster-Shafer theorem; VANET; false messages; malicious messages; received messages; selfish messages; traffic safety; trust based location finding; trust computing; vehicular ad-hoc networks; Communication system security; Roads; Safety; Vehicles; Vehicular ad hoc networks; Wireless communication; I2V; Plausibility; Trust; V2I; V2V (ID#: 16-11143)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280021&isnumber=7279856
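The trust computation in the paper above rests on Dempster's rule of combination. As a generic illustration only (not the authors' implementation; the mass assignments are invented), evidence from two vehicles about whether a received message is Trusted (T) or Malicious (M) can be fused like this:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal elements)
    using Dempster's rule: sum conflict-free products, then renormalize."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Illustrative evidence: message is Trusted (T), Malicious (M), or unknown (T u M)
T, M = frozenset("T"), frozenset("M")
TM = T | M
m1 = {T: 0.6, M: 0.1, TM: 0.3}
m2 = {T: 0.5, M: 0.2, TM: 0.3}
fused = dempster_combine(m1, m2)
```

After fusion the belief committed to T grows relative to either source alone, which is the behavior a trust-based message filter exploits.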
S. Seneviratne, F. Jiang, M. Cunche and A. Seneviratne, “SSIDs in the Wild: Extracting Semantic Information from WiFi SSIDs,” Local Computer Networks (LCN), 2015 IEEE 40th Conference on, Clearwater Beach, FL, 2015, pp. 494-497. doi: 10.1109/LCN.2015.7366361
Abstract: WiFi networks are becoming increasingly ubiquitous. In addition to providing network connectivity, WiFi finds applications in areas such as indoor and outdoor localisation, home automation, and physical analytics. In this paper, we explore the semantics of one key attribute of a WiFi network, the SSID name. Using a dataset of approximately 120,000 WiFi access points and their corresponding geo-locations, we use a set of similarity metrics to relate SSID names to known business venues such as cafes, theatres, and shopping centres. Such correlations can be exploited by an adversary with access to smartphone users' preferred network lists to build an accurate profile of the user, and thus can pose a privacy risk to those users.
Keywords: computer network security; data privacy; wireless LAN; SSID; SSID name attribute; WiFi SSID; WiFi networks; Wireless Fidelity; privacy risk; semantic information extraction; service set identifier; similarity metrics; smartphone users preferred networks; user profile; Business; IEEE 802.11 Standard; Measurement; Privacy; Semantics (ID#: 16-11144)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366361&isnumber=7366232
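A similarity metric of the kind the abstract mentions can be as simple as Jaccard similarity over character n-grams. The sketch below is illustrative only (not the paper's actual metric or dataset) and shows why an SSID can leak a venue name:

```python
def ngrams(text, n=3):
    """Set of lowercase character n-grams of a string."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity |A & B| / |A | B| over character n-grams."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# An SSID that embeds a venue name scores higher against that venue
assert jaccard("JoesCafe_Guest", "Joes Cafe") > jaccard("JoesCafe_Guest", "City Theatre")
```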
M. Reininger, S. Miller, Y. Zhuang and J. Cappos, “A First Look at Vehicle Data Collection via Smartphone Sensors,” Sensors Applications Symposium (SAS), 2015 IEEE, Zadar, 2015, pp. 1-6. doi: 10.1109/SAS.2015.7133607
Abstract: Smartphones serve as a technical interface to the outside world. These devices have embedded, on-board sensors (such as accelerometers, WiFi, and GPSes) that can provide valuable information for investigating users' needs and behavioral patterns. Similarly, computers that are embedded in vehicles are capable of collecting valuable sensor data that can be accessed by smartphones through the use of On-Board Diagnostics (OBD) sensors. This paper describes a prototype of a mobile computing platform that provides access to vehicles' sensors by using smartphones and tablets, without compromising these devices' security. Data such as speed, engine RPM, fuel consumption, GPS locations, etc. are collected from moving vehicles by using a WiFi On-Board Diagnostics (OBD) sensor, and then backhauled to a remote server for both real-time and offline analysis. We describe the design and implementation details of our platform, for which we developed a library for in-vehicle sensor access and created a non-relational database for scalable backend data storage. We propose that our data collection and visualization tools are useful for analyzing driving behaviors; we also discuss future applications, security, and privacy concerns specific to vehicular networks.
Keywords: on-board communications; smart phones; vehicles; wireless LAN; WiFi on-board diagnostics sensor; data collection; mobile computing platform; moving vehicles; offline analysis; real-time analysis; smartphone sensors; tablets; vehicle data collection; visualization tools; Data collection; IEEE 802.11 Standards; Prototypes; Security; Sensors; Servers; Vehicles; Smartphone sensors; data visualization and analysis; vehicular networks (ID#: 16-11145)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133607&isnumber=7133559
R. Hussain, D. Kim, A. O. Tokuta, H. M. Melikyan and H. Oh, “Covert Communication Based Privacy Preservation in Mobile Vehicular Networks,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 55-60. doi: 10.1109/MILCOM.2015.7357418
Abstract: Due to the dire consequences of privacy abuse in vehicular ad hoc networks (VANETs), a number of mechanisms have been put forth to conditionally preserve user and location privacy. To date, the multiple-pseudonym approach is regarded as one of the most effective solutions, where every node uses multiple temporary pseudonyms. However, it has recently been found that even multiple pseudonyms can be linked to each other and to a single node, thereby jeopardizing privacy. Therefore, in this paper we propose a novel identity exchange-based approach to preserve user privacy in VANETs, where a node exchanges its pseudonyms with its neighbors and uses both its own and its neighbors' pseudonyms at random to preserve privacy. Additionally, the revocation of the immediate user of a pseudonym is made possible through an efficient revocation mechanism. Moreover, the pseudonym exchange is realized through covert communication, where a side channel is used to establish a covert communication path between the exchanging nodes, based on the scheduled beacons. Our proposed scheme is secure, robust, and preserves privacy through the existing beacon infrastructure.
Keywords: data privacy; telecommunication security; vehicular ad hoc networks; wireless channels; VANET; beacon infrastructure; covert communication based user privacy preservation; identity exchange-based approach; mobile vehicular network; multiple pseudonymous approach; privacy abuse dire consequence; revocation mechanism; side channel; vehicular ad hoc network; Cryptography; Privacy; Standards; Transmission line measurements; Vehicles; Vehicular ad hoc networks; Beacons; Conditional Privacy; Covert Communication; Pseudonyms (ID#: 16-11146)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357418&isnumber=7357245
V. Sharma and C.-C. Shen, “Evaluation of an Entropy-Based K-Anonymity Model for Location Based Services,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 374-378. doi: 10.1109/ICCNC.2015.7069372
Abstract: As the market for cellular telephones and other mobile devices keeps growing, the demand for new services arises to attract end users. Location Based Services (LBS) are becoming important to the success and attractiveness of next generation wireless systems. To access location-based services, mobile users have to disclose their location information to service providers and third party applications. This raises privacy concerns, which have hampered the widespread use of LBS. Location privacy mechanisms include anonymization, obfuscation, policy-based schemes, k-anonymity, and adding fake events; however, most existing solutions adopt the k-anonymity principle. We propose an entropy-based location privacy mechanism to protect user information against attackers. We look at the effectiveness of the technique in continuous LBS scenarios, i.e., where users are moving and recurrently requesting Location Based Services, and we also evaluate the overall performance of the system along with its drawbacks.
Keywords: data protection; mobile handsets; mobility management (mobile radio); next generation networks; LBS; cellular telephone; entropy-based k-anonymity model evaluation; location based service; location privacy mechanism; mobile device; mobile user; next generation wireless system; policy based scheme; user information protection; Computational modeling; Conferences; Entropy; Measurement; Mobile communication; Privacy; Query processing; Location Based Services (LBS); entropy; k-anonymity; privacy (ID#: 16-11147)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069372&isnumber=7069279
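The entropy measure underlying schemes like the one above is standard Shannon entropy over the attacker's probability distribution on the anonymity set: the closer the distribution is to uniform over k candidates, the closer the privacy level is to the k-anonymity ideal of log2(k) bits. A minimal generic sketch (not the paper's exact model):

```python
import math

def location_entropy(probs):
    """Shannon entropy (in bits) of the attacker's distribution over
    candidate users in the anonymity set; higher means more private."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# k = 4 users equally likely -> log2(4) = 2.0 bits, the k-anonymity ideal
uniform = location_entropy([0.25] * 4)
# A skewed distribution leaks information: entropy drops below 2.0 bits
skewed = location_entropy([0.7, 0.1, 0.1, 0.1])
```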
C. J. Bernardos, J. C. Zúñiga and P. O'Hanlon, “Wi-Fi Internet Connectivity and Privacy: Hiding Your Tracks on the Wireless Internet,” Standards for Communications and Networking (CSCN), 2015 IEEE Conference on, Tokyo, 2015, pp. 193-198. doi: 10.1109/CSCN.2015.7390443
Abstract: Internet privacy is a serious concern nowadays. Users' activity leaves a vast digital footprint, communications are not always properly secured, and location can be easily tracked. In this paper we focus on this last point, which is mainly caused by the use of immutable IEEE Layer-2 addresses. Randomization of the addresses used at Layer-2 is a simple but promising solution to mitigate the location privacy issues. We experimentally evaluate this approach, by first assessing the existing support for address randomization in different operating systems, and then conducting several trials during two IETF and one IEEE 802 standards meetings. Based on the obtained results we conclude that address randomization is a feasible solution to the Layer-2 privacy problem, but other mechanisms need to be used at higher layers to get the most benefit from it and to minimize the service disruptions it may cause. As a conclusion of the paper and future steps, we discuss the possibility of using a context-based Layer-2 address randomization scheme that can be enabled with privacy features at higher layers.
Keywords: Internet; computer network security; wireless LAN; IEEE 802 standards meetings; IEEE Layer-2 immutable addresses; Wi-Fi Internet connectivity; Wi-Fi Internet privacy; different operating systems; digital footprint; location privacy issues; wireless Internet; IEEE 802.11 Standard; Internet; Operating systems; Performance evaluation; Privacy; Protocols; Yttrium (ID#: 16-11148)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7390443&isnumber=7390405
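The core of Layer-2 address randomization is simple: pick a random MAC with the locally-administered bit set and the individual/group (multicast) bit clear in the first octet. A minimal sketch (illustrative only, not tied to any particular operating system's implementation):

```python
import random

def random_private_mac():
    """Return a random MAC address string with the locally-administered
    bit (0x02) set and the multicast bit (0x01) clear in octet 0."""
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

mac = random_private_mac()
first_octet = int(mac.split(":")[0], 16)
assert first_octet & 0x02 and not first_octet & 0x01
```

Rotating such addresses between probe requests is what prevents a passive observer from linking a device's appearances over time.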
M. Grissa, A. A. Yavuz and B. Hamdaoui, “Cuckoo Filter-Based Location-Privacy Preservation in Database-Driven Cognitive Radio Networks,” Computer Networks and Information Security (WSCNIS), 2015 World Symposium on, Hammamet, 2015, pp. 1-7. doi: 10.1109/WSCNIS.2015.7368280
Abstract: Cognitive Radio Networks (CRNs) enable opportunistic access to licensed channels by allowing secondary users (SUs) to exploit vacant channel opportunities. One effective technique through which SUs determine whether a channel is vacant is querying geo-location databases. Despite their usefulness, geo-location database-driven CRNs suffer from location privacy threats, merely because SUs have to query the database with their exact locations in order to learn about spectrum availability. In this paper, we propose an efficient scheme for database-driven CRNs that preserves the location privacy of SUs while allowing them to learn about available channels in their vicinity. We present a tradeoff between offering ideal location privacy at a high communication overhead, and compromising some of the users' coordinates for a much lower overhead. We also study the effectiveness of the proposed scheme under various system parameters.
Keywords: cognitive radio; data privacy; filtering theory; query processing; radio spectrum management; wireless channels; cuckoo filter-based location-privacy preservation; database query; database-driven cognitive radio network; geo-location database-driven CRN; high communication overhead; licensed channel; secondary user; spectrum availability; vacant channel opportunity exploitation; Data privacy; Databases; Information filters; Privacy; Protocols; Sensors; Cuckoo Filter; Database-driven spectrum availability; cognitive radio networks; location privacy preservation (ID#: 16-11149)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7368280&isnumber=7368275
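For readers unfamiliar with the data structure in the title, a cuckoo filter stores a short fingerprint of each item in one of two candidate buckets, evicting and relocating fingerprints when both are full. The toy sketch below is a generic illustration of the structure itself (not the paper's privacy scheme; bucket counts, fingerprint width, and hash choices are arbitrary):

```python
import random

class CuckooFilter:
    """Toy cuckoo filter using partial-key cuckoo hashing: the alternate
    bucket is derivable from (index, fingerprint) alone, so relocation
    never needs the original item. num_buckets must be a power of two."""
    def __init__(self, num_buckets=128, bucket_size=4, max_kicks=500):
        self.buckets = [[] for _ in range(num_buckets)]
        self.n, self.b, self.max_kicks = num_buckets, bucket_size, max_kicks

    def _fp(self, item):
        return (hash(("fp", item)) & 0xFF) or 1  # 8-bit nonzero fingerprint

    def _i1(self, item):
        return hash(("idx", item)) % self.n

    def _i2(self, i, fp):
        return (i ^ hash(("idx", fp))) % self.n  # involutive for power-of-two n

    def insert(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        i2 = self._i2(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))  # both full: evict a victim and relocate it
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._i2(i, fp)
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        return False  # filter considered full

    def contains(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._i2(i1, fp)]

cf = CuckooFilter()
assert all(cf.insert(c) for c in ("chan1", "chan5", "chan9"))
```

Like a Bloom filter it admits false positives but no false negatives; unlike a Bloom filter it also supports deletion, which is part of its appeal here.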
E. Panaousis, A. Laszka, J. Pohl, A. Noack and T. Alpcan, “Game-Theoretic Model of Incentivizing Privacy-Aware Users to Consent to Location Tracking,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 1006-1013. doi: 10.1109/Trustcom.2015.476
Abstract: Nowadays, mobile users have a vast number of applications and services at their disposal. Each of these might impose some privacy threats on users' “Personally Identifiable Information” (PII). Location privacy is a crucial part of PII, and as such, privacy-aware users wish to maximize it. This privacy can be, for instance, threatened by a company which collects users' traces and shares them with third parties. To maximize their location privacy, users can decide to go offline so that the company cannot localize their devices. The longer a user stays connected to a network, the more services he might receive, but his location privacy decreases. In this paper, we analyze the trade-off between location privacy, the level of services that a user experiences, and the profit of the company. To this end, we formulate a Stackelberg Bayesian game between the User (follower) and the Company (leader). We present theoretical results characterizing the equilibria of the game. To the best of our knowledge, our work is the first to model the economically rational decision-making of the service provider (i.e., the Company) in conjunction with the rational decision-making of users who wish to protect their location privacy. To evaluate the performance of our approach, we have used real data from a testbed, and we have also shown that the game-theoretic strategy of the Company outperforms non-strategic methods. Finally, we have considered different User privacy types, and have determined the service level that incentivizes the User to stay connected as long as possible.
Keywords: data privacy; game theory; mobile computing; PII; Stackelberg Bayesian game; game theoretic model; location privacy; location tracking; mobile users; personally identifiable information; privacy-aware users; user experience; Bayes methods; Companies; Data privacy; Games; IEEE 802.11 Standard; Privacy; Wireless LAN; Game theory; localization; privacy (ID#: 16-11150)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345384&isnumber=7345233
E. Troja and S. Bakiras, “Efficient Location Privacy for Moving Clients in Database-Driven Dynamic Spectrum Access,” Computer Communication and Networks (ICCCN), 2015 24th International Conference on, Las Vegas, NV, 2015, pp. 1-8. doi: 10.1109/ICCCN.2015.7288403
Abstract: Dynamic spectrum access (DSA) is envisioned as a promising framework for addressing the spectrum shortage caused by the rapid growth of connected wireless devices. In contrast to the legacy fixed spectrum allocation policies, DSA allows license-exempt users to access the licensed spectrum bands when not in use by their respective owners. More specifically, in the database-driven DSA model, mobile users issue location-based queries to a white-space database, in order to identify idle channels in their area. To preserve location privacy, existing solutions suggest the use of private information retrieval (PIR) protocols when querying the database. Nevertheless, these methods are not communication efficient and fail to take into account user mobility. In this paper, we address these shortcomings and propose an efficient privacy-preserving protocol based on the Hilbert space filling curve. We provide optimizations for mobile users that require privacy on-the-fly and users that have full a priori knowledge of their trajectory. Through experimentation with two real life datasets, we show that, compared to the current state-of-the-art protocol, our methods reduce the query response time at the mobile clients by a large factor.
Keywords: Hilbert spaces; information retrieval; mobility management (mobile radio); optimisation; protocols; radio spectrum management; wireless channels; Hilbert space filling curve; PIR protocol; database-driven DSA model; database-driven dynamic spectrum access; idle channel identification; legacy fixed spectrum allocation policy; location privacy; location privacy preservation; location-based query; mobile client; mobile user; privacy-preserving protocol; private information retrieval protocol; query response time reduction; white-space database; wireless device; Computer architecture; Databases; Microprocessors; Mobile communication; Privacy; Protocols; Trajectory (ID#: 16-11151)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288403&isnumber=7288342
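The Hilbert space-filling curve used above maps a 1-D index onto 2-D coordinates while preserving locality, so nearby positions along the curve are nearby in space; this is what lets trajectory-aware queries be batched efficiently. A standard iterative decoding (generic, not the paper's optimized PIR protocol) is:

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve covering a 2^order x 2^order
    grid to (x, y) cell coordinates; consecutive d values land in
    adjacent cells."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant so sub-curves join up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Walking the order-2 curve visits all 16 cells, one unit step at a time
cells = [hilbert_d2xy(2, d) for d in range(16)]
```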
L. Zhao, N. Wong Hon Chan, S. J. Yang and R. W. Melton, “Privacy Sensitive Resource Access Monitoring for Android Systems,” Computer Communication and Networks (ICCCN), 2015 24th International Conference on, Las Vegas, NV, 2015, pp. 1-6. doi: 10.1109/ICCCN.2015.7288451
Abstract: Existing works have studied how to collect and analyze human usage of mobile devices, to aid in further understanding of human behavior. Typical data collection utilizes applications or background services installed on the mobile device with user permission to collect user usage data via accelerometer, call logs, location, Wi-Fi transmission, etc. through a data tainting process. Built on the existing work, this research developed a system called Panorama (Privacy-sensitive Resource Access Monitoring for Android Systems) to collect application behavior instead of user behavior. The goal is to provide the means to analyze how background services access mobile resources, and potentially to identify suspicious applications that access sensitive user information. Panorama tracks the access of mobile resources in real time and enhances the concept of taint tracking. Each identified user privacy-sensitive resource is tagged and marked for tracking. The result is a dynamic, real-time tool that monitors the process flow of applications. This paper presents the development of Panorama and a set of analysis with respect to a variety of legitimate application behaviors.
Keywords: Android (operating system); Internet; consumer behaviour; data privacy; mobile computing; smart phones; telecommunication services; wireless LAN; Android systems; Panorama; Wi-Fi transmission; accelerometer; background services; call logs; data collection; data tainting process; human behavior; mobile devices; mobile resources; privacy sensitive resource access monitoring; sensitive user information; taint tracking; user permission; user usage data; Androids; Data collection; Humanoid robots; IP networks; Monitoring; Servers; Smart phones (ID#: 16-11152)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288451&isnumber=7288342
![]() |
Multidimensional Signal Processing 2015 (Part 1) |
Multidimensional signal processing research deals with issues such as those arising in automatic target detection and recognition, geophysical inverse problems, and medical estimation problems. Its goal is to develop methods to extract information from diverse data sources amid uncertainty. Research cited here was presented in 2015.
A. Helal and F. Balasa, “Multithreaded Signal-to-Memory Mapping Algorithm for Embedded Multidimensional Signal Processing,” 2015 20th International Conference on Control Systems and Computer Science, Bucharest, 2015, pp. 255-260. doi: 10.1109/CSCS.2015.22
Abstract: Many signal processing systems, particularly in the multimedia and telecommunication domains, are synthesized to execute data-dominated applications. Their behavior is described in a high-level programming language, where the code is typically organized in sequences of loop nests and the main data structures are multidimensional arrays. This paper proposes a memory management algorithm for mapping multidimensional signals (arrays) to physical memory blocks. The advantages of this novel technique are the following: (a) it can be applied to multilayer memory hierarchies, which makes it particularly useful in embedded systems design, (b) it provides metrics of quality for the overall memory allocation solution: the minimum data storage of each multidimensional signal in the behavioral specification (therefore, the optimal memory sharing between the elements of same arrays), as well as the minimum data storage for the entire specification (therefore, the optimal memory sharing between all the array elements and scalars in the code), (c) it is well-suited to a dynamic multithreading implementation, which makes it computationally efficient.
Keywords: multi-threading; signal processing; storage management; embedded multidimensional signal processing; high-level programming language; memory allocation; memory management algorithm; memory sharing; multidimensional signal mapping; multilayer memory hierarchy; multimedia domain; multithreaded signal-to-memory mapping algorithm; telecommunication domain; Arrays; Hardware; Indexes; Lattices; Memory management; Signal processing algorithms; dynamic multithreading; memory management; memory mapping; multidimensional signal processing (ID#: 16-9760)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168439&isnumber=7168393
F. Balasa, N. Abuaesh, C. V. Gingu and H. Zhu, “Optimization of Memory Banking in Embedded Multidimensional Signal Processing Applications,” 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, 2015, pp. 2880-2883. doi: 10.1109/ISCAS.2015.7169288
Abstract: Hierarchical memory organizations are used in embedded systems to reduce energy consumption and improve performance by assigning the frequently-accessed data to the low levels of memory hierarchy. Within a given level of hierarchy, energy and access times can be further reduced by memory banking. This paper addresses the problem of banking optimization, presenting a dynamic programming approach that takes into account all three major design objectives — energy consumption, performance, and die area, letting the designers decide on their relative importance for a specific project. The time complexity is independent of the size of the storage access trace and of the memory size — a significant advantage in terms of computation speed when these two parameters are large.
Keywords: DRAM chips; cache storage; dynamic programming; signal processing; computation speed; die area; dynamic programming approach; embedded multidimensional signal processing applications; embedded systems; energy consumption reduction; frequently-accessed data assignment; hierarchical memory organizations; low memory hierarchy levels; memory banking optimization; memory size; off-chip DRAM; on-chip SPM; performance improvement; storage access trace; time complexity; Arrays; Banking; Dynamic programming; Energy consumption; Lattices; Memory management; Signal processing algorithms (ID#: 16-9761)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169288&isnumber=7168553
A. Madanayake, C. Wijenayake, Z. Lin and N. Dornback, “Recent Advances in Multidimensional Systems and Signal Processing: An Overview,” 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, 2015, pp. 2365-2368. doi: 10.1109/ISCAS.2015.7169159
Abstract: In this paper, we present an overview of recent advances in multidimensional (MD) systems and signal processing. We focus on topics closely related to the four papers selected into the special session of “recent advances in multidimensional systems and signal processing” at ISCAS 2015. The paper starts with an overview of the theory of MD IIR digital filters and its applications, ranging from images/videos of light fields to microwave and mm-wave antenna array processing. State-space formulation for the realization of MD IIR notch filters is also discussed as applicable to image processing scenarios. Thereafter, new developments in visual tomography-based imaging systems that exploit MD signal processing towards safety and health applications are discussed. The paper also reviews new theoretical developments in modeling of physical systems. Recent advances in MD Kirchhoff circuit realizations for some physical systems, including finite-speed heat diffusion, are reviewed.
Keywords: IIR filters; image processing; microwave antenna arrays; millimetre wave antenna arrays; notch filters; ISCAS 2015; MD IIR digital filters; MD Kirchhoff circuit realizations; MD signal processing; image processing scenarios; microwave antenna array processing; mm-wave antenna array processing; multidimensional systems; visual tomography-based imaging systems (ID#: 16-9762)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169159&isnumber=7168553
A. Madanayake, C. Wijenayake, L. Belostotski and L. T. Bruton, “An Overview of Multi-Dimensional RF Signal Processing for Array Receivers,” Moratuwa Engineering Research Conference (MERCon), 2015, Moratuwa, 2015, pp. 255-259. doi: 10.1109/MERCon.2015.7112355
Abstract: In this review paper, recent advancements in multidimensional (MD) spatio-temporal signal processing for highly-directional radio frequency (RF) antenna array based receivers are discussed. MD network-resonant beamforming filters having infinite impulse response (IIR) and recursive spatio-temporal signal flow graphs are reviewed. The concept of MD network-resonant pre-filtering is described as a modification to existing phased/timed array beamforming back-ends to achieve improved side-lobe performance in the array pattern, leading to better interference rejection capabilities. Both digital and analog signal processing models are described in terms of their system transfer functions and signal flow graphs. Example MD frequency response and RF antenna array pattern simulations are presented.
Keywords: IIR filters; antenna phased arrays; antenna radiation patterns; array signal processing; directive antennas; multidimensional signal processing; radio receivers; radiofrequency interference; recursive filters; signal flow graphs; spatiotemporal phenomena; IIR; MD network-resonant beamforming filter; MD spatiotemporal signal processing; RF antenna array based receiver; analog signal processing; digital signal processing; highly-directional radio frequency antenna array; infinite impulse response; interference rejection capability; multidimensional RF signal processing; recursive spatio-temporal signal flow graph; side-lobe performance; transfer function; Array signal processing; Arrays; Frequency response; Passband; Radio frequency; Receivers; Transfer functions; Multidimensional Signal Processing; Phased Arrays (ID#: 16-9763)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7112355&isnumber=7112293
T. Randeny, A. Madanayake, A. Sengupta, Y. Li and C. Li, “Aperture-Array Directional Sensing Using 2-D Beam Digital Filters with Doppler-Radar Front-Ends,” Moratuwa Engineering Research Conference (MERCon), 2015, Moratuwa, 2015, pp. 265-270. doi: 10.1109/MERCon.2015.7112357
Abstract: A directional sensing algorithm is proposed employing doppler radar and low-complexity 2-D IIR spatially bandpass filters. The speed of the scatterer is determined by the frequency shift of the received signal following down-conversion. The down-conversion is done by mixing it with the instantaneous transmitted signal. The direction of the scatterer is determined by means of 2-D plane-wave spectral characteristics, using 2-D IIR beam filters. The proposed architecture was simulated for three scatterers at 10°, 30°, and 60° from array broadside, traveling at speeds of 31 m/s, 18 m/s and 27 m/s, respectively. A doppler radar module with a carrier frequency of 2.4 GHz was used to transmit and receive reflected signals. Simulations show both direction and doppler information being enhanced.
Keywords: Doppler radar; IIR filters; UHF antennas; UHF filters; antenna arrays; aperture antennas; array signal processing; band-pass filters; directive antennas; radar antennas; radar signal processing; spatial filters; spectral analysis; two-dimensional digital filters; 2D IIR spatially band-pass filter; 2D beam digital filter; Doppler radar front-end; aperture-array directional sensing; array broadside; down conversion; scatterer direction; Arrays; Radar antennas; Radio frequency; Receivers; Sensors; 2-D IIR digital filters; Multidimensional signal processing; cyberphysical systems; directional sensing; doppler; radar (ID#: 16-9764)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7112357&isnumber=7112293
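The frequency shift the abstract relies on is the standard two-way Doppler relation f_d = 2 v f_c / c. As a back-of-the-envelope sketch (not the authors' simulation), the shifts for the three simulated speeds at the 2.4 GHz carrier work out as follows:

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift(speed_mps, carrier_hz=2.4e9):
    """Two-way Doppler shift (Hz) for a scatterer moving radially at
    speed v toward a monostatic radar: f_d = 2 * v * f_c / c."""
    return 2.0 * speed_mps * carrier_hz / C

# The three simulated scatterer speeds: 31, 18 and 27 m/s
shifts = [doppler_shift(v) for v in (31, 18, 27)]  # 496, 288 and 432 Hz
```

At 2.4 GHz each m/s of radial speed contributes 16 Hz of shift, which is why a fine spectral resolution in the temporal dimension is needed alongside the spatial beam filtering.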
M. Barjenbruch, F. Gritschneder, K. Dietmayer, J. Klappstein and J. Dickmann, “Memory Efficient Spectral Estimation on Parallel Computing Architectures,” Signal Processing and Signal Processing Education Workshop (SP/SPE), 2015 IEEE, Salt Lake City, UT, 2015, pp. 337-340. doi: 10.1109/DSP-SPE.2015.7369576
Abstract: A method for spectral estimation is proposed. It is based on the multidimensional extensions of the RELAX algorithm. The fast Fourier transform is replaced by multiple Chirp-Z transforms. Each transform has a much shorter length than the transform in the original algorithm. This reduces the memory requirements significantly. At the same time a high degree of parallelism is preserved. A detailed analysis of the computational requirements is given. Finally, the proposed method is applied to automotive radar measurements. It is shown, that the multidimensional spectral estimation resolves multiple scattering centers on an extended object.
Keywords: Z transforms; estimation theory; fast Fourier transforms; parallel architectures; RELAX algorithm; fast Fourier transform; memory efficient spectral estimation; multidimensional extensions; multidimensional spectral estimation; multiple Chirp-Z transforms; multiple scattering centers; parallel computing architecture; Chirp; Estimation; Memory management; Signal processing algorithms; Transforms; Yttrium; Multidimensional signal processing; harmonic analysis; parallel algorithms; parameter estimation (ID#: 16-9765)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7369576&isnumber=7369513
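The Chirp-Z transform that replaces the FFT above evaluates the z-transform at an arbitrary set of points z_k = A·W^(−k), which is what lets each transform cover only a short frequency span and keep memory small. A naive O(N·M) sketch of the definition (for illustration only; the paper uses fast, memory-efficient parallel implementations):

```python
import cmath

def czt(x, m, w, a):
    """Naive Chirp-Z transform: evaluate the z-transform of x at the m
    points z_k = a * w**(-k), k = 0..m-1."""
    n = len(x)
    return [sum(x[j] * (a * w ** -k) ** -j for j in range(n))
            for k in range(m)]

# Sanity check: with a = 1 and w = exp(-2*pi*i/N), the CZT over N points
# reduces to the ordinary N-point DFT.
N = 8
x = [complex(j) for j in range(N)]
X = czt(x, N, cmath.exp(-2j * cmath.pi / N), 1)
```

Choosing |a| = 1 with w spanning only a fraction of the unit circle zooms the transform into a narrow band, so each of the multiple short transforms needs far less working memory than one long FFT.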
T. A. Palka and R. J. Vaccaro, “Asymptotically Efficient Estimators for Multidimensional Harmonic Retrieval Based on the Geometry of the Stiefel Manifold,” 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2015, pp. 1691-1695. doi: 10.1109/ACSSC.2015.7421437
Abstract: A variety of signal processing applications require multidimensional harmonic retrieval on regular arrays, and R-dimensional subspace-based methods (e.g., R-D Unitary ESPRIT) are often used for this task. The conventional subspace estimation step via an SVD is the MLE under the unconstrained assumption but is suboptimal here, since the harmonic signal structure or constraint is not exploited. Subspace estimation methods such as F/B averaging, HO-SVD, and SLS, which make use of the structure to varying degrees, yield improved performance but remain suboptimal in the sense that they do not satisfy a maximum likelihood criterion. Using a modified complex Stiefel manifold for the domain of the likelihood function, we derive a quadratic ML criterion with a geometric constraint for the R-dimensional problem. This constraint is expressed in terms of the tangent space of an appropriate submanifold. For the special case when the submanifold satisfies a shift-invariance condition, we present a stand-alone estimation algorithm that computes the submanifold tangent space as the null space of a matrix representing the linearized form of a geometric constraint. The estimator's performance is compared to existing approaches and to the intrinsic subspace CRB for highly stressful scenarios of closely spaced and highly correlated sources.
Keywords: maximum likelihood estimation; multidimensional signal processing; singular value decomposition; F/B averaging; R-dimensional problem; R-dimensional subspace; SVD; asymptotically efficient estimators; geometric constraint; harmonic signal structure; intrinsic subspace CRB; likelihood function; maximum likelihood criterion; modified complex Stiefel manifold; multidimensional harmonic retrieval; quadratic ML criterion; regular arrays; shift-invariance condition; signal processing; stand-alone estimation; subspace estimation; Covariance matrices; Geometry; Harmonic analysis; Manifolds; Maximum likelihood estimation; Niobium (ID#: 16-9766)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7421437&isnumber=7421038
R. Gierlich, “Joint Estimation of Spatial and Motional Radar Target Parameters by Multidimensional Spectral Analysis,” 2015 16th International Radar Symposium (IRS), Dresden, 2015, pp. 95-101. doi: 10.1109/IRS.2015.7226282
Abstract: In this paper we investigate the suitability of modern multidimensional spectral analysis techniques for the estimation of radar target parameters. A short survey of both classical and alternative methods is given. The latter comprise high- and superresolution spectral analysis techniques like 2D-MUSIC, 2D-Minimum Variance, and recently published 2D-AR spectrum estimators. The performance characteristics are evaluated by systematic benchmarks, including spectral resolution, high signal dynamics, and multi-tone capacity. Field trials with a military C-band surveillance radar demonstrate the high potential of the alternative multidimensional spectrum estimators under real-world conditions with regard to the joint estimation of spatial and motional radar target parameters.
Keywords: military radar; multidimensional signal processing; parameter estimation; radar resolution; search radar; high signal dynamics; high-resolution spectral analysis technique; military C-band surveillance radar; multidimensional spectral analysis; multitone capacity; spatial and motional radar target parameter joint estimation; spectral resolution; superresolution spectral analysis technique; Benchmark testing; Discrete Fourier transforms; Multiple signal classification; Signal resolution; Signal to noise ratio; Spectral analysis (ID#: 16-9767)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226282&isnumber=7226207
J. Zhu, J. J. Bellanger, H. Shu and R. Le Bouquin Jeannès, “Investigating Bias in Non-Parametric Mutual Information Estimation,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 3971-3975. doi: 10.1109/ICASSP.2015.7178716
Abstract: In this paper, our aim is to investigate the control of bias accumulation when estimating mutual information from nearest neighbors non-parametric approach with continuously distributed random data. Using a multidimensional Taylor series expansion, a general relationship between the estimation bias and neighborhood size for plug-in entropy estimator is established without any assumption on the data for two different norms. When applied with the maximum norm, our theoretical analysis explains experimental simulation tests drawn in existing literature. In the experiments, two different strategies are tested and compared to estimate mutual information on independent and dependent simulated signals.
Keywords: entropy; multidimensional signal processing; nonparametric statistics; series (mathematics); bias accumulation control; multidimensional Taylor series expansion; nearest neighbors nonparametric approach; nonparametric mutual information estimation; plug-in entropy estimator; Approximation methods; Entropy; Estimation; Joints; Mutual information; Random variables; Entropy estimation; bias reduction; independence test; mutual information (ID#: 16-9768)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178716&isnumber=7177909
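The nearest-neighbors entropy estimation underlying this line of work can be sketched with a minimal Kozachenko-Leonenko estimator under the maximum norm. The code below is an illustrative stand-alone sketch, not the authors' implementation; the function names, the hand-rolled digamma helper, and the test data are ours:

```python
import numpy as np

def digamma(x):
    """Digamma function via recurrence plus asymptotic series (x > 0)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + np.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko entropy estimate (nats) with the maximum norm.

    x has shape (n_samples, n_dims); eps is the Chebyshev distance of
    each sample to its k-th nearest neighbour.
    """
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    dist = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
    eps = np.sort(dist, axis=1)[:, k]   # column 0 is the sample itself
    return digamma(n) - digamma(k) + d * np.mean(np.log(2.0 * eps))

rng = np.random.default_rng(0)
h = kl_entropy(rng.uniform(0.0, 1.0, size=(1000, 1)))  # true entropy: 0 nats
```

Mutual information then follows as I(X;Y) = H(X) + H(Y) - H(X,Y), which is where the bias accumulation studied in the paper enters: each plug-in entropy term contributes a neighborhood-size-dependent bias.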
S. Gupta and C. Caloz, “Multi-Dimensional Real-Time Spectrum Analysis for High-Resolution Signal Processing,” Electromagnetics in Advanced Applications (ICEAA), 2015 International Conference on, Turin, 2015, pp. 1412-1415. doi: 10.1109/ICEAA.2015.7297351
Abstract: Two types of real-time spectrum analyzers (RTSAs) are presented, which spectrally decompose a broadband electromagnetic signal in space in real time. The first system is based on an array of leaky-wave antennas (LWAs) fed by an array of phasers, where the dispersion of the phasers in conjunction with the natural frequency scanning of the leaky-wave antennas enables the 2-D frequency scan. The second system is a purely spatial system based on a dispersive metasurface, which operates on an incident wave and performs 2-D spectral decomposition. Their principles and basic characteristics are discussed in detail.
Keywords: antenna arrays; leaky wave antennas; multidimensional signal processing; spectral analysis; 2D frequency scan; 2D spectral decomposition; LWA; broadband electromagnetic signal; dispersive metasurface; high-resolution signal processing; Incident wave; leaky-wave antenna; multidimensional real-time spectrum analysis; natural frequency scanning; purely spatial system; Arrays; Broadband antennas; Broadband communication; Dispersion; Leaky wave antennas; Real-time systems; Spectral analysis
(ID#: 16-9769)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7297351&isnumber=7296849
Yuanxin Li, Yingsheng He, Yuejie Chi and Y. M. Lu, “Blind Calibration of Multi-Channel Samplers Using Sparse Recovery,” Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015 IEEE 6th International Workshop on, Cancun, 2015, pp. 33-36. doi: 10.1109/CAMSAP.2015.7383729
Abstract: We propose an algorithm for blind calibration of multi-channel samplers in the presence of unknown gains and offsets, which is useful in many applications such as multi-channel analog-to-digital converters, image super-resolution, and sensor networks. Using a subspace-based rank condition developed by Vandewalle et al., we obtain a set of linear equations with respect to complex harmonics whose frequencies are determined by the offsets, and the coefficients of each harmonic are determined by the discrete-time Fourier transforms of the outputs of each of the channels. By discretizing the offsets over a fine grid, this becomes a sparse recovery problem where the signal of interest is sparse with an additional structure: in each block there is only one nonzero entry. We propose a modified CoSaMP algorithm that takes this structure into account to estimate the offsets. Our algorithm is scalable to large numbers of channels and can also be extended to multi-dimensional signals. Numerical experiments demonstrate the effectiveness of the proposed algorithm.
Keywords: Fourier transforms; analogue-digital conversion; calibration; compressed sensing; image resolution; multidimensional signal processing; signal sampling; CoSaMP algorithm; blind calibration; complex harmonics; discrete-time Fourier transforms; image superresolution; linear equations; multichannel analog-to-digital converters; multichannel samplers; multidimensional signals; sensor networks; sparse recovery problem; subspace-based rank condition; Calibration; Conferences; Discrete Fourier transforms; Harmonic analysis; Image resolution; Signal resolution; Yttrium; CoSaMP; multi-channel sampling; sparse recovery (ID#: 16-9770)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7383729&isnumber=7383717
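The one-nonzero-entry-per-block structure described in the abstract can be illustrated with a toy greedy pursuit that selects at most one atom per block and re-fits by least squares. This is a simplified stand-in for the paper's modified CoSaMP, with a synthetic dictionary and signal of our own choosing:

```python
import numpy as np

def block_pursuit(A, y, block_size, n_active):
    """Greedy sparse recovery selecting at most one atom per block,
    with a joint least-squares re-fit after every selection
    (a simplified stand-in for the paper's modified CoSaMP)."""
    support, used_blocks = [], set()
    r = y.copy()
    for _ in range(n_active):
        corr = np.abs(A.conj().T @ r)
        for b in used_blocks:               # enforce one nonzero per block
            corr[b * block_size:(b + 1) * block_size] = 0.0
        j = int(np.argmax(corr))
        support.append(j)
        used_blocks.add(j // block_size)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=A.dtype)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100)) / np.sqrt(60)   # 10 blocks of 10 atoms
x_true = np.zeros(100)
x_true[[3, 47, 82]] = [2.0, -1.5, 1.0]             # one nonzero per active block
x_hat = block_pursuit(A, A @ x_true, block_size=10, n_active=3)
```

Restricting the search to unused blocks is what encodes the prior that each offset contributes exactly one harmonic frequency per block of the discretized grid.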
S. Ono, I. Yamada and I. Kumazawa, “Total Generalized Variation for Graph Signals,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 5456-5460. doi: 10.1109/ICASSP.2015.7179014
Abstract: This paper proposes a second-order discrete total generalized variation (TGV) for arbitrary graph signals, which we call the graph TGV (G-TGV). The original TGV was introduced as a natural higher-order extension of the well-known total variation (TV) and is an effective prior for piecewise smooth signals. Similarly, the proposed G-TGV is an extension of the TV for graph signals (G-TV) and inherits the capability of the TGV, such as avoiding the staircasing effect. Thus, the G-TGV is expected to be a fundamental building block for graph signal processing. We provide its applications to piecewise-smooth graph signal inpainting and 3D mesh smoothing with illustrative experimental results.
Keywords: graph theory; multidimensional signal processing; piecewise constant techniques; smoothing methods; 3D mesh smoothing; G-TGV; graph TGV; graph signal processing; piecewise-smooth graph signal inpainting; second-order discrete total generalized variation; Convex functions; Graph theory; Optimization; Signal processing; Smoothing methods; TV; Three-dimensional displays; Graph signal processing; proximal splitting; total generalized variation (TGV) (ID#: 16-9771)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179014&isnumber=7177909
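The first-order graph total variation that the G-TGV generalizes can be computed directly as a sum of absolute differences over edges. A minimal sketch on a path graph (illustrative code, not the authors'):

```python
import numpy as np

def graph_tv(signal, edges):
    """First-order graph total variation: sum of |s_i - s_j| over edges."""
    return float(sum(abs(signal[i] - signal[j]) for i, j in edges))

# A path graph on 5 nodes: a piecewise-constant signal has low TV,
# an oscillating one has high TV, so a TV prior favours the former.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
smooth = np.array([1.0, 1.0, 1.0, 4.0, 4.0])    # a single jump
rough = np.array([1.0, 4.0, 1.0, 4.0, 1.0])     # jumps at every edge
tv_smooth = graph_tv(smooth, edges)             # 3.0
tv_rough = graph_tv(rough, edges)               # 12.0
```

The G-TGV of the paper goes further by also penalizing second-order differences, which is what suppresses the staircasing that a pure first-order TV prior induces on smoothly varying regions.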
M. A. Sedaghat and R. Müller, “Multi-Dimensional Continuous Phase Modulation in Uplink of MIMO Systems,” Signal Processing Conference (EUSIPCO), 2015 23rd European, Nice, 2015, pp. 2446-2450. doi: 10.1109/EUSIPCO.2015.7362824
Abstract: Phase Modulation on the Hypersphere (PMH), in which the instantaneous sum power is constant, is considered. It is shown that for an i.i.d. Gaussian channel, the capacity achieving input distribution is approximately uniform on a hypersphere when the number of receive antennas is much larger than the number of transmit antennas. Moreover, in the case that channel state information is not available at the transmitter, it is proven that the capacity achieving input distribution is exactly uniform on a hypersphere. Mutual information between input and output of PMH with a discrete constellation for an i.i.d. Gaussian channel is evaluated numerically. Furthermore, a spherical spectral shaping method for PMH is proposed to obtain Continuous Phase Modulation on the Hypersphere (CPMH). In CPMH, the continuous time signal has a constant instantaneous sum power. It is shown that using a spherical low pass filter in the spherical domain followed by a Cartesian filter results in very good spectral properties.
Keywords: Gaussian channels; MIMO communication; antenna arrays; continuous phase modulation; low-pass filters; multidimensional signal processing; receiving antennas; transmitting antennas; CPMH; Cartesian filter; MIMO systems; channel state information; continuous time signal; hypersphere; i.i.d. Gaussian channel; multidimensional continuous phase modulation; mutual information; receive antennas; spectral properties; spherical domain; spherical low pass filter; spherical spectral shaping method; transmit antennas; MIMO; Mutual information; Peak to average power ratio; Pulse shaping methods; Radio transmitters; Receiving antennas; Phase modulation; continuous phase modulation (CPM); multiple-input multiple-output (MIMO) systems; peak-to-average power ratio (PAPR); single-RF transmitters; spherical filtering (ID#: 16-9772)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7362824&isnumber=7362087
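The constant instantaneous sum power of PMH means every transmit vector lies on a complex hypersphere; uniform samples on that sphere can be drawn by normalizing i.i.d. complex Gaussian vectors. A sketch of this sampling step (the function name and parameters are ours, not from the paper):

```python
import numpy as np

def pmh_symbols(n_symbols, n_antennas, power=1.0, rng=None):
    """Draw transmit vectors uniformly on the complex hypersphere, so the
    instantaneous sum power across the antennas is constant."""
    rng = rng if rng is not None else np.random.default_rng()
    g = (rng.standard_normal((n_symbols, n_antennas))
         + 1j * rng.standard_normal((n_symbols, n_antennas)))
    return np.sqrt(power) * g / np.linalg.norm(g, axis=1, keepdims=True)

x = pmh_symbols(1000, 4, power=2.0, rng=np.random.default_rng(0))
sum_power = np.sum(np.abs(x) ** 2, axis=1)   # identical for every symbol
```

Normalizing rotationally invariant Gaussian vectors is a standard way to obtain the uniform distribution on the hypersphere, which the abstract identifies as the (approximately) capacity-achieving input.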
A. K. Bhoi, K. S. Sherpa, D. Phurailatpam, J. S. Tamang and P. K. Giri, “Multidimensional Approaches for Noise Cancellation of ECG Signal,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 0066-0070. doi: 10.1109/ICCSP.2015.7322569
Abstract: In many situations, the Electrocardiogram (ECG) is recorded during ambulatory or strenuous conditions such that the signal is corrupted by different types of noise, sometimes originating from another physiological process of the body. Hence, noise removal is an important aspect of signal processing. Here, five different filters, i.e., median, Low Pass Butterworth, FIR, Weighted Moving Average and Stationary Wavelet Transform (SWT), with their filtering effect on noisy ECG are presented. Comparative analyses among these filtering techniques are described and statistical results are evaluated.
Keywords: electrocardiography; medical signal processing; multidimensional signal processing; signal denoising; wavelet transforms; ECG signal; FIR; ambulatory conditions; electrocardiogram; filtering techniques; low-pass butter; multidimensional approaches; noise cancellation; physiological process; signal processing; stationary wavelet transform; strenuous conditions; weighted moving average; Discrete wavelet transforms; Finite impulse response filters; Noise measurement; Active noise reduction; Electrocardiography; Filters; Noise cancellation (ID#: 16-9773)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322569&isnumber=7322423
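Two of the filters compared in the paper, the moving average and the median filter, can be sketched in a few lines. A synthetic sinusoid stands in for the ECG record; this is an illustration of the filter types, not the authors' code:

```python
import numpy as np

def moving_average(x, width):
    """Uniform moving-average smoother ('same'-length output)."""
    return np.convolve(x, np.ones(width) / width, mode="same")

def median_filter(x, width):
    """Sliding-window median; robust against impulsive artefacts."""
    pad = width // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + width]) for i in range(len(x))])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)            # stand-in for an ECG trace
noisy = clean + 0.3 * rng.standard_normal(500)
ma = moving_average(noisy, 11)
md = median_filter(noisy, 11)
```

Comparing mean-squared error against the clean reference, as the paper does statistically across filters, shows both smoothers reducing the additive noise while the median filter better preserves sharp transitions.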
I. Dokmanić, J. Ranieri and M. Vetterli, “Relax and Unfold: Microphone Localization with Euclidean Distance Matrices,” Signal Processing Conference (EUSIPCO), 2015 23rd European, Nice, 2015, pp. 265-269. doi: 10.1109/EUSIPCO.2015.7362386
Abstract: Recent methods for microphone position calibration work with sound sources at a priori unknown locations. This is convenient for ad hoc arrays, as it requires little additional infrastructure. We propose a flexible localization algorithm by first recognizing the problem as an instance of multidimensional unfolding (MDU) - a classical problem in Euclidean geometry and psychometrics - and then solving the MDU as a special case of Euclidean distance matrix (EDM) completion. We solve the EDM completion using a semidefinite relaxation. In contrast to existing methods, the semidefinite formulation allows us to elegantly handle missing pairwise distance information, but also to incorporate various prior information about the distances between the pairs of microphones or sources, bounds on these distances, or ordinal information such as “microphones 1 and 2 are more apart than microphones 1 and 15”. The intuition that this should improve the localization performance is justified by numerical experiments.
Keywords: acoustic generators; acoustic signal processing; calibration; mathematical programming; matrix algebra; microphone arrays; multidimensional signal processing; source separation; EDM completion; Euclidean distance matrices; Euclidean geometry; MDU; ad hoc arrays; flexible localization algorithm; localization performance improvement; microphone localization; microphone position calibration; missing pairwise distance information handling; multidimensional unfolding; psychometrics; semidefinite relaxation; sound sources; Calibration; Euclidean distance; Europe; Geometry; Microphones; Noise measurement; Symmetric matrices; Euclidean distance matrix; Microphone localization; array calibration; microphone array (ID#: 16-9774)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7362386&isnumber=7362087
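Once a complete Euclidean distance matrix is available, positions can be recovered up to a rigid motion with classical multidimensional scaling; the paper's semidefinite completion step for missing entries is omitted in this illustrative sketch:

```python
import numpy as np

def positions_from_edm(D, dim):
    """Recover coordinates (up to rotation/translation/reflection) from a
    complete Euclidean distance matrix via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centred points
    w, V = np.linalg.eigh(G)
    top = np.argsort(w)[::-1][:dim]          # largest 'dim' eigenpairs
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

rng = np.random.default_rng(0)
mics = rng.standard_normal((6, 2))           # 6 microphones in the plane
D = np.linalg.norm(mics[:, None] - mics[None, :], axis=2)
rec = positions_from_edm(D, dim=2)
D_rec = np.linalg.norm(rec[:, None] - rec[None, :], axis=2)
```

When entries are missing or only bounded, the paper replaces this eigendecomposition by a semidefinite relaxation, which is what allows pairwise bounds and ordinal constraints to be incorporated.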
A. Pedrouzo-Ulloa, J. R. Troncoso-Pastoriza and F. Pérez-González, “Multivariate Lattices for Encrypted Image Processing,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 1707-1711. doi: 10.1109/ICASSP.2015.7178262
Abstract: Images are inherently sensitive signals that require privacy-preserving solutions when processed in an untrusted environment, but their efficient encrypted processing is particularly challenging due to their structure and size. This work introduces a new cryptographic hard problem called m-RLWE (multivariate Ring Learning with Errors) extending RLWE. It gives support to lattice cryptosystems that allow for encrypted processing of multidimensional signals. We show an example cryptosystem and prove that it outperforms its RLWE counterpart in terms of security against basis-reduction attacks, efficiency and cipher expansion for encrypted image processing.
Keywords: cryptography; image processing; telecommunication security; cryptographic hard problem; encrypted image processing; lattice cryptosystem; m-RLWE; multidimensional signal processing; multivariate lattice; multivariate ring learning with error; privacy-preserving solution; Ciphers; Encryption; Image processing; Lattices; Polynomials; Homomorphic Processing; Image Encryption; Lattice Cryptography; Security (ID#: 16-9775)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178262&isnumber=7177909
A. Madanayake, “Keynote Address: Multi-Dimensional RF Signal-Processing and Analog/Digital Circuits,” Moratuwa Engineering Research Conference (MERCon), 2015, Moratuwa, Sri Lanka, 2015, pp. xxi-xxii. doi: 10.1109/MERCon.2015.7112306
Abstract: The Advanced Signal Processing Circuits (ASPC) laboratory at the University of Akron was established in 2010 to conduct basic research involving antenna array signal processing, reconfigurable systems, and multidimensional filters. The target applications span communications, cognitive radio, radio astronomy, microwave imaging, and radar. In this talk, we will discuss our areas of investigation, starting with an overview of the impending spectral scarcity problem. The accelerating growth of wireless systems is rapidly leading to scarcity of electromagnetic spectral bandwidth. The spectrum is a finite natural resource that is subject to oversubscription. Cognitive radio (CR) is an approach for mitigating spectral scarcity. In a CR, situational awareness is provided to a wireless network, allowing intelligent decisions on the use of electromagnetic spectral resources without being limited by licensing schemes. The technologies and approaches that would enable an unprecedented increase in wireless system capacity, data rates and connectivity are known as the 1000x Game.
Keywords: (not provided) (ID#: 16-9776)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7112306&isnumber=7112293
W. Hong, “Research Progress on Multidimensional Space Joint-Observation SAR,” 2015 40th International Conference on Infrared, Millimeter, and Terahertz waves (IRMMW-THz), Hong Kong, 2015, pp. 1-1. doi: 10.1109/IRMMW-THz.2015.7327714
Abstract: Summary form only given. Driven by application requirements and technology development, Synthetic Aperture Radar (SAR) imaging within the framework of multidimensional space joint-observation (polarimetry, frequency, angle, time series, etc.) has attracted broad interest in SAR imaging research. Recent research progress on Multidimensional Space Joint-observation SAR (MSJosSAR) in the National Key Lab of Microwave Imaging Technology, Institute of Electronics, Chinese Academy of Sciences (MITL-IECAS) is reported in this talk, where a sphere cluster coordinate system is defined as the modeling basis for the information fusion demanded by SAR multidimensional space joint-observation. Furthermore, the advantage of MSJosSAR is revealed by using Kronecker product decomposition for better understanding of target scattering mechanisms, together with the hypothesis and basic framework on which MSJosSAR signal processing relies. Tentative studies on multi-layer materials with the PolinSAR technique, anisotropic scattering mechanisms with multi-directional observation (curvilinear or circular SAR technique), and instantaneous time-variant targets with the array SAR technique are demonstrated as initial verification of the above hypothesis and framework. Finally, the values of joint observation space numbers for typical SAR configurations are enumerated, followed by a discussion of perspectives for future MSJosSAR work.
Keywords: decomposition; image fusion; radar imaging; radar polarimetry; spaceborne radar; synthetic aperture radar; Chinese Academy of Sciences Institute of Electronics; Kronecker product decomposition; MITL-IECAS; MSJosSAR imaging; National Key Lab of Microwave Imaging Technology; PolinSAR technique; anisotropic scattering mechanisms; circular SAR technique; curvilinear SAR technique; instantaneous time-variant target; multidimensional space joint-observation SAR imaging; multidirectional observation; multilayer material; polarimetry; signal processing; sphere cluster coordinate system; synthetic aperture radar imaging; target scattering mechanism; time series; Aerospace electronics; Microwave imaging; Microwave theory and techniques; Radar imaging; Scattering; Synthetic aperture radar (ID#: 16-9777)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7327714&isnumber=7327382
W. Yi, Z. Wang and J. Wang, “Multiple Antenna Wireless Communication Signal Blind Recovery Based on PARAFAC Decomposition,” Network and Information Systems for Computers (ICNISC), 2015 International Conference on, Wuhan, 2015, pp. 137-140. doi: 10.1109/ICNISC.2015.13
Abstract: Multiple antenna signals in a wireless communication system form a multidimensional array signal across time, frequency and space, which can be modelled and processed by tensor analysis methods. This paper focuses on the application of a PARAFAC-based tensor decomposition method to the problem of blind signal recovery in MIMO-OFDM wireless communication systems. The received signal of a MIMO-OFDM system can be viewed as a multidimensional array. PARAFAC decomposition is applied for blind recovery of the received signal with unknown CSI (Channel State Information) and CFO (Carrier Frequency Offset). Simulation results show that the proposed scheme performs better under high SNR and small symbol collections, verifying the blind recovery method based on PARAFAC decomposition.
Keywords: MIMO communication; OFDM modulation; antenna arrays; array signal processing; decomposition; radio networks; tensors; wireless channels; CFO; CSI; MIMO-OFDM wireless communication system; PARAFAC-based tensor decomposition method; SNR; carrier frequency offset; channel state information; multiple antenna wireless communication signal blind recovery; multiple dimension array signal; MIMO; OFDM; Receiving antennas; Tensile stress; Transmitting antennas; Wireless communication; PARAFAC decomposition; blind recovery; multiple antenna; wireless communication (ID#: 16-9778)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7311854&isnumber=7311794
J. Velten, A. Kummert, A. Gavriilidis and F. Boschen, “2-D Signal Theoretic Investigation of Background Elimination in Visual Tomographic Reconstruction for Safety and Enabling Health Applications,” 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, 2015, pp. 2377-2380. doi: 10.1109/ISCAS.2015.7169162
Abstract: Visual tomography is a relatively new method for 3D scene reconstruction. It is adopted from medical tomography and based on multiple images from different viewpoints of a scene. In this context, multidimensional spectra and filtering techniques are the key technology for the reconstruction process. Visual tomography differs from classical tomography in several aspects, which leads to new challenges with respect to mathematical description. The present paper examines the influence of image background on reconstruction quality. This background problem does not appear in classical medical tomography applications. In particular, the influence of multidimensional sampling and restrictions with respect to the number of view angles can be analyzed by using multidimensional signal theoretical concepts. The differences between ideal (no background) and real acquisition conditions are examined. Visual tomography has the potential for innovative new fields of applications, where Enabling Technologies for Societal Challenges are the focus of our considerations. Demographic change leads to a high interest in enabling mobility for elderly people with physical disabilities. Walking frames equipped with such technologies will be able to assist such people in everyday environments.
Keywords: computerised tomography; image reconstruction; mathematical analysis; medical image processing; 2D signal theoretic investigation; 3D scene reconstruction; background elimination; classical medical tomography applications; elderly people; filtering techniques; health applications; mathematical description; medical tomography; multidimensional spectra; physical disabilities; reconstruction process; reconstruction quality; visual tomographic reconstruction; Biomedical imaging; Cameras; Image reconstruction; Legged locomotion; Radio frequency; Tomography; Visualization (ID#: 16-9779)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169162&isnumber=7168553
M. Cvijetic and I. B. Djordjevic, “Multidimensional Aspects of Ultra High Speed Optical Networking,” 2015 17th International Conference on Transparent Optical Networks (ICTON), Budapest, 2015, pp. 1-4. doi: 10.1109/ICTON.2015.7193472
Abstract: A multidimensional approach to optical channel construction and parallelization in signal processing are the key factors in enabling high spectral efficiency of optical transmission links and the overall throughput increase in optical networks. Multidimensionality is mainly related to the employment of advanced modulation and multiplexing schemes operating in combination with advanced coding and detection techniques. In this paper we discuss the key multidimensional principles that can be used not only for increasing information capacity, but also as enablers of elastic and dynamic networking.
Keywords: channel capacity; multiplexing; optical fibre networks; optical information processing; optical modulation; spectral analysis; dynamic networking; elastic networking; information capacity; multidimensional approach; multiplexing scheme; optical channel construction; optical channel parallelization; optical modulation; optical signal processing; optical transmission link spectral efficiency; ultra high speed optical networking; Modulation; OFDM; Optical fiber networks; Optical fibers; Optical polarization; Optical signal processing (ID#: 16-9780)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193472&isnumber=7193279
Y. C. Lee, W. H. Fang and Y. T. Chen, “Improved HISS Technique for Multidimensional Harmonic Retrieval Problems,” Signal and Information Processing (ChinaSIP), 2015 IEEE China Summit and International Conference on, Chengdu, 2015, pp. 109-112. doi: 10.1109/ChinaSIP.2015.7230372
Abstract: This paper presents an accurate and efficient algorithm for multidimensional harmonic retrieval (MHR) problems. The new algorithm improves the previously addressed hierarchical signal separation (HISS) technique by using more robust constrained filtering instead of the projection matrices in the signal separation process. Thereby, the new algorithm not only requires lower complexity but can provide even superior performance, especially in low signal-to-noise ratio (SNR) scenarios. Moreover, the pairing of the estimated parameters is automatically achieved without extra overhead. While the new algorithm, like HISS, calls for low complexity, as only one-dimensional (1-D) parameters are estimated in each stage, it provides, as shown in the simulations, competitive performance compared with the main state-of-the-art works.
Keywords: filtering theory; matrix algebra; source separation; HISS technique; MHR problem; SNR; constrained filtering; hierarchical signal separation; multidimensional harmonic retrieval problem; projection matrix; signal-to-noise ratio; Manganese; Nickel; Silicon
(ID#: 16-9781)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230372&isnumber=7230339
G. Desodt, C. Adnet, A. Martin and R. Castaing, “Extract Before Detect, Coherent Extraction Based on Gridless Compressed Sensing,” Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 2015 3rd International Workshop on, Pisa, 2015, pp. 174-178. doi: 10.1109/CoSeRa.2015.7330287
Abstract: The goal of a radar processing chain is to extract a few target echoes from their noisy sum, made of thousands of complex numbers. Current radar chains are composed of matched filters (Pulse Compression, Doppler filters, Digital Beam Forming), noise/clutter estimation and thresholding, then extraction (hits clustering, location estimation). In this paper, the extraction function is performed using a Compressed Sensing approach. An important difference from the current extraction function is that it performs a real “coherent extraction” (complex subtraction in each burst) of targets from the observed signals. This is a key factor in increasing extraction capacity: coherent extraction can extract a higher number of target echoes than non-coherent extraction. This paper considers a multidimensional target domain and multidimensional input signals that are ambiguous in range and radial velocity, where target echoes fluctuate from burst to burst. The algorithm used to recover the sparse representation is Orthogonal Matching Pursuit (OMP) where the dictionary matrix is continuous over the target domain, therefore overcoming the grid problem: “gridless OMP”.
Keywords: array signal processing; compressed sensing; iterative methods; matched filters; pulse compression; radar clutter; radar detection; radar signal processing; signal representation; time-frequency analysis; Doppler filters; dictionary matrix; digital beam forming; gridless OMP; gridless compressed sensing; multidimensional input signals; multidimensional target domain; noise-clutter estimation; noncoherent extraction; orthogonal matching pursuit; pulse compression; radar processing chain; sparse representation; target echoes; Coherence; Compressed sensing; Conferences; Matching pursuit algorithms; Radar remote sensing; OMP; ambiguity solving; block; complex; extract before detect; extraction; gridless; off the grid; radar (ID#: 16-9782)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330287&isnumber=7330246
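The grid-based OMP that the paper's gridless variant extends can be sketched on an on-grid DFT dictionary; each iteration coherently subtracts the jointly re-fitted echoes from the residual. This is illustrative code with synthetic echoes, not the authors' chain:

```python
import numpy as np

def omp(A, y, n_targets):
    """Orthogonal Matching Pursuit: pick the atom most correlated with the
    residual, jointly re-fit all picked atoms by least squares, and
    coherently subtract the fitted echoes from the observation."""
    support, r = [], y.copy()
    for _ in range(n_targets):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        amp, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ amp
    return support, amp, r

n = 64
grid = np.arange(n)
A = np.exp(2j * np.pi * np.outer(grid, grid) / n) / np.sqrt(n)  # DFT atoms
y = 3.0 * A[:, 10] + 1.0 * A[:, 25]          # two on-grid target echoes
support, amp, r = omp(A, y, n_targets=2)
```

The paper replaces this fixed grid with a dictionary that is continuous over the target domain, avoiding the mismatch between true target parameters and grid points.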
J. A. T. Machado and A. M. Lopes, “Analysis and Visualization of Complex Phenomena,” Computational Intelligence and Informatics (CINTI), 2015 16th IEEE International Symposium on, Budapest, 2015, pp. 135-140. doi: 10.1109/CINTI.2015.7382909
Abstract: In this paper we study natural and man-made complex phenomena by means of signal processing and fractional calculus. The outputs of the complex phenomena are time series to be interpreted as manifestations of the system dynamics. In a first step we use the Jensen-Shannon divergence to compare real-world signals. We then adopt multidimensional scaling and visualization tools for data clustering and analysis. Classical and generalized (fractional) Jensen-Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to better understand distinct phenomena.
Keywords: data analysis; data visualisation; pattern clustering; signal processing; time series; complex phenomena visualization; data clustering; fractional calculus; generalized fractional Jensen-Shannon divergence; man-made complex phenomena; multidimensional scaling; natural complex phenomena; signal processing; system dynamics; time series; visualization tools; Data visualization; Economic indicators; Electrocardiography; Entropy; IP networks; Indium tin oxide; Internet (ID#: 16-9783)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7382909&isnumber=7382894
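The classical (non-fractional) Jensen-Shannon divergence that serves as the paper's starting point can be computed directly for discrete distributions; a minimal illustrative sketch:

```python
import numpy as np

def jensen_shannon(p, q):
    """Classical Jensen-Shannon divergence (nats) between two
    discrete probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0          # 0 * log 0 is taken as 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = jensen_shannon([0.5, 0.5], [0.5, 0.5])      # identical: 0
disjoint = jensen_shannon([1.0, 0.0], [0.0, 1.0])  # disjoint: log 2
```

Being symmetric and bounded between 0 and log 2 (nats), this divergence gives the pairwise dissimilarities that the paper then feeds into multidimensional scaling for clustering and visualization.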
Y. Li and T. Yu, “EEG-based Hybrid BCIs and Their Applications,” Brain-Computer Interface (BCI), 2015 3rd International Winter Conference on, Sabuk, 2015, pp. 1-4. doi: 10.1109/IWW-BCI.2015.7073035
Abstract: In this paper, we presented two hybrid brain computer interfaces (BCIs), one combining motor imagery (MI) and P300 and another combining P300 and steady state visual evoked potential (SSVEP), and their applications. An important issue in BCI research is multidimensional control. Potential applications include BCI controlled computer mouse, document and email processing, web browser, wheelchair and neuroprosthesis. The challenge for EEG-based multidimensional control is to obtain multiple independent control signals from the noisy EEG data. For this purpose, hybrid BCIs may yield better performance than BCIs that use only one type of brain pattern. In this project, we first developed a hybrid system for 2-D cursor control. In our system, two independent signals based on MI and P300 were produced from EEG for the vertical and horizontal movement control of the cursor respectively, and the cursor can be moved from an arbitrary initial position to a randomly given target position. Furthermore, a hybrid feature was extracted for selecting the target-of-interest and rejecting the target-of-no-interest, as quickly and accurately as possible. In this way, a BCI mouse was implemented. Then an internet browser and a mail client were developed based on the BCI mouse. Moreover, we extended this hybrid BCI system to control a virtual car/wheelchair. On the other hand, we also developed a P300 and SSVEP-based hybrid BCI not only to improve the classification performance, but also to validate the possibility of clinical applications, e.g., detection of residual cognitive function and covert awareness in patients with disorders of consciousness.
Keywords: brain-computer interfaces; electroencephalography; feature extraction; medical signal processing; signal classification; visual evoked potentials; 2D cursor control; BCI controlled computer mouse; BCI mouse; EEG-based hybrid BCI; EEG-based multidimensional control; internet browser; MI; P300; SSVEP; classification performance improvement; clinical applications; consciousness disorder patients; covert awareness detection; horizontal movement control; hybrid brain computer interfaces; hybrid feature extraction; mail client; motor imagery; multiple independent control signals; noisy EEG data; residual cognitive function detection; steady state visual evoked potential; target position; target-of-interest selection; target-of-no-interest rejection; vertical movement control; virtual car; virtual wheelchair; Accuracy; Brain-computer interfaces; Browsers; Control systems; Mice; Postal services; Wheelchairs; Hybrid BCIs; Multidimentional control; Steady state visual evoked potential; awareness detection (ID#: 16-9784)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073035&isnumber=7073013
Y. Zhang, L. Comerford, M. Beer and I. Kougioumtzoglou, “Compressive Sensing for Power Spectrum Estimation of Multi-Dimensional Processes Under Missing Data,” 2015 International Conference on Systems, Signals and Image Processing (IWSSIP), London, 2015, pp. 162-165. doi: 10.1109/IWSSIP.2015.7314202
Abstract: A compressive sensing (CS) based approach is applied in conjunction with an adaptive basis re-weighting procedure for multi-dimensional stochastic process power spectrum estimation. In particular, the problem of sampling gaps in stochastic process records, occurring for reasons such as sensor failures, data corruption, and bandwidth limitations, is addressed. Specifically, due to the fact that many stochastic process records such as wind, sea wave and earthquake excitations can be represented with relative sparsity in the frequency domain, a CS framework can be applied for power spectrum estimation. By relying on signal sparsity, and the assumption that multiple records are available upon which to produce a spectral estimate, it has been shown that a re-weighted CS approach succeeds in estimating power spectra with satisfactory accuracy. Of key importance in this paper is the extension from one-dimensional vector processes to a broader class of problems involving multidimensional stochastic fields. Numerical examples demonstrate the effectiveness of the approach when records are subjected to up to 75% missing data.
Keywords: compressed sensing; spectral analysis; stochastic processes; adaptive basis reweighting procedure; compressive sensing; missing data; multidimensional process; multidimensional stochastic fields; multidimensional stochastic process; one-dimensional vector process; power spectrum estimation; Compressed sensing; Estimation; Frequency-domain analysis; Sensors; Sparse matrices; Spectral analysis; Stochastic processes; Compressive Sensing; Missing Data; Power Spectrum; Stochastic Process (ID#: 16-9785)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314202&isnumber=7313917
D. Misra, S. Deb and S. Joardar, “Efficient Design of Quadrature Mirror Filter Bank for Audio Signal Processing Using Craziness based Particle Swarm Optimization Technique,” Computer, Communication and Control (IC4), 2015 International Conference on, Indore, 2015, pp. 1-5. doi: 10.1109/IC4.2015.7375563
Abstract: In this paper, an improved version of Particle Swarm Optimization, called Craziness based Particle Swarm Optimization (CRPSO), is demonstrated for designing a two-channel Quadrature Mirror Filter (QMF) bank that processes an audio signal with nearly perfect reconstruction at the output. Apart from achieving better control over the cognitive and social components of standard PSO, the proposed CRPSO incorporates a craziness parameter into the velocity equation of PSO, which gives each particle a predefined craziness probability so as to maintain the diversity of the swarm. This mutation of the velocity equation not only speeds up the search in the multidimensional search space but also drives the solution close to the global optimum. The algorithm's performance is compared with that of traditional PSO. Simulation results show that the proposed CRPSO algorithm outperforms its counterpart (PSO) not only in output quality, i.e., sharpness at cut-off, passband ripple, and stopband attenuation, but also in convergence speed, with assured fidelity.
Keywords: audio signal processing; channel bank filters; particle swarm optimisation; CRPSO algorithm; QMF Bank; cognitive components; craziness based particle swarm optimization technique; craziness probability; global optimal solution; multidimensional search space; pass band ripple; quadrature mirror filter bank; social components; standard PSO; stop band attenuation; two-channel quadrature mirror filter bank; Filter banks; Finite impulse response filters; IIR filters; Mirrors; Particle swarm optimization; Power filters; CRPSO; FIR; PSO; QMF (ID#: 16-9786)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375563&isnumber=7374772
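To make the "craziness" mechanism concrete, here is a minimal pure-Python sketch of a PSO variant that occasionally randomizes a particle's velocity with a predefined probability, as the abstract describes. The parameter values (`p_craz`, `v_craz`, inertia and acceleration coefficients) and the sphere test function are illustrative assumptions, not the authors' QMF design settings.

```python
import random

def crpso(fitness, dim, n_particles=20, iters=200, p_craz=0.05, seed=1):
    """Minimize `fitness` with a PSO variant that adds a 'craziness'
    perturbation to particle velocities with probability p_craz."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2, v_craz = 0.7, 1.5, 1.5, 0.5   # illustrative coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Craziness: occasionally randomize the velocity component
                # to keep the swarm diverse.
                if rng.random() < p_craz:
                    vel[i][d] = rng.uniform(-v_craz, v_craz)
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Minimize the sphere function; the optimum is 0 at the origin.
best, best_f = crpso(lambda x: sum(v * v for v in x), dim=2)
```

In a filter-design setting, `fitness` would instead measure reconstruction error, passband ripple, and stopband attenuation of the candidate QMF coefficients.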
J. Velten, A. Kummert, A. Gavriilidis and K. Galkowski, “Application Specific Stability of 3-D Roesser-like Model Realizations,” Multidimensional (nD) Systems (nDS), 2015 IEEE 9th International Workshop on, Vila Real, 2015, pp. 1-6. doi: 10.1109/NDS.2015.7332651
Abstract: Stability of multidimensional (k-D) systems is still a challenging field of work. Well known and established stability measures may lead to complex mathematical problems, while simple tests are restricted to special cases of n-D systems. A new stability test for certain discrete 3-D system realizations given in a Roesser-like model description is proposed. This test is suitable for signals bounded with respect to all three coordinate directions, like spatio-temporal video image signals. The 3-D system is observed in real operation, i.e., considering a sequence of processing, which leads to a 1-D state space description and allows application of a 1-D stability test. Since applying 1-D stability tests to higher-dimensional problems is not a completely new approach, the main contribution of this paper is the regular and well-structured decomposition of a discrete 3-D system description into a classical 1-D state space description.
Keywords: discrete systems; stability; state-space methods; 3D Roesser-like model realization; discrete 3D system realization; multidimensional k-D system stability; state space description; Computational modeling; Electronic mail; MIMO; Mathematical model; Solid modeling; Stability criteria (ID#: 16-9787)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332651&isnumber=7332630
V. M. Chubich, O. S. Chernikova and E. A. Beriket, “Specificities of the Design of Input Signals for Models Gaussian Linear Systems,” Control and Communications (SIBCON), 2015 International Siberian Conference on, Omsk, 2015, pp. 1-6. doi: 10.1109/SIBCON.2015.7147271
Abstract: Some theoretical and applied aspects of the design of input signals for multidimensional stochastic linear discrete and continuous-discrete systems described by state-space models are considered. Original results are obtained for the case in which the parameters of the mathematical models to be estimated appear in the covariance matrices of the process noise and measurement noise. It is shown that in such a case the Fisher information matrix does not depend on the input signal, and experiment design is therefore not useful.
Keywords: Gaussian processes; covariance analysis; discrete systems; stochastic processes; Fisher information matrix; Gaussian linear systems; continuous-discrete systems; covariance matrices; design input signals; mathematical models; measurement noise; multidimensional stochastic linear discrete systems; process noise; state space models; Covariance matrices; Kalman filters; Linear systems; Mathematical model; Noise; Noise measurement; Stochastic processes; continuous - discrete system; discrete system; experiment design (ID#: 16-9788)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7147271&isnumber=7146959
T. Liu, I. B. Djordjevic and Mo Li, “Multidimensional Signal Constellation Design for Channel Dominated with Nonlinear Phase Noise,” Telecommunication in Modern Satellite, Cable and Broadcasting Services (TELSIKS), 2015 12th International Conference on, Nis, 2015, pp. 133-136. doi: 10.1109/TELSKS.2015.7357754
Abstract: Multidimensional signal constellation sets suitable for channels dominated by nonlinear phase noise are proposed, with the cumulative log-likelihood function used as the optimization criterion. We also demonstrate that our proposed signal constellation sets significantly outperform polarization-switched QPSK and sphere-packing constellations in the 8-ary and 16-ary cases.
Keywords: optimisation; phase noise; signal processing; cumulative log likelihood function; multidimensional signal constellation; nonlinear phase noise; optimization criterion; polarization switched QPSK; sphere packing constellations; Algorithm design and analysis; Constellation diagram; Optical fiber communication; Optical fiber polarization; Parity check codes; Phase noise; Fiber optics and optical communications; Forward error correction; Low-density parity-check codes; Modulations; Optimal signal constellation design (ID#: 16-9789)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357754&isnumber=7357713
V. C. K. Cheung, K. Devarajan, G. Severini, A. Turolla and P. Bonato, “Decomposing Time Series Data by a Non-Negative Matrix Factorization Algorithm with Temporally Constrained Coefficients,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 3496-3499. doi: 10.1109/EMBC.2015.7319146
Abstract: The non-negative matrix factorization algorithm (NMF) decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to 2 multi-dimensional electromyographic data sets collected from the human upper-limb to identify muscle synergies. We found that because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable, and closer to the numbers previously determined using ad hoc measures.
Keywords: electromyography; matrix decomposition; medical signal processing; numerical analysis; time series; vectors; Akaike Information Criterion; NMF update rules; coefficient temporal constraint; human upper limb; independently distributed data dimensions; independently distributed data samples; multidimensional electromyographic data sets; muscle synergy identification; nonnegative basis vectors; nonnegative matrix factorization algorithm; temporal correlations; temporally constrained coefficients; time series data analysis; time series data decomposition; vector coefficient; Data mining; Data models; Distributed databases; Electromyography; Matrix decomposition; Muscles; Time series analysis (ID#: 16-9790)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7319146&isnumber=7318236
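For reference, a minimal pure-Python sketch of the baseline Lee-Seung multiplicative-update NMF that the abstract's method modifies. The paper's temporal constraint on the coefficient rows is noted in the docstring but not implemented here, and the rank-1 example data are illustrative.

```python
import random

def nmf(V, k, iters=200, seed=0):
    """Lee-Seung multiplicative-update NMF (Frobenius loss), V ~ W H.
    The paper adds a temporal smoothness constraint on the rows of H to
    the update rules; only the baseline updates are shown here."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        for a in range(k):
            for j in range(m):
                H[a][j] *= num[a][j] / (den[a][j] + eps)   # H update
        WH = matmul(W, H)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        for i in range(n):
            for a in range(k):
                W[i][a] *= num[i][a] / (den[i][a] + eps)   # W update
    return W, H

def frob_err(V, W, H):
    """Squared Frobenius reconstruction error of V ~ W H."""
    err = 0.0
    for i in range(len(V)):
        for j in range(len(V[0])):
            vij = sum(W[i][a] * H[a][j] for a in range(len(H)))
            err += (V[i][j] - vij) ** 2
    return err

# A rank-1 non-negative matrix should be recovered almost exactly with k=1.
V = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]
W, H = nmf(V, k=1)
```

The multiplicative form guarantees that W and H stay non-negative throughout, which is why the updates use element-wise scaling rather than additive gradient steps.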
T. D. Pham and M. Oyama-Higa, “Photoplethysmography Technology and Its Feature Visualization for Cognitive Stimulation Assessment,” Industrial Technology (ICIT), 2015 IEEE International Conference on, Seville, 2015, pp. 1735-1740. doi: 10.1109/ICIT.2015.7125348
Abstract: Therapeutic communication is recognized as an alternative cognitive stimulation for people with mental disorders. It is important to measure the effectiveness of such therapeutic treatments. In this paper, we present the use of photoplethysmography (PPG) technology to synchronize communication signals between the care-giver and people with dementia. To gain insights into the communication effect, the largest Lyapunov exponents are extracted from the PPG signals, which are then analyzed by multidimensional scaling to visualize the signal similarity/dissimilarity between the care-giver and participants. Experimental results show that the proposed approach is promising as a useful tool for visual assessment of the influence of the therapy over participants.
Keywords: cognition; data visualisation; feature extraction; medical disorders; medical signal processing; patient care; patient treatment; photoplethysmography; PPG signals; alternative cognitive stimulation; care-giver; cognitive stimulation assessment; communication effect; communication signals; dementia; feature visualization; largest Lyapunov exponents; mental disorders; multidimensional scaling; photoplethysmography technology; signal dissimilarity; signal similarity; therapeutic communication; therapeutic treatments; visual assessment; Data visualization; Dementia; Feature extraction; Mathematical model; Senior citizens; Synchronization; Data visualization; Largest Lyapunov exponent; Mental health; Multidimensional scaling; Photoplethysmography; Synchronized signal processing (ID#: 16-9791)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125348&isnumber=7125066
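As a toy illustration of the largest-Lyapunov-exponent feature the abstract extracts from PPG signals, the following sketch estimates the exponent of the 1-D logistic map, whose exact value at r = 4 is ln 2. This is an illustrative analogue, not the authors' PPG pipeline, which works on measured physiological time series.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=10000, burn=100):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{t+1} = r x_t (1 - x_t) as the average of log|f'(x_t)| along
    the orbit, after discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))   # log of local stretching
        x = r * x * (1 - x)
    return acc / n

lam = lyapunov_logistic()
# For r = 4 the exact exponent is ln 2 (chaotic regime).
```

A positive estimate indicates sensitive dependence on initial conditions, which is the property used to characterize the PPG dynamics in the abstract.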
V. Vinoth, M. Muthaiah and M. Chitra, “An Efficient Pipeline Inspection on Heterogeneous Relay Nodes in Wireless Sensor Networks,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 0730-0734. doi: 10.1109/ICCSP.2015.7322586
Abstract: Wireless sensor networks (WSNs) have become an effective technique for monitoring water, oil, and gas pipelines. Wireless sensor nodes play a growing role in multidimensional applications such as tactical monitoring, weather monitoring, and battlefield detection. The combined wireless nodes create a network in a distributed manner and can provide accurate results both aboveground and underground. However, there are challenges in propagating the signal underground. In underground pipeline communications, sensor nodes detect the signal and forward it to a relay node placed aboveground. Our study is designed to reduce the propagation delay and to allocate channels through optimal relay node selection in a heterogeneous network.
Keywords: channel allocation; computerised monitoring; inspection; pipelines; relay networks (telecommunication); signal detection; underground communication; wireless channels; wireless sensor networks; WSN; gas pipeline; heterogeneous relay node; multidimensional application; oil pipeline; pipeline inspection; signal propagation; underground pipeline communication; water pipeline; wireless sensor network; Delays; Media; Monitoring; Pipelines; Relays; Reliability; Wireless sensor networks; heterogeneous; leak detection; pipelines (ID#: 16-9792)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322586&isnumber=7322423
Asit Kumar Subudhi, Subhranshu Sekhar Jena and S. K. Sabut, “Delineation of Infarct Lesions by Multi-Dimensional Fuzzy C-Means of Acute Ischemic Stroke Patients,” Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, Visakhapatnam, 2015, pp. 1-5. doi: 10.1109/EESCO.2015.7253655
Abstract: Lesion size in diffusion-weighted imaging (DWI) of magnetic resonance (MR) images is an important clinical parameter for assessing the lesion area in ischemic stroke. Manual delineation of stroke lesions is time-consuming, highly user-dependent, and difficult to perform in areas with indistinct borders. In this paper we present a segmentation process that detects lesions by separating non-enhancing brain lesions from healthy tissue in DWI MR images, to aid in tracking the lesion area over time. Lesion segmentation by Fast Fuzzy C-means was performed on DWI images obtained from patients following ischemic stroke, and the lesions were delineated and segmented by multi-dimensional Fuzzy C-Means (FCM). A high visual similarity of lesions was observed in the segmented images obtained by this method. The key elements are accurately segmenting the brain images of stroke patients and measuring lesion size pixel-wise to define areas with hypo- or hyper-intense signals. The relative area of the affected lesion is also measured with respect to a normal brain image.
Keywords: biodiffusion; biological tissues; biomedical MRI; brain; fuzzy set theory; image segmentation; medical image processing; DWI MR images; FCM; acute ischemic stroke patients; brain images; diffusion weighted imaging; healthy tissues; hyper-intense signals; hypointense signals; infarct lesions; lesion segmentation; lesion size; magnetic resonance images; manual delineation; multidimensional fuzzy C-means; nonenhancing brain lesion; segmentation process; stroke lesion; Brain; Clustering algorithms; Image segmentation; Lesions; Magnetic resonance imaging; Visualization; DWI; Lesion; Magnetic resonance imaging; ischemic stroke (ID#: 16-9793)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253655&isnumber=7253613
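A one-step sketch of the fuzzy C-means membership update underlying this kind of lesion segmentation. Here 1-D intensities stand in for image pixels, and the centers and the fuzzifier m are illustrative choices, not values from the paper.

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy C-means membership update:
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    Returns one membership row (summing to 1) per point."""
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        row = []
        for i in range(len(centers)):
            if d[i] == 0.0:   # point coincides with a center: crisp membership
                row = [1.0 if j == i else 0.0 for j in range(len(centers))]
                break
            row.append(1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                 for j in range(len(centers))))
        u.append(row)
    return u

# Two intensity clusters: 'healthy' around 0.2 and 'lesion' around 0.9.
U = fcm_memberships([0.1, 0.2, 0.85, 0.95], centers=[0.2, 0.9])
```

In the full algorithm, this membership step alternates with a center-update step until convergence; hypo- or hyper-intense pixels end up with high membership in the lesion cluster.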
M. Mardani and G. B. Giannakis, “Online Sketching for Big Data Subspace Learning,” Signal Processing Conference (EUSIPCO), 2015 23rd European, Nice, 2015, pp. 2511-2515. doi: 10.1109/EUSIPCO.2015.7362837
Abstract: Sketching (a.k.a. subsampling) high-dimensional data is a crucial task for facilitating the data acquisition process, e.g., in magnetic resonance imaging, and for rendering 'Big Data' analytics affordable. The multidimensional nature of the data and the need for real-time processing, however, pose major obstacles. To cope with these challenges, the present paper brings forth a novel real-time sketching scheme that exploits the correlations across the data stream to learn a latent subspace based upon tensor PARAFAC decomposition 'on the fly.' Leveraging the online subspace updates, we introduce a notion of importance score, which is subsequently adapted into a randomization scheme to predict a minimal subset of important features to acquire in the next time instant. Preliminary tests with synthetic data corroborate the effectiveness of the novel scheme relative to uniform sampling.
Keywords: Big Data; data acquisition; learning (artificial intelligence); tensors; Big Data analytics; Big Data subspace learning; PARAFAC decomposition; data acquisition process; data processing; importance score; latent subspace; online subspace updates; randomization scheme; realtime sketching scheme; Big data; Europe; Magnetic resonance imaging; Matrix decomposition; Real-time systems; Signal processing; Tensile stress; Tensor; randomization; streaming data; subspace learning (ID#: 16-9794)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7362837&isnumber=7362087
F. A. B. Hamzah, T. Yoshida, M. Iwahashi and H. Kiya, “Channel Scaling for Rounding Noise Reduction in Minimum Lifting 3D Wavelet Transform,” 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, 2015, pp. 888-891. doi: 10.1109/APSIPA.2015.7415398
Abstract: An integer transform is used in lossless-lossy coding since it can reconstruct the input signal without any loss at the output of the backward transform. Recently, its number of lifting steps, as well as the delay from input to output, has been reduced by introducing multi-dimensional memory accessing. However, the quality of the reconstructed signal in lossy coding then has an upper bound in the rate-distortion curve, because the noise generated by the rounding operations in each lifting step inside the integer transform does not contribute to data compression. This paper reduces the rounding noise observed at the output of the integer transform by introducing channel scaling inside the transform. Experiments show that the proposed method improves the quality of the decoded signal in lossy coding mode.
Keywords: channel coding; data compression; rate distortion theory; signal denoising; signal reconstruction; wavelet transforms; backward transform; channel scaling; data compression; input signal reconstruction; integer transform; lossless-lossy coding; minimum lifting 3D wavelet transform; multidimensional memory access; rate distortion curve; rounding noise reduction; upper bound; Bit rate; Decision support systems (ID#: 16-9795)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7415398&isnumber=7415286
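The lossless-despite-rounding property of integer lifting that the abstract builds on can be seen in a minimal Haar-style (S-transform) example. This is a generic 1-D sketch, not the paper's minimum-lifting 3-D wavelet transform or its channel scaling: each lifting step rounds, yet the inverse repeats the same rounded computation and so reconstructs exactly.

```python
def forward_lift(x):
    """Integer Haar-style lifting (S-transform): a predict step followed
    by an update step with rounding (floor shift). Lossless despite the
    rounding, because the inverse can undo each rounded step exactly."""
    assert len(x) % 2 == 0
    d = [x[2 * i + 1] - x[2 * i] for i in range(len(x) // 2)]   # predict
    s = [x[2 * i] + (d[i] >> 1) for i in range(len(x) // 2)]    # update (rounded)
    return s, d

def inverse_lift(s, d):
    """Undo the update, then the predict, recovering the input exactly."""
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)
        x += [even, even + di]
    return x

signal = [10, 13, 7, 7, 120, 119, 0, 255]
s, d = forward_lift(signal)
```

In lossy mode the rounded terms act as injected noise in the detail/approximation channels, which is the noise the paper's channel scaling targets.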
K. Konopko, Y. P. Grishin and D. Jańczak, “Radar Signal Recognition Based on Time-Frequency Representations and Multidimensional Probability Density Function Estimator,” Signal Processing Symposium (SPSympo), 2015, Debe, 2015, pp. 1-6. doi: 10.1109/SPS.2015.7168292
Abstract: Radar signal recognition can be accomplished by exploiting the particular features of a radar signal observed in the presence of noise. The features are the result of slight radar component variations and act as an individual signature. The paper describes a radar signal recognition algorithm based on time-frequency analysis, noise reduction, and statistical classification procedures. The proposed method is based on the Wigner-Ville distribution with a two-dimensional denoising filter, followed by a probability density function estimator that extracts the feature vector. Finally, a statistical classifier is used for radar signal recognition. Numerical simulation results for P4-coded signals are presented.
Keywords: Wigner distribution; probability; radar signal processing; Wigner-Ville distribution; multidimensional probability density function estimator; noise reduction; radar signal recognition; statistical classifier; time-frequency representations; two-dimensional denoising filter; Algorithm design and analysis; Feature extraction; Noise; Noise reduction; Radar; Signal processing algorithms; Time-frequency analysis; Wigner-Ville Distribution; time-frequency analysis (ID#: 16-9796)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168292&isnumber=7168254
S. Kumar and K. Rajawat, “Velocity-Assisted Multidimensional Scaling,” 2015 IEEE 16th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Stockholm, 2015, pp. 570-574. doi: 10.1109/SPAWC.2015.7227102
Abstract: This paper considers the problem of cooperative localization in mobile networks. In static networks, node locations can be obtained from pairwise distance measurements using the classical multidimensional scaling (MDS) approach. This paper introduces a modified MDS framework that also incorporates the relative velocity measurements available in mobile networks. The proposed cost function is minimized via a provably convergent, low-complexity majorization algorithm similar to SMACOF. The algorithm incurs low computational and communication cost, and accommodates practical constraints such as missing measurements and variable node velocities. Simulation results corroborate the performance gains obtained by the proposed algorithm over state-of-the-art localization algorithms.
Keywords: cooperative communication; mobile communication; cooperative localization; mobile networks; node locations; pairwise distance measurements; relative velocity measurements; velocity-assisted multidimensional scaling; Accuracy; Doppler effect; Measurement uncertainty; Noise measurement; Stress; Velocity measurement; Wireless sensor networks; Localization; iterative majorization-minimization; mobile sensor networks; multidimensional scaling (MDS) (ID#: 16-9797)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7227102&isnumber=7226983
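Since the proposed majorization algorithm is described as similar to SMACOF, here is a minimal pure-Python sketch of plain (static, unweighted) SMACOF recovering a unit square from its pairwise distances. The paper's velocity terms are not included, and the perturbed fixed start is an illustrative choice to keep the sketch deterministic.

```python
import math
import random

def smacof(delta, X0, iters=200):
    """Plain SMACOF stress majorization for MDS: given target pairwise
    dissimilarities delta[i][j] and an initial layout X0, iterate the
    Guttman transform X <- (1/n) B(X) X."""
    n, dim = len(delta), len(X0[0])
    X = [row[:] for row in X0]

    def dist(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in range(dim)))

    for _ in range(iters):
        B = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    dij = dist(X[i], X[j])
                    B[i][j] = -delta[i][j] / dij if dij > 1e-12 else 0.0
            B[i][i] = -sum(B[i][j] for j in range(n) if j != i)
        X = [[sum(B[i][j] * X[j][k] for j in range(n)) / n for k in range(dim)]
             for i in range(n)]
    return X

# Recover a unit square from its pairwise distances (diagonals sqrt(2)),
# starting from a perturbed layout.
r2 = math.sqrt(2)
delta = [[0, 1, r2, 1], [1, 0, 1, r2], [r2, 1, 0, 1], [1, r2, 1, 0]]
rng = random.Random(0)
X0 = [[c + rng.uniform(-0.3, 0.3) for c in p]
      for p in ((0, 0), (1, 0), (1, 1), (0, 1))]
X = smacof(delta, X0)
```

Each Guttman step is the minimizer of a quadratic majorant of the stress, which is why the iteration converges monotonically; the paper's cost function adds velocity-mismatch terms to the stress being majorized.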
G. Shishkov, N. Popova, K. Alexiev and P. Koprinkova-Hristova, “Investigation of Some Parameters of a Neuro-Fuzzy Approach for Dynamic Sound Fields Visualization,” Innovations in Intelligent SysTems and Applications (INISTA), 2015 International Symposium on, Madrid, 2015, pp. 1-8. doi: 10.1109/INISTA.2015.7276769
Abstract: The present paper gives a detailed investigation of some parameters of our recently proposed approach for multidimensional data clustering aimed at dynamic sound field visualization. These include: the number of direction-selective cells (MT neurons) applied as filters in the first step of feature extraction from the raw data; the size of the ESN reservoir used in the second step of feature extraction; the selection criteria for a proper 2D projection of the original multidimensional data; and the number of clusters into which the data are separated. The tests were performed on real experimental data collected by a microphone array (called further an "acoustic camera") built from 18 microphones placed irregularly on a wheel antenna with a photo camera at its center. Using our approach we created dynamic "sound pictures" of the data collected by the acoustic camera and compared them with the static "sound picture" created by the original software of the equipment. During the investigation we also discovered that our algorithm is able to distinguish between two sound sources, a task that the original software of the acoustic camera did not perform as well.
Keywords: acoustic signal processing; data visualisation; feature extraction; fuzzy neural nets; microphone arrays; 2D projection; ESN reservoir; MT neuron; acoustic camera; direction selective cell; dynamic sound field visualization; dynamic sound fields visualization; microphone array; multidimensional data clustering; neuro-fuzzy approach; original multidimensional data; photo camera; raw data; selection criteria; sound source; static sound picture; wheel antenna; Acoustics; Cameras; Feature extraction; Heuristic algorithms; Microphones; Neurons; Reservoirs; Echo state networks (ESN); Fuzzy C-means (FCM) clustering; acoustic camera; direction selective cells (MT neurons) (ID#: 16-9798)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7276769&isnumber=7276713
M. Xu, J. Shen and H. Yu, “Multimodal Data Classification Using Signal Quality Indices and Empirical Similarity-Based Reasoning,” 2015 Computing in Cardiology Conference (CinC), Nice, 2015, pp. 1197-1200. doi: 10.1109/CIC.2015.7411131
Abstract: All bedside monitors are prone to heterogeneous and mislabeled data, and each multimodal sample contains a different set of multi-dimensional attributes. To reduce the incidence of false alarms in the Intensive Care Unit (ICU), a new interactive classifier is proposed. In the algorithm, each case is represented by signal quality indices (SQIs) and RR-interval features. Using the function wabp, annotations are obtained from the target signal after preprocessing. Five features are used as inputs to a case-based reasoning classifier, which retrieves cases by empirical similarity. With the posted 750 records of the PhysioNet/CinC 2015 Challenge, the classifier was trained to determine the alarm types of the query segments. Compared with conventional threshold-based alarm algorithms, our proposed algorithm reduces the number of false alarms while avoiding the suppression of true alarms. Evaluated on the hidden test dataset, in both real-time and retrospective modes, the overall TPR is 83% and 82% respectively, and the TNR is 44% and 43% respectively. This algorithm offers a new way of thinking about retrieving heterogeneous patients' multimodal data and classifying alarm types in the presence of mislabeled cases.
Keywords: electrocardiography; haemodynamics; inference mechanisms; medical signal processing; patient care; photoplethysmography; signal classification; ECG waveforms; ICU; Intensive Care Unit; PPG; PhysioNet/CinC 2015 Challenge; RR interval features; SQI; arterial blood pressure; bedside monitors; case-based reasoning classifier; empirical similarity-based reasoning; interactive classifier; multidimensional attributes; multimodal data classification; signal quality indices (SQIs); Chlorine; Classification algorithms; Physiology; Robustness (ID#: 16-9799)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7411131&isnumber=7408562
K. Xiao, B. Xiao, S. Zhang, Z. Chen and B. Xia, “Simplified Multiuser Detection for SCMA with Sum-Product Algorithm,” Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, Nanjing, 2015, pp. 1-5. doi: 10.1109/WCSP.2015.7341328
Abstract: Sparse code multiple access (SCMA) is a novel non-orthogonal multiple access technique that fully exploits the shaping gain of multi-dimensional codewords. However, the lack of a simplified multiuser detection algorithm prevents further implementation due to the inherently high computational complexity. In this paper, general SCMA detector algorithms based on the sum-product algorithm are elaborated. Two improved algorithms are then proposed, which simplify the detection structure and quantitatively curtail exponent operations in the logarithm domain. Furthermore, to analyze these detection algorithms fairly, we derive a theoretical expression for the average mutual information (AMI) of SCMA (SCMA-AMI) and employ a statistical method to calculate the SCMA-AMI associated with specific detection algorithms. Simulation results show that the performance is almost as good as that of the baseline message passing algorithm in terms of both BER and AMI, while the complexity is significantly decreased compared to the traditional Max-Log approximation method.
Keywords: communication complexity; error statistics; message passing; multi-access systems; multiuser detection; statistical analysis; AMI; Max-Log approximation method; SCMA detector algorithm; average mutual information; computational complexity; message passing algorithm; multidimensional codeword; non-orthogonal multiple access technique; simplified multiuser detection algorithm; sparse code multiple access technique; statistical method; sum-product algorithm; Approximation algorithms; Complexity theory; Detectors; Least squares approximations; Message passing; Multiuser detection; Sum product algorithm (ID#: 16-9800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7341328&isnumber=7340966
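The Max-Log shortcut mentioned as the traditional baseline replaces the exact log-sum-exp marginalization of sum-product detection with a plain maximum, trading a bounded accuracy loss for the removal of exponent operations. A minimal sketch (the metric values are illustrative):

```python
import math

def logsumexp(vals):
    """Exact log-domain marginalization used by full sum-product
    detection, computed stably by factoring out the maximum."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def max_log(vals):
    """Max-Log approximation: replace log-sum-exp by a plain max,
    eliminating all exp/log operations in the message updates."""
    return max(vals)

metrics = [-1.2, -0.4, -3.0]   # illustrative branch metrics
exact = logsumexp(metrics)
approx = max_log(metrics)
```

The approximation error is always non-negative and bounded by log(n) for n hypotheses, which is why Max-Log degrades gracefully when one branch metric dominates.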
B. Balasingam, M. Baum and P. Willett, “MMOSPA Estimation with Unknown Number of Objects,” Signal and Information Processing (ChinaSIP), 2015 IEEE China Summit and International Conference on, Chengdu, 2015, pp. 706-710. doi: 10.1109/ChinaSIP.2015.7230496
Abstract: We consider the problem of estimating unordered sets of objects, which arises when the object labels are irrelevant. The widely used minimum mean square error (MMSE) estimators are not applicable to the estimation of unordered objects. Recently, a new type of estimator, known as the minimum mean OSPA (MMOSPA) estimator, which minimizes the optimal sub-pattern assignment (OSPA) metric, was proposed. Unfortunately, the MMOSPA estimator cannot deliver a closed-form solution when the objects, represented as a random finite set (RFS), are multidimensional or when the underlying posterior density is non-Gaussian; also, existing MMOSPA estimators have not been used to estimate unknown numbers of objects. In this paper, we derive a particle-based algorithm for the estimation of an unknown number of objects that is optimal in the MMOSPA sense; the proposed algorithm is not limited by the dimension of the RFS or by the requirement of a Gaussian posterior density.
Keywords: Gaussian processes; estimation theory; minimisation; object tracking; MMOSPA estimation; OSPA metric minimization; RFS; minimum mean OSPA; nonGaussian posterior density; optimal subpattern assignment metric; particle-based algorithm; random finite set; unordered object estimation; Conferences; Cost function; Estimation; Radar tracking; Signal processing algorithms; Target tracking; Minimum mean OSPA (MMOSPA) estimate; Multi-object filtering; Multi-object systems; Multitarget tracking; Optimal sub-pattern assignment (OSPA); Point processes; Random finite sets (RFS); Wasserstein distance (ID#: 16-9801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230496&isnumber=7230339
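The OSPA metric that the MMOSPA estimator minimizes is straightforward to compute for finite point sets. A minimal Python sketch of the metric only (not the authors' particle-based estimator), using `scipy` for the optimal assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=5.0, p=1):
    """OSPA distance between finite point sets X (m x d) and Y (n x d),
    with cut-off c and order p."""
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                   # ensure m <= n
        X, Y, m, n = Y, X, n, m
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    D = np.minimum(D, c) ** p                   # cut off the base distances
    rows, cols = linear_sum_assignment(D)       # optimal sub-pattern assignment
    cost = D[rows, cols].sum() + (c ** p) * (n - m)  # plus cardinality penalty
    return (cost / n) ** (1.0 / p)
```

For example, `ospa([[0.0]], [[0.0], [10.0]], c=5.0, p=1)` evaluates to 2.5: the matched point contributes nothing and the cardinality mismatch is penalized at the cut-off `c`.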
F. Mandanas and C. Kotropoulos, “A Maximum Correntropy Criterion for Robust Multidimensional Scaling,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 1906-1910. doi: 10.1109/ICASSP.2015.7178302
Abstract: Multidimensional Scaling (MDS) refers to a class of dimensionality reduction techniques applied to pairwise dissimilarities between objects, so that the interpoint distances in the space of reduced dimensions approximate the initial pairwise dissimilarities as closely as possible. Here, a unified framework is proposed, where the MDS is treated as maximization of a correntropy criterion, which is solved by half-quadratic optimization in a multiplicative formulation. The proposed algorithm is coined Multiplicative Half-Quadratic MDS (MHQMDS). Its performance is assessed for potential functions associated with various M-estimators, because the correntropy criterion is closely related to the Welsch M-estimator. Three state-of-the-art MDS techniques, namely Scaling by Majorizing a Complicated Function (SMACOF), Robust Euclidean Embedding (REE), and Robust MDS (RMDS), are implemented under the same conditions. The experimental results indicate that the MHQMDS, relying on the M-estimators, performs better than the aforementioned state-of-the-art competing techniques.
Keywords: entropy; quadratic programming; MHQMDS; REE; RMDS; SMACOF; Welsch M-estimator; correntropy criterion maximization; dimensionality reduction technique; half-quadratic optimization; maximum correntropy criterion; multiplicative formulation; multiplicative half-quadratic MDS robust Euclidean embedding; robust MDS; robust multidimensional scaling; scaling by majorizing a complicated function; Computer integrated manufacturing; Kernel; Minimization; Optimization; Robustness; Signal processing; Stress; M-estimators; Multidimensional scaling; correntropy; robustness (ID#: 16-9802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178302&isnumber=7177909
B. Gajic, S. Nováczki and S. Mwanje, “An Improved Anomaly Detection in Mobile Networks by Using Incremental Time-Aware Clustering,” 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), Ottawa, ON, 2015, pp. 1286-1291. doi: 10.1109/INM.2015.7140483
Abstract: With the increase of the mobile network complexity, minimizing the level of human intervention in the network management and troubleshooting has become a crucial factor. This paper focuses on enhancing the level of automation in the network management by dynamically learning the mobile network cell states and improving the anomaly detection on the individual cell level taking into consideration not just the multidimensionality of cell performance indicators, but also the sequence of cell states that have been traversed over time. Our evaluation based on the real network data shows very good performance of such a learning model being able to capture the cell behavior in time and multidimensional space. Such knowledge can improve the detection of different types of anomalies in cell functionality and enhance the process of cell failure mitigation.
Keywords: mobile communication; mobile computing; mobility management (mobile radio); pattern clustering; anomaly detection; cell failure mitigation process; incremental time-aware clustering; learning model; mobile network cell; mobile network complexity; mobile network management; multidimensional space; network troubleshooting; Clustering algorithms; Mobile communication; Mobile computing; Quantization (signal); Sun; Testing; Training (ID#: 16-9803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140483&isnumber=7140257
T. Qu and Z. Cai, “A Fast Multidimensional Scaling Algorithm,” 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, 2015, pp. 2569-2574. doi: 10.1109/ROBIO.2015.7419726
Abstract: Classical multidimensional scaling (CMDS) is a common method for dimensionality reduction and data visualization. Aimed at the problem of the slow speed of CMDS, a divide-and-conquer based MDS (dcMDS) algorithm is put forward in this paper. In this algorithm, the distance matrix between samples is divided along its main diagonal into several submatrices, which are solved respectively. By isometric transformation, the solutions of the submatrices can be integrated to form the solution of the whole matrix. The solution of dcMDS is the same as that of CMDS. Moreover, when the intrinsic dimension of the samples is much smaller than the number of samples, dcMDS is significantly faster than CMDS. In this paper, a detailed theoretical analysis of dcMDS is presented, and its efficiency is verified by experiments.
Keywords: data visualisation; CMDS; classical multidimensional scaling; data visualization; dimensionality reduction; divide-and-conquer based MDS; isometric transformation; multidimensional scaling algorithm; Algorithm design and analysis; Euclidean distance; Matrix decomposition; Nickel; Niobium; Principal component analysis; Signal processing algorithms (ID#: 16-9804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7419726&isnumber=7407009
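For context, the classical MDS baseline that dcMDS accelerates reduces to double-centering the squared distance matrix and taking its top eigenpairs. A minimal numpy sketch of CMDS itself (not the divide-and-conquer variant):

```python
import numpy as np

def classical_mds(D, k):
    """Classical MDS: embed n points in k dimensions from an
    n x n matrix D of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]          # top-k eigenpairs
    scale = np.sqrt(np.maximum(vals[idx], 0))
    return vecs[:, idx] * scale               # n x k embedding
```

For Euclidean distance data the embedding reproduces the input distances exactly whenever `k` is at least the intrinsic dimension, which is the property the dcMDS submatrix merging relies on.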
M. Ahadi, M. Vempa and S. Roy, “Efficient Multidimensional Statistical Modeling of High Speed Interconnects in SPICE via Stochastic Collocation Using Stroud Cubature,” Electromagnetic Compatibility and Signal Integrity, 2015 IEEE Symposium on, Santa Clara, CA, 2015, pp. 350-355. doi: 10.1109/EMCSI.2015.7107713
Abstract: In this paper, a novel stochastic collocation approach for the efficient statistical analysis of high-speed interconnect networks within a SPICE environment is proposed. This approach employs the Stroud cubature rules to locate the sparse grid of collocation nodes within the random space where the deterministic SPICE simulations of the network are performed. The major advantage of this approach lies in the fact that the number of collocation nodes scales optimally (i.e. linearly) with the number of random dimensions unlike the exponential or polynomial scaling exhibited by the conventional tensor product grids or the Smolyak sparse grids respectively. This enables the quantification of the statistical moments for interconnect networks involving large random spaces at only a fraction of the typical CPU cost. The validity of this methodology is demonstrated using a numerical example.
Keywords: SPICE; integrated circuit design; integrated circuit interconnections; stochastic processes; Stroud cubature rules; collocation nodes; high speed interconnect; multidimensional statistical modeling; sparse grid location; statistical moment quantification; stochastic collocation; Algorithm design and analysis; Computational modeling; Integrated circuit interconnections; Numerical models; Polynomials; Stochastic processes; Cubature rules; interconnect networks; statistical moments; transient analysis (ID#: 16-9805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107713&isnumber=7107640
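Stroud's degree-2 cubature places only n+1 equally weighted nodes in n dimensions, which is the linear scaling the abstract highlights. The following is a sketch of the classical construction for the uniform density on [-1, 1]^n, given for illustration (the rule the paper applies inside SPICE may be normalized for a different density):

```python
import numpy as np

def stroud2_nodes(n):
    """Stroud degree-2 cubature: n + 1 equally weighted nodes in n dims.

    Exact for all polynomials of total degree <= 2 under the uniform
    density on [-1, 1]^n; the node count grows linearly with dimension,
    unlike tensor-product or Smolyak sparse grids."""
    k = np.arange(1, n + 2)                     # node index 1..n+1
    X = np.zeros((n + 1, n))
    for r in range(1, n // 2 + 1):              # paired coordinates
        ang = 2.0 * r * k * np.pi / (n + 1)
        X[:, 2 * r - 2] = np.sqrt(2.0 / 3.0) * np.cos(ang)
        X[:, 2 * r - 1] = np.sqrt(2.0 / 3.0) * np.sin(ang)
    if n % 2:                                   # odd dimension: last axis
        X[:, n - 1] = (-1.0) ** k / np.sqrt(3.0)
    return X                                    # weight 1/(n+1) per node
```

Degree-2 exactness is easy to check: the nodes have zero mean and second moments equal to 1/3 (the moments of the uniform density on [-1, 1]), so statistical means and variances of linear circuit responses come out exact.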
T. Merritt, J. Latorre and S. King, “Attributing Modelling Errors in HMM Synthesis by Stepping Gradually from Natural to Modelled Speech,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 4220-4224. doi: 10.1109/ICASSP.2015.7178766
Abstract: Even the best statistical parametric speech synthesis systems do not achieve the naturalness of good unit selection. We investigated possible causes of this. By constructing speech signals that lie in between natural speech and the output from a complete HMM synthesis system, we investigated various effects of modelling. We manipulated the temporal smoothness and the variance of the spectral parameters to create stimuli, then presented these to listeners alongside natural and vocoded speech, as well as output from a full HMM-based text-to-speech system and from an idealised 'pseudo-HMM'. All speech signals, except the natural waveform, were created using vocoders employing one of two popular spectral parameterisations: Mel-Cepstra or Mel-Line Spectral Pairs. Listeners made 'same or different' pairwise judgements, from which we generated a perceptual map using Multidimensional Scaling. We draw conclusions about which aspects of HMM synthesis are limiting the naturalness of the synthetic speech.
Keywords: hidden Markov models; speech synthesis; vocoders; voice equipment; HMM synthesis; Mel-Cepstra pairs; Mel-Line Spectral pairs; hidden Markov model; modelled speech; modelling errors; natural speech; speech naturalness; speech synthesis systems; vocoded speech; Hidden Markov models; Lead; Smoothing methods; Speech; hidden Markov modelling; vocoding (ID#: 16-9806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178766&isnumber=7177909
V. Seneviratne, A. Madanayake and N. Udayanga, “Wideband 32-element 200-MHz 2-D IIR Beam Filters Using ROACH-2 Virtex-6 sx475t FPGA,” Multidimensional (nD) Systems (nDS), 2015 IEEE 9th International Workshop on, Vila Real, 2015, pp. 1-5. doi: 10.1109/NDS.2015.7332641
Abstract: Two-dimensional (2-D) IIR beam filter applications operating in the ultra wide-band (UWB) radio frequency (RF) range require hardware capable of high-speed real-time processing, because the operating bandwidth lies in the megahertz or gigahertz range. Two-dimensional IIR beamforming is used mainly for applications such as communications, radar, and directional sensing. A systolic architecture is proposed for the real-time implementation of the 2-D IIR beam filter. This is the first attempt at evaluating the practical implementation of such a beam filter on the ROACH-2 hardware platform, which is equipped with a Xilinx Virtex-6 sx475t FPGA chip widely used in the field of radio astronomy, reaching operating frequencies of up to 200 MHz.
Keywords: IIR filters; array signal processing; field programmable gate arrays; ultra wideband communication; 2D IIR beam forming; ROACH-2 Virtex-6 sx475t FPGA; UWB radio frequency; directional sensing; frequency 200 MHz; radio astronomy; ultra wideband radio frequency; wideband 32-element 2-D IIR beam filters; Antenna arrays; Array signal processing; Computer architecture; Field programmable gate arrays; Hardware; Radio frequency; Real-time systems; Analog; arrays; beamfilter; beamforming (ID#: 16-9807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332641&isnumber=7332630
![]() |
Multidimensional Signal Processing 2015 (Part 2) |
Multidimensional signal processing research deals with issues such as those arising in automatic target detection and recognition, geophysical inverse problems, and medical estimation problems. Its goal is to develop methods to extract information from diverse data sources amid uncertainty. Research cited here was presented in 2015.
C. Djiongo, S. M. Mpong and O. Monga, “Estimation of Aboveground Biomass from Satellite Data Using Quaternion-Based Texture Analysis of Multi Chromatic Images,” 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, 2015, pp. 68-75. doi: 10.1109/SITIS.2015.97
Abstract: In recent years, first approaches using quaternion numbers to handle and model multi chromatic images in a holistic manner were introduced. By defining the quaternion Fourier transform, multidimensional data such as color images can be efficiently and easily processed. On the other hand, multi chromatic satellite data appear as a primary source for measuring past trends and monitoring changes in forest carbon stocks. Thus, the processing of these data represents a fundamental challenge. In this work, inspired by the quaternion Fourier transform, we propose a texture-color descriptor to extract relevant information from multi chromatic satellite images. We also propose a quaternion-based texture model, named FOTO++, to address the aboveground biomass estimation issue. Our proposed model begins by removing noise in the multi chromatic data while preserving the edges of canopies. After that, color texture indices are extracted using the discrete form of the quaternion Fourier transform, and finally a support vector regression method is used to derive biomass estimates from the texture indices. Our texture features are modeled by a vector composed of the radial spectrum coming from the amplitude of the quaternion Fourier transform. We conduct several experiments in order to study the sensitivity of our model to acquisition parameters. We also assess its performance both on synthetic images and on real multi chromatic images of Cameroonian forest. The results show that our model is more robust to acquisition parameters than the classical Fourier Texture Ordination model and more accurate for aboveground biomass estimates. We stress that a similar methodology could be used with quaternion wavelets. These results highlight the potential of the quaternion-based approach to study multi chromatic images.
Keywords: Fourier transforms; chromatography; forestry; geophysical techniques; regression analysis; rocks; support vector machines; Cameroonian forest; aboveground biomass estimation; classical Fourier texture ordination model; color images; color texture indices; forest carbon stocks; multichromatic satellite data; multichromatic satellite images; quaternion Fourier transform; quaternion numbers; quaternion wavelets; quaternion-based texture analysis; radial spectrum; support vector regression method; texture-color descriptor; Biological system modeling; Biomass; Color; Estimation; Quaternions; Satellites; aboveground biomass; color image; color-texture; discrete quaternion Fourier transform; multi chromatic satellite image (ID#: 16-9848)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7400547&isnumber=7400513
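At its core, the texture descriptor described above is a radial binning of a Fourier amplitude spectrum. A simplified sketch using the ordinary 2-D FFT in place of the quaternion Fourier transform (the quaternion machinery and the SVR regression stage are omitted):

```python
import numpy as np

def radial_spectrum(img, n_bins=16):
    """FOTO-style texture indices: mean FFT amplitude per radial
    frequency ring. The ordinary 2-D FFT stands in here for the
    quaternion Fourier transform used in the paper."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)           # radius of each frequency
    edges = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    idx = np.digitize(r.ravel(), edges) - 1        # ring index per pixel
    spec = np.bincount(idx, weights=amp.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return spec / np.maximum(counts, 1)            # mean amplitude per ring
```

The resulting `n_bins`-dimensional vector is the kind of texture feature that can then be fed to a support vector regressor for biomass estimation.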
N. Udayanga, A. Madanayake, C. Wijenayake and R. Acosta, “Applebaum Adaptive Array Apertures with 2-D IIR Space-Time Circuit-Network Resonant Pre-Filters,” 2015 IEEE Radar Conference (RadarCon), Arlington, VA, 2015, pp. 0611-0615. doi: 10.1109/RADAR.2015.7131070
Abstract: A modification to the well-known Applebaum adaptive beamformer is proposed employing a low complexity space-time network-resonant IIR beamfiler. The network-resonant filter is a multiple input multiple output (MIMO) linear multidimensional recursive discrete system. It is applied as a pre-filter to an Applebaum adaptive beamformer in order to perform highly directional receive mode beamforming with improved noise and interference rejection. The spatial selectivity (directivity) of the Applebaum beamfomer is enhanced by introducing complex-manifolds from the 2-D IIR beamfilter to the zero-manifold-only transfer function of the adaptive beamformer. This leads to the proposed network-resonant adaptive Applebaum array, which shows angle-dependent levels of additional improvement of output SINR (best case, up to 12 dB improvements, near the end-fire direction). The ability to increase the improvement of the output SINR by 0-12 dB compared the best available adaptive Applebaum beamformer without using additional antenna elements in the array is a useful feature of the scheme. The proposed network-resonant Applebaum adaptive beamformer can be implemented using an upgrade to the digital signal processor without change to the array or RF electronics.
Keywords: IIR filters; MIMO communication; adaptive signal processing; array signal processing; signal denoising; two-dimensional digital filters; 2D IIR space-time circuit-network resonant pre-filters; MIMO system; RF electronics; angle-dependent levels; applebaum adaptive array apertures; applebaum adaptive beamformer; complex-manifolds; digital signal processor; directional receive mode beamforming; improved noise rejection; interference rejection; low complexity space-time network-resonant IIR beamfiler; multiple input multiple output linear multidimensional recursive discrete system; output SINR; spatial selectivity; zero-manifold-only transfer function; Adaptive arrays; Array signal processing; Arrays; Interference; Signal to noise ratio; Transfer functions; 2-D IIR beamfilter; Antenna arrays; Applebaum beamformer; SINR improvement (ID#: 16-9849)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131070&isnumber=7130933
A. Wicenec, “From Antennas to Multi-Dimensional Data Cubes: The SKA Data Path,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 5645-5649. doi: 10.1109/ICASSP.2015.7179052
Abstract: The SKA baseline design defines three independent radio antenna arrays producing vast amounts of data. In order to arrive at still large, but more manageable, data volumes and rates, the information will be processed on-line into science-ready products. This requires a direct network interface between the correlators and dedicated world-class HPC facilities. Due to the remoteness of the two SKA sites, power as well as the availability of maintenance staff will be but two of the limiting factors for the operation of the arrays. Thus the baseline design keeps just the actual core signal processing close to the center of the arrays; the on-line HPC data reduction will be located in Perth and Cape Town, respectively. This paper presents an outline of the complete digital data path, starting at the digitiser outputs and ending in data dissemination and science post-processing, with a focus on the data management aspects within the Science Data Processor (SDP) element, responsible for the post-correlator signal processing and data reduction.
Keywords: antenna arrays; array signal processing; astronomy computing; data handling; radioastronomical techniques; SDP element; SKA data path; multidimensional data cubes; online HPC data reduction; radio antenna arrays; radio astronomy; science data processor element; Array signal processing; Arrays; Australia; Correlators; Pipelines; Radio astronomy; Standards; Radio Astronomy; Square Kilometre Array; data management; data processing (ID#: 16-9850)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179052&isnumber=7177909
R. Rademacher, J. A. Jackson, A. Rexford and C. S. Kabban, “Quadrature-Based Credible Set Estimation for Radar Feature Extraction,” 2015 IEEE Radar Conference (RadarCon), Arlington, VA, 2015, pp. 1027-1032. doi: 10.1109/RADAR.2015.7131145
Abstract: Efficient and accurate extraction of physically-relevant features from measured radar data is desirable for automatic target recognition (ATR). In this paper, we present an estimation technique to find credible sets of parameters for any given feature model. The proposed approach provides parameter estimates along with confidence values. Maximum a posteriori (MAP) estimates provide a single (vector) parameter value, typically found via sampling methods. However, computational inefficiency and inaccuracy issues commonly arise when sampling multi-modal or multi-dimensional posteriors. As an alternative, we use Gaussian quadrature to compute probability mass functions, covering the entire probability space. An efficient zoom-in approach is used to iteratively locate regions of high probability. The (possibly disjoint) regions of high probability correspond to sets of feasible parameter values, called credible sets. Thus, our quadrature-based credible set estimator (QBCSE) includes values very near the true parameter and confuser values that may lie far from the true parameter but map with high probability to the same observed data. The credible set and associated probabilities are computed and should both be passed to an ATR algorithm for informed decision-making. Applicable to any feature model, we demonstrate the proposed QBCSE scheme using canonical shape feature models in synthetic aperture radar phase history.
Keywords: Gaussian processes; decision making; feature extraction; maximum likelihood estimation; radar target recognition; signal sampling; synthetic aperture radar; vectors; ATR algorithm; Gaussian quadrature; MAP estimation; QBCSE scheme; automatic target recognition; call credible sets; canonical shape feature models; confidence values; decision-making; feasible parameter values; maximum a posteriori estimation; multidimensional posteriors; multimodal posteriors; physically-relevant features; probability mass functions; probability space; quadrature-based credible set estimation; radar feature extraction; sampling methods; single parameter value; synthetic aperture radar phase history; vector; zoom-in approach; Accuracy; Estimation; Feature extraction; Graphics processing units; Probability density function; Radar; Shape; Bayesian estimation; credible set; quadrature (ID#: 16-9851)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131145&isnumber=7130933
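The central numerical device here, Gaussian quadrature over the parameter space instead of posterior sampling, can be illustrated in one dimension with Gauss-Hermite nodes. The paper's multidimensional grids and zoom-in refinement are omitted, and `posterior_mass` and its arguments are illustrative names, not the authors' API:

```python
import numpy as np

def posterior_mass(log_post, mu, sigma, n_nodes=40):
    """Normalizing constant and mean of an unnormalized posterior,
    via Gauss-Hermite quadrature centered at (mu, sigma)."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)   # nodes for weight e^{-t^2}
    t = mu + np.sqrt(2.0) * sigma * x                 # change of variables
    f = np.exp(log_post(t) + x ** 2)                  # undo the Hermite weight
    Z = np.sqrt(2.0) * sigma * np.sum(w * f)          # posterior mass
    mean = np.sqrt(2.0) * sigma * np.sum(w * f * t) / Z
    return Z, mean
```

With well-chosen quadrature centers the entire probability mass is covered deterministically, avoiding the sampling inaccuracy the abstract points out for multi-modal posteriors.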
H. L. Kennedy, “Parallel Software Implementation of Recursive Multidimensional Digital Filters for Point-Target Detection in Cluttered Infrared Scenes,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 1086-1090. doi: 10.1109/ICASSP.2015.7178137
Abstract: A technique for the enhancement of point targets in clutter is described. The local 3-D spectrum at each pixel is estimated recursively. An optical flow-field for the textured background is then generated using the 3-D autocorrelation function and the local velocity estimates are used to apply high-pass velocity-selective spatiotemporal filters, with finite impulse responses (FIRs), to subtract the background clutter signal, leaving the foreground target signal, plus noise. Parallel software implementations using a multicore central processing unit (CPU) and a graphical processing unit (GPU) are investigated.
Keywords: filtering theory; image sequences; object detection; 3D autocorrelation function; CPU; GPU; cluttered infrared scenes; finite impulse responses; graphical processing unit; high-pass velocity-selective spatiotemporal filters; local 3D spectrum; local velocity estimates; multicore central processing unit; optical flow-field; point-target detection; recursive multidimensional digital filters; Central Processing Unit; Digital filters; Discrete Fourier transforms; Filter banks; Graphics processing units; Optical filters; Spatiotemporal phenomena; Digital filter; Image processing; Multithreading; Optical flow; Recursive spectrum; Whitening (ID#: 16-9852)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178137&isnumber=7177909
T. F. de Lima, A. N. Tait, M. A. Nahmias, B. J. Shastri and P. R. Prucnal, “Improved Spectral Sensing in Cognitive Radios Using Photonic-Based Principal Component Analysis,” Signal Processing and Communication Systems (ICSPCS), 2015 9th International Conference on, Cairns, QLD, 2015, pp. 1-5. doi: 10.1109/ICSPCS.2015.7391750
Abstract: We propose and experimentally demonstrate a microwave photonic system that iteratively performs principal component analysis on partially correlated, 8-channel, 13 Gbaud signals. The system that is presented is able to adapt to oscillations in interchannel correlations and follow changing principal components. The system provides advantages in bandwidth performance and fan-in scalability that are far superior to electronic counterparts. Wideband, multidimensional techniques are relevant to >10 GHz cognitive radio systems and could bring solutions for intelligent radio communications and information sensing, including spectral sensing.
Keywords: cognitive radio; microwave photonics; principal component analysis; radio spectrum management; signal processing; 8-channel signals; cognitive radios; interchannel correlations; microwave photonic system; partially correlated signals; photonic-based principal component analysis; spectral sensing; Bandwidth; Correlation; Microwave communication; Microwave filters; Microwave photonics; Principal component analysis; Analog Signal Processing; Microwave Photonics; RF Photonics (ID#: 16-9853)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7391750&isnumber=7391710
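As an electronic stand-in for the iterative photonic processor, the leading principal component can be obtained by power iteration, which likewise needs only repeated products with the channel data:

```python
import numpy as np

def top_component(X, n_iter=100):
    """Leading principal component of a samples-by-channels matrix X,
    by power iteration on the (centered) sample covariance. Each
    iteration needs only matrix-vector products with the data."""
    Xc = X - X.mean(axis=0)                 # center the channels
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(n_iter):
        v = Xc.T @ (Xc @ v)                 # covariance-vector product
        v /= np.linalg.norm(v)
    return v
```

This is only a sketch of the principle; the paper's contribution is performing the equivalent adaptation in the analog photonic domain at multi-GHz bandwidths.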
M. Darwish, P. Cox, G. Pillonetto and R. Tóth, “Bayesian Identification of LPV Box-Jenkins Models,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 66-71. doi: 10.1109/CDC.2015.7402087
Abstract: In this paper, we introduce a nonparametric approach in a Bayesian setting to efficiently estimate, both in the stochastic and computational sense, linear parameter-varying (LPV) input-output models under general noise conditions of Box-Jenkins (BJ) type. The approach is based on the estimation of the one-step-ahead predictor model of general LPV-BJ structures, where the sub-predictors associated with the input and output signals are captured as asymptotically stable infinite impulse response models (IIRs). These IIR sub-predictors are identified in a completely nonparametric sense, where not only the coefficients are estimated as functions, but also the whole time evolution of the impulse response is estimated as a function. In this Bayesian setting, the one-step-ahead predictor is modelled as a zero-mean Gaussian random field, where the covariance function is a multidimensional Gaussian kernel that encodes both the possible structural dependencies and the stability of the predictor. The unknown hyperparameters that parameterize the kernel are tuned using the empirical Bayes approach, i.e., optimization of the marginal likelihood with respect to available data. It is also shown that, in case the predictor has a finite order, i.e., the true system has an ARX noise structure, our approach is able to recover the underlying structural dependencies. The performance of the identification method is demonstrated on LPV-ARX and LPV-BJ simulation examples by means of a Monte Carlo study.
Keywords: Bayes methods; Gaussian processes; asymptotic stability; autoregressive moving average processes; linear parameter varying systems; parameter estimation; random processes; signal denoising; transient response; ARX noise structure; Bayesian identification; IIR subpredictor identification; LPV Box-Jenkins models; LPV-ARX simulation example; LPV-BJ simulation example; Monte Carlo study; asymptotic infinite impulse response model stability; covariance function; empirical Bayes approach; general LPV-BJ structures; impulse response; input signals; linear parameter-varying input-output models; multidimensional Gaussian kernel; nonparametric approach; one-step-ahead predictor model; output signals; time evolution; unknown hyperparameters; zero-mean Gaussian random field; Asymptotic stability; Bayes methods; Computational modeling; Estimation; Kernel; Optimization; Predictive models (ID#: 16-9854)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7402087&isnumber=7402066
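The kernel-based estimation step can be illustrated on a simpler scalar FIR problem. A first-order stable-spline kernel (one common choice in this literature; the paper's LPV predictor kernels are richer) encodes exponentially decaying impulse responses, and the Bayes estimate is a regularized least-squares solution:

```python
import numpy as np

def kernel_fir_estimate(u, y, n=30, lam=1.0, alpha=0.8, sigma2=0.01):
    """Posterior-mean FIR estimate under a first-order stable-spline prior.

    K[i, j] = lam * alpha**max(i, j) encodes smooth, exponentially
    decaying impulse responses; the estimate is the Gaussian-process
    posterior mean  g = K Phi' (Phi K Phi' + sigma2 I)^{-1} y."""
    N = len(y)
    Phi = np.zeros((N, n))                      # Toeplitz regressor matrix
    for k in range(n):
        Phi[k:, k] = u[:N - k]
    i = np.arange(n)
    K = lam * alpha ** np.maximum.outer(i, i)   # stable-spline prior kernel
    S = Phi @ K @ Phi.T + sigma2 * np.eye(N)
    return K @ Phi.T @ np.linalg.solve(S, y)
```

In the empirical Bayes step the abstract describes, the hyperparameters (`lam`, `alpha`, `sigma2` here) would be tuned by maximizing the marginal likelihood rather than fixed by hand.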
Y. Wu, G. Wen, F. Gao and Y. Fan, “Superpixel Regions Extraction for Target Detection,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-5. doi: 10.1109/ICSPCC.2015.7338955
Abstract: In this paper, an algorithm of target region detection is proposed based on superpixel segmentation in the field of computer vision which is imported to high-resolution remote sensing images for superpixel-level rather than pixel-level target detection. For the problem of massive data, redundant information and time-consuming targets searching of high-resolution remote sensing images with complex scene and large size, the region of interest (ROI) extraction strategy based on a visual saliency map detection is adopted. Second, the multidimensional description vector of local feature is constructed via superpixels obtained from simple linear iterative clustering (SLIC). Third, combine with the prior information of the target to determine the threshold of feature, from which we select the candidate superpixels belong to target. Experimental results show that the proposed algorithm is more effective in high-resolution remote sensing images, overcoming the situation of complex background interference and robust to the target rotation. In addition, the proposed algorithm performs favorably against the traditional sliding windows search algorithm. On one hand, significantly reduces the computing complexity of the search space, and achieves the data dimensionality reduction. On the other hand, it brings lower false probability and improves the detection accuracy.
Keywords: feature extraction; image resolution; image segmentation; iterative methods; military systems; object detection; remote sensing; complex background interference; computer vision; data dimensionality reduction; false probability; high-resolution remote sensing images; local feature; multidimensional description vector; pixel-level target detection; search space; simple linear iterative clustering; superpixel regions extraction; superpixel segmentation; superpixel-level; target region detection target rotation; traditional sliding windows search algorithm; visual saliency map detection; Aircraft; Feature extraction; Image color analysis; Image segmentation; Object detection; Remote sensing; Shape; Remote sensing images; target detection; visual saliency (ID#: 16-9855)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338955&isnumber=7338753
B. Xuhui, C. Zihao and H. Zhongsheng, “Quantized Feedback Control for a Class of 2-D Systems with Missing Measurements,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 3073-3078. doi: 10.1109/ChiCC.2015.7260113
Abstract: In this paper, the quantized feedback control problem is investigated for a class of network-based 2-D systems described by the Roesser model with data missing. It is assumed that the states of the controlled system are available and that they are quantized by a logarithmic quantizer before being communicated. Moreover, the data-missing phenomenon is modeled by a Bernoulli distributed stochastic variable taking values of 1 and 0. A sufficient condition is derived by the method of sector-bounded uncertainties, which guarantees that the closed-loop system is stochastically stable. Based on this condition, a quantized feedback controller can be designed using the linear matrix inequalities technique. A simulation example is given to illustrate the proposed method.
Keywords: H∞ control; closed loop systems; control system synthesis; feedback; linear matrix inequalities; multidimensional systems; networked control systems; stability; stochastic processes; uncertain systems; Bernoulli distributed stochastic variable; Roesser model; closed-loop system; data missing phenomenon; linear matrix inequalities technique; logarithmic quantizer; missing measurement; network-based 2D system; quantized H∞ control problem; quantized feedback control problem; quantized feedback controller design; sector-bounded uncertainties; stochastic stability; sufficient condition; Asymptotic stability; Closed loop systems; Feedback control; Quantization (signal); Sufficient conditions; Symmetric matrices; 2-D systems; missing measurements; networked control systems; quantized control (ID#: 16-9856)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260113&isnumber=7259602
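The logarithmic quantizer assumed in the paper satisfies a sector bound |q(v) - v| <= delta*|v| with delta = (1 - rho)/(1 + rho), which is exactly the property the sector-bounded-uncertainty analysis exploits. A minimal sketch (the Bernoulli data-missing channel would simply multiply q(v) by an independent 0/1 random variable):

```python
import numpy as np

def log_quantize(v, u0=1.0, rho=0.5):
    """Logarithmic quantizer with levels u0 * rho**i.

    Satisfies the sector bound |q(v) - v| <= delta*|v| with
    delta = (1 - rho)/(1 + rho), so the quantization error can be
    treated as a sector-bounded uncertainty in stability analysis."""
    if v == 0.0:
        return 0.0
    # index of the level whose sector cell contains |v|
    c = np.log(2.0 * abs(v) / ((1.0 + rho) * u0)) / np.log(rho)
    i = np.floor(c) + 1.0
    return np.sign(v) * u0 * rho ** i
```

A smaller quantization density `rho` gives coarser levels and a larger sector bound `delta`, which in turn tightens the LMI feasibility conditions for the controller design.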
R. Hu, W. Qi and Z. Guo, “Feature Reduction of Multi-Scale LBP for Texture Classification,” 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Adelaide, SA, 2015, pp. 397-400. doi: 10.1109/IIH-MSP.2015.79
Abstract: Local binary pattern (LBP) is a simple yet powerful texture descriptor modeling the relationship of pixels to their local neighborhood. By considering multiple neighborhood radii, multi-scale LBP (MS-LBP) is derived. To generate MS-LBP, LBP histograms at different scales are first extracted separately and then combined in either a concatenated or a joint way, resulting in a one-dimensional or multi-dimensional histogram, respectively. Concatenated MS-LBP has a low feature dimension but loses some important discriminative information, while joint MS-LBP performs well but suffers from a high feature dimension. In this work, based on the similarity between patterns at different scales and the sparsity of the joint MS-LBP histogram, a feature reduction method for joint MS-LBP is proposed. Experiments on Outex and CURet show that the proposed method and its extension have performance comparable to the original joint MS-LBP at a lower feature dimension.
Keywords: feature extraction; image classification; image texture; CURet; MS-LBP generation; Outex; feature reduction method; high-feature dimension; joint MS-LBP histogram; local binary pattern; local neighborhood; low-feature dimension; multidimensional histogram; multiscale LBP; neighborhood radii; one-dimensional histogram; pixel relationship; texture classification; texture descriptor modeling; Correlation; Databases; Feature extraction; Hamming distance; Histograms; Lighting; Robustness; Local binary pattern (LBP); feature reduction; multi-scale LBP (MSLBP) (ID#: 16-9857)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7415840&isnumber=7415733
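As background for this entry, the single-scale LBP code on which MS-LBP builds is easy to state: each pixel is compared with its 8 neighbours, the comparison bits form an 8-bit code, and the histogram of codes describes the texture. A minimal sketch (radius-1 neighbourhood only, not the authors' multi-scale implementation):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP (radius 1) histogram. Each interior pixel
    is coded by thresholding its 8 neighbours against it; the normalised
    256-bin histogram of codes is the texture descriptor."""
    c = img[1:-1, 1:-1]  # interior pixels (centre of each 3x3 window)
    # offsets of the 8 neighbours, walked clockwise from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

MS-LBP repeats this at several radii and either concatenates the per-scale histograms or forms their joint histogram; the paper's contribution is reducing the dimension of the joint variant.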
N. Asendorf and R. R. Nadakuditi, “Improved Estimation of Canonical Vectors in Canonical Correlation Analysis,” 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2015, pp. 1806-1810. doi: 10.1109/ACSSC.2015.7421463
Abstract: Canonical Correlation Analysis (CCA) is a multidimensional algorithm for two datasets that finds linear transformations, called canonical vectors, that maximize the correlation between the transformed datasets. However, in the low-sample high-dimension regime these canonical vector estimates are extremely inaccurate. We use insights from random matrix theory to propose a new algorithm that can reliably estimate canonical vectors in the sample-deficient regime. Through numerical simulations we show that our new algorithm is robust both to limited training data and to overestimation of the signal-subspace dimension.
Keywords: data analysis; statistical analysis; vectors; CCA; canonical correlation analysis; canonical vectors estimation; numerical simulation; random matrix theory; sample deficient regime; Correlation; Covariance matrices; Data models; Matrices; Signal processing algorithms; Sociology (ID#: 16-9858)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7421463&isnumber=7421038
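For readers unfamiliar with CCA itself, the classical sample estimator that this paper improves on can be computed by whitening the two data blocks and taking an SVD of the whitened cross-covariance. A minimal sketch of that baseline estimator, not the authors' random-matrix-corrected algorithm:

```python
import numpy as np

def cca(X, Y):
    """Classical sample CCA via whitening and SVD. Rows of X and Y are
    paired observations; returns canonical correlations and the
    canonical vectors (as columns) for each block."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx, Cyy, Cxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    # whiten each block with a Cholesky factor; the singular values of
    # the whitened cross-covariance are the canonical correlations
    Lx_inv = np.linalg.inv(np.linalg.cholesky(Cxx))
    Ly_inv = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Lx_inv @ Cxy @ Ly_inv.T)
    A = Lx_inv.T @ U      # canonical vectors for X (columns)
    B = Ly_inv.T @ Vt.T   # canonical vectors for Y (columns)
    return s, A, B
```

In the low-sample regime the sample covariances here are poor estimates, which is precisely why the baseline breaks down and the paper's correction is needed.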
M. Ghamgui, D. Mehdi, O. Bachelier and F. Tadeo, “Lyapunov Theory for Continuous 2D Systems with Variable Delays: Application to Asymptotic and Exponential Stability,” 2015 4th International Conference on Systems and Control (ICSC), Sousse, 2015, pp. 367-371. doi: 10.1109/ICoSC.2015.7153308
Abstract: This paper deals with two dimensional (2D) systems with variable delays. More precisely, conditions are developed to study the asymptotic and exponential stability of 2D Roesser-like models with variable independent delays affecting the two directions. Based on proper definitions of 2D asymptotic and exponential stability, sufficient conditions are developed, expressed using Linear Matrix Inequalities, based on Lyapunov-Krasovskii functionals.
Keywords: Lyapunov methods; asymptotic stability; delay systems; linear matrix inequalities; multidimensional systems; 2D Roesser-like model; Lyapunov theory; Lyapunov-Krasovskii functional; continuous 2D system; exponential stability; linear matrix inequality; sufficient condition; two dimensional system; variable delay; variable independent delay; Asymptotic stability; Boundary conditions; Control theory; Delays; Signal processing; Stability analysis; 2D systems; Roesser model; variable delays (ID#: 16-9859)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153308&isnumber=7152706
J. Gao, L. Shen and D. Luo, “High Frequency HEMT Modeling Using Artificial Neural Network Technique,” 2015 IEEE MTT-S International Conference on Numerical Electromagnetic and Multiphysics Modeling and Optimization (NEMO), Ottawa, ON, 2015, pp. 1-3. doi: 10.1109/NEMO.2015.7415085
Abstract: Accurate high-frequency modeling of active devices, including microwave diodes and transistors, is absolutely necessary for computer-aided radio frequency integrated circuit (RFIC) design. This paper aims to provide an overview of small-signal and large-signal modeling of field-effect transistors (FETs) based on the combination of conventional equivalent circuit modeling and artificial neural network (ANN) modeling techniques. MLPs and space-mapped neuromodeling techniques have been used for building the small-signal model, and the adjoint technique as well as integration and differential techniques are used for building the large-signal model. Experimental results, which confirm the validity of the approaches, are also presented.
Keywords: electronic engineering computing; equivalent circuits; high electron mobility transistors; neural nets; semiconductor device models; ANN modeling techniques; FET; HEMT modeling; MLP; RFIC design; artificial neural network modeling techniques; artificial neural network technique; computer-aided radio frequency integrated circuit design; differential techniques; equivalent circuit modeling; field effect transistor; integration techniques microwave diodes; microwave transistors; multilayer perceptrons; space-mapped neuromodeling techniques; Artificial neural networks; Computational modeling; HEMTs; Integrated circuit modeling; Microwave circuits; Microwave transistors; ANN; device; modeling (ID#: 16-9860)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7415085&isnumber=7414988
F. Xue, J. Hu and J. Wang, “An Analysis Model for Spatial Information in Multi-Scales,” 2015 8th International Conference on Signal Processing, Image Processing and Pattern Recognition (SIP), Jeju, 2015, pp. 30-33. doi: 10.1109/SIP.2015.15
Abstract: The multi-level integrated analysis model of spatial information proposed in this article can offer more credible results than traditional regression models because of its ability to handle the hierarchical structure of data. In the model, the spatial process is treated as a process affected by inner and external effects of the data, and the relations among spatial heterogeneity, spatial dependence and spatial scales are distinguished by two regressions, which is conducive to handling the combined action formed by spatial dependence, spatial heterogeneity and spatial scale effects in multidimensional spatial analysis.
Keywords: data analysis; regression analysis; spatial data structures; data structure; multidimensional spatial analysis; multilevel integrated analysis model; regression model; spatial dependence; spatial heterogeneity; spatial information; spatial scale effect; Analytical models; Correlation; Data models; Economics; Information science; Mathematical model; Urban areas; Multi-Level Integrated analysis model (ID#: 16-9861)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7433010&isnumber=7432994
R. K. Miranda, J. P. C. L. da Costa, F. Roemer, A. L. F. de Almeida and G. Del Galdo, “Generalized Sidelobe Cancellers for Multidimensional Separable Arrays,” Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015 IEEE 6th International Workshop on, Cancun, 2015, pp. 193-196. doi: 10.1109/CAMSAP.2015.7383769
Abstract: The use of antenna arrays has brought innumerable benefits to radio systems in recent decades. Arrays can have multidimensional structures that can be exploited to achieve superior performance and lower complexity. However, the literature has not yet explored all the advantages arising from these features. This paper uses tensors to provide a method for designing efficient beamformers for multidimensional antenna arrays. In this work, the generalized sidelobe canceller (GSC) is extended to a multidimensional array to create the proposed R-dimensional GSC (R-D GSC). The proposed scheme has a lower computational complexity and, under certain conditions, exhibits an improved signal to interference and noise ratio (SINR).
Keywords: antenna arrays; computational complexity; generalized sidelobe cancellers; multidimensional separable arrays; signal to interference and noise ratio; Antenna arrays; Array signal processing; Arrays; Interference; Signal to noise ratio; Tensile stress; Transmission line matrix methods (ID#: 16-9862)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7383769&isnumber=7383717
X. Zhang, G. Wen and W. Dai, “Anomaly Detecting in Hyperspectral Imageries Based on Tensor Decomposition with Spectral and Spatial Partitioning,” 2015 8th International Congress on Image and Signal Processing (CISP), Shenyang, 2015, pp. 737-741. doi: 10.1109/CISP.2015.7407975
Abstract: Due to the multidimensional nature of the hyperspectral image (HSI), multi-way arrays (called tensors) are one of the possible solutions for analyzing such data. In tensor algebra, CANDECOMP/PARAFAC decomposition (CPD) is a popular tool which has been successfully applied to HSI data processing. However, on the one hand, CPD requires large memory for temporary variables; as a result, memory usually overflows during processing for a real HSI of large size. On the other hand, so far no finite algorithm can reliably determine the rank of the tensor to be decomposed, and an inappropriate choice of rank may over-fit or under-fit the information provided by the tensor. To deal with these problems, this paper proposes an improved CPD with spectral and spatial partitioning for HSI anomaly detection. First, the original HSI is divided into a set of smaller sub-tensors. Second, CPD is applied to each sub-tensor. Then, an anomaly detection algorithm is implemented and the detection results are fused along the spectral direction. Experiments with a real HSI data set reveal that the proposed method outperforms CPD without partitioning and the traditional RX anomaly detector.
Keywords: algebra; hyperspectral imaging; CANDECOMP/PARAFAC decomposition; CPD; HSI anomaly detection; anomaly detection algorithm; finite algorithm; hyperspectral imageries; multiway arrays; spatial partitioning; spectral partitioning; tensor algebra; tensor decomposition; Algebra; Correlation; Hyperspectral imaging; Memory management; Spectral analysis; Tensile stress; CANDECOMP/PARAFAC decomposition (CPD); anomaly detection; hyperspectral image (HSI); spectral and spatial partitioning (ID#: 16-9863)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7407975&isnumber=7407837
R. Chen et al., “Research on Multi-Dimensional N-Back Task Induced EEG Variations,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 5163-5166. doi:10.1109/EMBC.2015.7319554
Abstract: In order to test the effectiveness of multi-dimensional N-back tasks for inducing deeper brain fatigue, we conducted a series of N*L-back experiments: 1*1-back, 1*2-back, 2*1-back and 2*2-back tasks. We analyzed and compared the behavioral results, EEG variations and mutual information among these four tasks. There was no significant difference in average EEG power or power spectrum entropy (PSE) among the tasks. However, the behavioral results of the N*2-back tasks showed significant differences compared to the traditional one-dimensional N-back task. Connectivity changes were observed with the addition of one more matching task in N-back. We suggest that multi-dimensional N-back tasks consume more brain resources and activate different brain areas. These results provide a basis for using multi-dimensional N-back tasks to induce deeper mental fatigue or impose a heavier workload.
Keywords: bioelectric potentials; electroencephalography; entropy; medical signal processing; neurophysiology; average EEG power; brain areas; brain resources; deep brain fatigue; matching task; mental fatigue; multidimensional N-back task induced EEG variations; one dimensional N-back task; power spectrum entropy; Electroencephalography; Entropy; Fatigue; Image color analysis; Instruments; Mutual information; Yttrium (ID#: 16-9864)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7319554&isnumber=7318236
Z. Yun, “The Study Of CDM-BSC-Based Data Mining Driven Fishbone Applied for Data Processing,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-5. doi:10.1109/ICSPCC.2015.7338909
Abstract: Data Mining Driven Fishbone (DMDF), a wholly new term, is an enhancement of the abstract conception of the multidimensional data flow of a fishbone diagram applied to data processing, intended to optimize the process and structure of data management and data mining. CDM-BSC (CRISP-DM applied with Balanced Scorecard) is developed from the combination of the traditional data processing methodology and the Balanced Scorecard for performance measurement systems. The end-to-end DMDF diagram includes complex dataflow, different processing components, and improvements for numerous aspects at multiple levels. Applying the Balanced Scorecard to CRISP-DM is a new methodology for improving the performance of information and data processing. CDM-BSC-based DMDF provides an integrated platform and a mixed methodology to support the whole life cycle of data processing. Data preprocessing, data classification, association rule mining and prediction are the foundation and linkage of the whole data processing life cycle. DMDF supports the combination of different mining components from the strategic level through the tactical level to the abstract level, and then re-engineers the data mining process into an execution system to realize a reasonable architecture. CDM-BSC-based DMDF is a new direction for the structure of large-scale information and data processing.
Keywords: cause-effect analysis; data mining; CDM-BSC; CRISP-DM; DMDF; data classification; data management; data mining driven fishbone; data processing methodology; mixed methodology; multidimensional-data flow; Cause effect analysis; Data mining; Data preprocessing; Metadata; CDM-BSC (CRISP-DM applied with Balance Scorecard); DMDF (data mining driven fishbone); Data mining; Data mining process; Data processing (ID#: 16-9865)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338909&isnumber=7338753
S. Handagala, A. Madanayake and N. Udayanga, “Design of a Millimeter-Wave Dish-Antenna Based 3-D IIR Radar Digital Velocity Filter,” Multidimensional (nD) Systems (nDS), 2015 IEEE 9th International Workshop on, Vila Real, 2015, pp. 1-6. doi: 10.1109/NDS.2015.7332657
Abstract: The enhancement of radar signatures corresponding to an object traveling at a particular velocity is proposed. The method employs a parabolic dish and focal plane array (FPA) processor together with a network-resonant multi-dimensional recursive digital velocity filter. An FPA-fed parabolic dish antenna creates multiple radio frequency (RF) beams. The RF beams can sense simulated moving objects that are illuminated using mm-wave (90 GHz) RF energy. A 3-D IIR digital velocity filter is applied to the simulated radar signals to enhance signatures moving in a direction of interest at a given speed, while significantly suppressing undesired interfering signals traveling at other velocities as well as additive Gaussian noise. A dish of diameter 0.5 m and focal length 30 cm, with an FPA of 4096 antenna elements (64×64) arranged in a dense square array, is simulated using an electromagnetic field simulator. The resulting electric field intensity profiles are processed to extract the signatures of interest. Simulation results show an average signal-to-interference improvement of 7 dB for a single interferer and 6 dB for multiple interferers. The proposed method exhibits an average signal to interference and noise ratio (SINR) improvement of 6 dB for an input SINR of -15 dB. All results are simulation based; no fabrication has been attempted at this point.
Keywords: AWGN; IIR filters; electromagnetic fields; focal planes; millimetre wave antennas; object tracking; radar signal processing; radiofrequency interference; 3D IIR radar digital velocity filter; FPA processor; FPA-fed parabolic dish antenna; SINR; additive Gaussian noise; antenna elements; electric field intensity; electromagnetic field simulator; focal plane array processor; frequency 90 GHz; millimeter wave dish antenna; mm-wave RF energy; network resonant multidimensional recursive digital velocity filter; object traveling; radar signatures; radio frequency beams; signal to interference and noise ratio; Antenna arrays; Arrays; Interference; Radar; Radar antennas; Radio frequency (ID#: 16-9866)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332657&isnumber=7332630
H. D. Lu, B. Y. Chen and F. M. Guo, “The Model Optimized of Mini Packaging for Quantum Dots Photodetector Readout,” Electronics Packaging and iMAPS All Asia Conference (ICEP-IACC), 2015 International Conference on, Kyoto, 2015, pp. 581-585. doi: 10.1109/ICEP-IAAC.2015.7111081
Abstract: This paper presents research on optimized readout modeling and mini packaging for a quantum dot photodetector array. Genetic algorithms are used in modeling the quantum dot photodetector for accurate readout of the photoelectric response signal. Three different equivalent circuit models were compared with each other and simulated with Cadence IC design software. We developed a CTIA readout structure for the quantum dot photodetector array; the readout noise and different substrate materials were simulated in ADS to minimize noise and interference. Two kinds of silicon interposer, via-with-one-line and via-with-four-line, were compared, demonstrating that the via-with-four-line interposer is better than the via-with-one-line interposer. Other interposers, such as PCB and ceramic interposers, further reduce crosstalk and suppress noise. We also designed the data acquisition and processing analysis unit, providing a Wi-Fi interface to communicate with PC software to complete tasks such as data acquisition, digital filtering, spectral display, network communication and human-computer interaction. Based on the high sensitivity of the quantum dot photodetector, the integrated system has a short integration time (10 μs), lower noise, better resistance to overflow and a large dynamic range.
Keywords: data acquisition; elemental semiconductors; packaging; photodetectors; quantum dots; readout electronics; silicon; wireless LAN; PC software; PCB; Si; Wi-Fi interface; ceramic interposers; crosstalk; data acquisition; digital filtering; equivalent circuit model; genetic algorithms; human-computer interaction; mini packaging; network communication; optimized model; quantum dots photodetector array; readout model; readout noise; readout photoelectric response signal; silicon interposer; spectral display; substrate materials; via-with-four-line silicon interposer; via-with-one-line silicon interposer; Arrays; Integrated circuit modeling; Noise; Packaging; Photodetectors; Quantum dots; Silicon; mini packaging; miniature spectrometer; optimization model; photodetector (ID#: 16-9867)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7111081&isnumber=7110993
J. Matamoros, M. Calvo-Fullana and C. Antón-Haro, “On the Impact of Correlated Sampling Processes in WSNs with Energy-Neutral Operation,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 258-263. doi: 10.1109/ICC.2015.7248331
Abstract: In this paper, we consider a communication scenario where multiple EH sensor nodes collect correlated measurements of an underlying random field. The nodes operate in an energy-neutral manner (i.e., energy is used as soon as it is harvested) and, hence, the energy-harvesting and sampling processes at the sensor nodes become intertwined, random and spatially correlated. Under some mild assumptions, we derive the multidimensional linear filter which minimizes the mean square error in the reconstructed measurements at the Fusion Center (FC). We also analyze the impact of correlated and random sampling processes on the resulting distortion and, in order to gain some insight, we particularize the analysis to the case of fully correlated spatial fields with an asymptotically large number of sensor nodes.
Keywords: correlation methods; mean square error methods; multidimensional digital filters; sensor fusion; signal sampling; wireless sensor networks; WSN; communication scenario; correlated sampling processes; energy-harvesting; energy-neutral operation; fusion center; intertwined correlation; mean square error minimization; multidimensional linear filter; multiple EH sensor nodes; random correlation; random field; random sampling processes; resulting distortion; spatially correlation; Batteries; Correlation; Distortion; Energy harvesting; Noise; Numerical models; Wireless sensor networks (ID#: 16-9868)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248331&isnumber=7248285
A. Al-nasheri et al., “Voice Pathology Detection with MDVP Parameters Using Arabic Voice Pathology Database,” Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, Riyadh, 2015, pp. 1-5. doi: 10.1109/NSITNSW.2015.7176431
Abstract: This paper investigates the use of Multi-Dimensional Voice Program (MDVP) parameters to automatically detect voice pathology in the Arabic voice pathology database (AVPD). MDVP parameters are very popular among physicians and clinicians for detecting voice pathology; however, MDVP is commercial software. AVPD is a newly developed speech database designed to suit a wide range of experiments in the fields of automatic voice pathology detection, classification, and automatic speech recognition. This paper is a first step toward evaluating MDVP parameters in AVPD using the sustained vowel /a/. The experimental results demonstrate that some of the acoustic features show an excellent ability to discriminate between normal and pathological voices. The overall best accuracy is 81.33%, obtained using an SVM classifier.
Keywords: medical signal detection; signal classification; speech recognition; support vector machines; AVPD; Arabic voice pathology database; MDVP parameters; SVM classifier; acoustic features; automatic speech recognition; commercial software; multidimensional voice program; speech database; support vector machine; voice pathology detection; Accuracy; Acoustics; Databases; Pathology; Speech; Speech recognition; Support vector machines; MDVP; MEEI; SVM (ID#: 16-9869)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176431&isnumber=7176382
B. Liao and S. C. Chan, “A Simple Method for DOA Estimation in the Presence of Unknown Nonuniform Noise,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 2789-2793. doi: 10.1109/ICASSP.2015.7178479
Abstract: When considering the problem of direction-of-arrival (DOA) estimation, uniform noise is often assumed and hence, the corresponding noise covariance matrix is diagonal with identical diagonal entries. However, this does not always hold true, since the noise is nonuniform in certain applications and a model with an arbitrary diagonal noise covariance matrix should be adopted. To this end, a simple approach to handling the unknown nonuniform noise problem is proposed. In particular, an iterative procedure is developed to determine the signal subspace and noise covariance matrix. As a consequence, existing subspace-based DOA estimators such as MUSIC can be applied. Furthermore, the proposed method converges within very few iterations, in each of which closed-form estimates of the signal subspace and noise covariance matrix are obtained. Hence, it is much more computationally attractive than conventional methods that rely on multi-dimensional search. It is shown that the proposed method enjoys good performance, simplicity and low computational cost, which are desirable in practical applications.
Keywords: covariance matrices; direction-of-arrival estimation; iterative methods; search problems; DOA estimation; arbitrary diagonal noise covariance matrix model; direction-of-arrival estimation; iterative procedure; multidimensional search; signal subspace; unknown nonuniform noise handling problem; Direction-of-arrival estimation; Direction-of-arrival (DOA) estimation; nonuniform noise; subspace estimation (ID#: 16-9870)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178479&isnumber=7177909
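For context, the conventional MUSIC estimator that this paper builds on scans a steering-vector grid against the noise subspace of the array covariance matrix. A minimal sketch for a uniform linear array, with the paper's nonuniform-noise iteration deliberately omitted (the array geometry and scan grid here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def music_spectrum(R, n_sources, grid_deg, d=0.5):
    """Conventional MUSIC pseudo-spectrum for a uniform linear array
    with element spacing d in wavelengths. R is the M x M array
    covariance; peaks of the returned spectrum mark DOA estimates."""
    M = R.shape[0]
    _, eigvec = np.linalg.eigh(R)        # eigenvalues in ascending order
    En = eigvec[:, : M - n_sources]      # noise-subspace eigenvectors
    p = np.empty(len(grid_deg))
    for i, th in enumerate(np.deg2rad(grid_deg)):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(th))  # steering vector
        # spectrum peaks where a(th) is orthogonal to the noise subspace;
        # the small epsilon guards against division by ~0 at exact nulls
        p[i] = 1.0 / (np.real(a.conj() @ En @ En.conj().T @ a) + 1e-12)
    return p
```

Under truly nonuniform noise, the eigendecomposition of R no longer separates the signal and noise subspaces cleanly, which is the problem the paper's iterative noise-covariance estimation addresses.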
H. Wang, Q. Song, T. Ma, H. Cao and Y. Sun, “Study on Brain-Computer Interface Based on Mental Tasks,” Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, Shenyang, 2015, pp. 841-845. doi: 10.1109/CYBER.2015.7288053
Abstract: In this paper, a novel method is proposed to realize a brain-computer interface by distinguishing two different imagery tasks, relaxation-meditation and tension-imagination, based on the electroencephalogram (EEG) signal. While subjects performed the relaxation-meditation or tension-imagination task, their EEG output from the central parieto-occipital region at the PZ electrode was recorded by a digital EEG device. The brain-computer interface was realized by computing the Hilbert time-frequency amplitude spectrum, selecting the statistical properties of the amplitude within different time-frequency bands as the characteristic vector set, carrying out feature selection based on the Fisher distance criterion, choosing the leading elements with the largest Fisher indexes as the multidimensional feature vector, and finally feeding this eigenvector to a Fisher classifier. Experimental results from 15 volunteers showed that the average classification accuracy was 90.3% and the highest was 95%. Because only one electrode is required, with a suitable coding scheme the brain-computer interface technology could be readily applied to robot control.
Keywords: brain-computer interfaces; electroencephalography; medical signal processing; statistical analysis; EEG signal; Fisher distance criterion; Fisher index; Hilbert time-frequency amplitude spectrum; PZ electrode; brain computer interface; brain-computer interface technology; central parieto-occipital region; characteristic vector set; different time-frequency bands; digital EEG device; electroencephalogram signal; mental tasks; multidimensional feature vector; relaxation meditation; robot control; statistical properties; tension imagination; Accuracy; Brain-computer interfaces; Electrodes; Electroencephalography; Feature extraction; Time-frequency analysis; Transforms; EEG-based brain-computer interface (BCI); Hilbert-Huang transform; feature extraction; mental task of relaxation-meditation; mental task of tension-imagination; pattern classification (ID#: 16-9871)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288053&isnumber=7287893
G. Dickins, Hanchi Chen and Wen Zhang, “Soundfield Control for Consumer Device Testing,” Signal Processing and Communication Systems (ICSPCS), 2015 9th International Conference on, Cairns, QLD, 2015, pp. 1-5. doi: 10.1109/ICSPCS.2015.7391774
Abstract: This paper covers the theory, measurement and analysis of a constructed reference system to capture and acoustically reconstruct a spatial soundfield for device testing. Using a rigid sphere microphone array, a framework is presented for numeric and visual representation of the multidimensional system performance. This is used to compare the measured acoustic spatial recreation of a practical 30 channel dodecahedral speaker array to that of a theoretically optimal third order system. We consider the excess noise gain introduced to compensate for the imperfect realization. Results show the theoretical impact of the pragmatic dodecahedral speaker geometry is similar in magnitude to the impact of acoustic considerations such as scattering and use of real speakers. Whilst the speaker arrangement is important, in the context of system design, it is not the most critical factor in a cost constrained design. This work provides a contribution towards bridging the gap between academic soundfield theory and the research challenges for a present high impact application.
Keywords: acoustic field; loudspeakers; microphone arrays; telecommunication equipment testing; test equipment; channel dodecahedral speaker array; consumer device testing; multidimensional system performance; pragmatic dodecahedral speaker geometry; rigid sphere microphone array; soundfield control; spatial soundfield acoustic reconstruction; spatial soundfield capture; Arrays; Couplings; Geometry; Harmonic analysis; Loudspeakers; Microphones; Testing; acoustic testing; microphone array; sound field; spatial sound; speaker array (ID#: 16-9872)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7391774&isnumber=7391710
M. Altuve, E. Severeyn and S. Wong, “Unsupervised Subjects Classification Using Insulin and Glucose Data for Insulin Resistance Assessment,” Signal Processing, Images and Computer Vision (STSIVA), 2015 20th Symposium on, Bogota, 2015, pp. 1-7. doi: 10.1109/STSIVA.2015.7330444
Abstract: In this paper, the K-means clustering algorithm is employed to perform an unsupervised classification of subjects based on unidimensional observations (the HOMA-IR and Matsuda indexes, separately) and multidimensional observations (insulin and glucose samples obtained from the oral glucose tolerance test). The goal is to explore whether the clusters obtained could be used to predict or diagnose insulin resistance, or are related to the profiles of the population under study: metabolic syndrome, marathoners and sedentaries. Using two and three clusters, three classification experiments were carried out: (i) using the HOMA-IR index as unidimensional observations, (ii) using the Matsuda index as unidimensional observations, and (iii) using five insulin and five glucose samples as multidimensional observations. The results show that with the HOMA-IR index the clusters are related to insulin resistance, but when multidimensional observations are used in the classification process the clusters could be used to predict insulin resistance or other related diseases.
Keywords: diseases; medical computing; pattern classification; pattern clustering; unsupervised learning; HOMA-IR index; K-means clustering algorithm; Matsuda index; disease; glucose data; insulin data; insulin resistance assessment; unsupervised subject classification; Clustering algorithms; Diseases; Immune system; Indexes; Insulin; Sugar; Unsupervised learning (ID#: 16-9873)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330444&isnumber=7330388
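The clustering step in this study is ordinary K-means applied to 1-D index values or multidimensional insulin/glucose vectors. A minimal Lloyd's-iteration sketch of K-means (generic, not the authors' code):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's K-means: alternate between assigning each point to
    its nearest centre and recomputing each centre as the mean of its
    assigned points, until the centres stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # distances of every point to every centre, then nearest-centre labels
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

With unidimensional observations the same code applies to a column vector of HOMA-IR or Matsuda values; with the multidimensional setting, each row holds the five insulin and five glucose samples per subject.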
R. Feld and E. C. Slob, “2D GPR Monitoring Without a Source by Interferometry in a 3D World,” Advanced Ground Penetrating Radar (IWAGPR), 2015 8th International Workshop on, Florence, 2015, pp. 1-4. doi: 10.1109/IWAGPR.2015.7292612
Abstract: Creating virtual sources at locations where physical receivers have measured a response is known as seismic interferometry. The method does not use any information about the actual source's location. The source can be mobile phone radiation, already available in the air, as long as this background radiation can be represented by uncorrelated noise sources. Interferometry by multi-dimensional deconvolution (MDD) 'divides the common path out of the data', recovering both amplitude and phase information, whereas interferometry by cross-correlation (CC) uses time reversal to retrieve phase information only. CC works fine for low-dissipative media. A finite-difference time-domain solver can create 3D line-array data for receiving antennas on a surface whose subsurface is homogeneous perpendicular to the receiver array, without anything being transmitted other than background radiation. By applying the MDD and CC techniques, the 2D GPR signal can be retrieved as if a transmitting antenna were located at a receiving antenna's position. Numerical results show that both MDD and CC work well.
Keywords: array signal processing; finite difference time-domain analysis; ground penetrating radar; radar interferometry; radar signal processing; 2D GPR monitoring; 3D world; MDD; finite difference time-domain solver; low-dissipative media; mobile phone radiation; multidimensional deconvolution; receiving antennas; seismic interferometry; transmitting antenna; Deconvolution; Ground penetrating radar; Interferometry; Noise; Receiving antennas; Three-dimensional displays; GPR; cross-correlation; multi-dimensional deconvolution; passive interferometry (ID#: 16-9874)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292612&isnumber=7292607
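The cross-correlation (CC) branch of the method lends itself to a small illustration: when two receivers record the same uncorrelated background noise, the cross-correlation peak recovers the inter-receiver travel time, which is the phase information the abstract refers to. A minimal sketch with synthetic data, not the authors' FDTD solver:

```python
import random

def crosscorr(a, b, max_lag):
    """Cross-correlation C(tau) = sum_t a[t] * b[t + tau] for |tau| <= max_lag."""
    n = len(a)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for t in range(n):
            if 0 <= t + lag < n:
                s += a[t] * b[t + lag]
        out[lag] = s
    return out

# A hypothetical background-noise wavefield recorded at two receivers;
# the second receiver sees the same field delayed by 7 samples (the
# travel time a virtual-source response should reveal).
rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(2000)]
delay = 7
rec1 = noise
rec2 = [0.0] * delay + noise[:-delay]
cc = crosscorr(rec1, rec2, max_lag=20)
best_lag = max(cc, key=cc.get)   # lag of the correlation peak
```

The peak of `cc` sits at the 7-sample delay between the receivers, even though the "source" is pure uncorrelated noise.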
C. L. Liu and P. P. Vaidyanathan, “Tensor MUSIC in Multidimensional Sparse Arrays,” 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2015, pp. 1783-1787. doi: 10.1109/ACSSC.2015.7421458
Abstract: Tensor-based MUSIC algorithms have been successfully applied to parameter estimation in array processing. In this paper, we apply them to sparse arrays, such as nested arrays and coprime arrays, which are known to boost the degrees of freedom to O(N^2) given O(N) sensors. We consider two tensor decomposition methods, CANDECOMP/PARAFAC (CP) and high-order singular value decomposition (HOSVD), to derive novel tensor MUSIC spectra for sparse arrays. It will be demonstrated that the tensor MUSIC spectrum via HOSVD suffers from cross-term issues, while the tensor MUSIC spectrum via CP identifies sources unambiguously, even in high-dimensional tensors.
Keywords: array signal processing; parameter estimation; tensors; array processing; coprime arrays; high-order singular value decomposition; multidimensional sparse arrays; nested arrays; parameter estimation; tensor MUSIC; Covariance matrices; Multiple signal classification; Sensor arrays; Smoothing methods; Tensile stress; CANDE-COMP/PARAFAC (CP); MUSIC algorithm; Sparse arrays; high-order singular value decomposition (HOSVD) (ID#: 16-9875)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7421458&isnumber=7421038
S. Miah et al., “Design of Multidimensional Sensor Fusion System for Road Pavement Inspection,” 2015 International Conference on Systems, Signals and Image Processing (IWSSIP), London, 2015, pp. 304-308. doi: 10.1109/IWSSIP.2015.7314236
Abstract: This paper presents a systematic approach for decision-level sensor fusion in a road pavement inspection system under FP7 RPB HealTec, “Road Pavements & Bridge Deck Health Monitoring/Early Warning Using Advanced Inspection Technologies”. The paper focuses on the design of the post-processing sensor fusion system and outlines methods that can be used to process and fuse sensor data such as GPR, IRT, ACU and HDV for multidimensional assessment of road pavement quality. In addition, the paper illustrates a visualization technique for mapping detected defects onto the road surface and a GIS map.
Keywords: ground penetrating radar; infrared imaging; radar detection; roads; sensor fusion; ultrasonics; ACU; FP7 RPB HealTec; GIS map; GPR; HDV; IRT; decision level sensor fusion; detected defects; multidimensional assessment; post processing sensor fusion system; road pavement inspection system; road pavement quality condition; road surface; Europe; Feature extraction; Ground penetrating radar; Inspection; Roads; Sensor fusion; Surface treatment; Air-Coupled Ultrasound; Ground Penetrating Radar; Infrared Thermography; Non-destructive testing (ID#: 16-9876)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314236&isnumber=7313917
T. Janvars and P. Farkaš, “Hard Decision Decoding of Single Parity Turbo Product Code with N-Level Quantization,” Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, Prague, 2015, pp. 1-6. doi: 10.1109/TSP.2015.7296433
Abstract: In this paper we propose an iterative hard decision decoding algorithm with N-level quantization for multidimensional turbo product codes composed of single parity codes. The paper introduces the idea of adjusting the original iterative HIHO decoding algorithm to keep the same decoder complexity while approaching SISO decoder performance. Performance in an additive white Gaussian noise channel is presented for different single parity turbo product codes and various quantization levels.
Keywords: AWGN channels; decoding; parity check codes; product codes; quantisation (signal); turbo codes; N-level quantization; additive white Gaussian noise channel; iterative hard decision decoding algorithm; multidimensional turbo product codes; single parity codes; single parity turbo product code; Bit error rate; Decoding; Encoding; Iterative decoding; Product codes; Quantization (signal); HIHO decoder; N-level quantization decision; code component; iterative decoding; performance; single parity turbo product code (ID#: 16-9877)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296433&isnumber=7296206
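The single parity product code structure can be illustrated with a plain hard-decision (HIHO-style) decoder in two dimensions: a failing row check and a failing column check intersect at the erroneous bit. A toy sketch, without the N-level quantization the paper studies:

```python
def parity(bits):
    return sum(bits) % 2

def encode(grid):
    """Append an even-parity bit to every row and column of a 2-D bit grid."""
    rows = [r + [parity(r)] for r in grid]
    cols = [parity([r[c] for r in rows]) for c in range(len(rows[0]))]
    return rows + [cols]

def decode(word, iters=5):
    """Iterative hard-decision decoding: a failing row parity plus a failing
    column parity localize a single bit error at their intersection."""
    w = [row[:] for row in word]
    for _ in range(iters):
        bad_rows = [i for i, r in enumerate(w) if parity(r) != 0]
        bad_cols = [j for j in range(len(w[0])) if parity([r[j] for r in w]) != 0]
        if not bad_rows and not bad_cols:
            break
        for i in bad_rows:
            for j in bad_cols:
                w[i][j] ^= 1
        # Note: flipping every intersection is only exact for isolated errors.
    return w

codeword = encode([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
received = [row[:] for row in codeword]
received[1][2] ^= 1                      # inject a single bit error
corrected = decode(received)
```

A single injected error is corrected in one iteration; the paper's multidimensional construction extends the same parity-intersection idea to more dimensions with soft-valued (quantized) reliabilities.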
S. Ling and Q. Yunfeng, “Optimization of the Distributed K-Means Clustering Algorithm Based on Set Pair Analysis,” 2015 8th International Congress on Image and Signal Processing (CISP), Shenyang, 2015, pp. 1593-1598. doi: 10.1109/CISP.2015.7408139
Abstract: The distributed K-means clustering algorithm, which targets multidimensional data, has been widely used. However, the current distributed K-means clustering algorithm uses the Euclidean distance as the similarity measure for multidimensional data, which makes the algorithm partition the data set rather rigidly. Aiming at this problem, we present a distributed K-means clustering algorithm (SPAB-DKMC) based on the method of set pair analysis. The results of experiments on the Hadoop distributed platform show that SPAB-DKMC can reduce the number of iterations and improve the efficiency of the distributed K-means clustering algorithm.
Keywords: data handling; optimisation; parallel programming; pattern clustering; Euclidean distance; Hadoop distributed platform; SPAB-DKMC; distributed K-means cluster algorithm; iterative method; set pair analysis; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Convergence; Distributed databases; Euclidean distance; Signal processing algorithms; K-means clustering; MapReduce model; distributed algorithm; similarity degree computation (ID#: 16-9878)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7408139&isnumber=7407837
H. S. Shekhawat and S. Weiland, “A Novel Computational Scheme for Low Multi-Linear Rank Approximations of Tensors,” Control Conference (ECC), 2015 European, Linz, 2015, pp. 3003-3008. doi: 10.1109/ECC.2015.7330994
Abstract: Multi-linear functions are generally known as tensors and provide a natural object of study in multi-dimensional signal and system analysis. Tensor approximation has various applications in signal processing and system theory. In this paper, we show the local convergence of a numerical method for multi-linear rank tensor approximation that is based on Jacobi iterations.
Keywords: Jacobian matrices; approximation theory; tensors; Jacobi iteration; multidimensional signal; multilinear functional; multilinear rank approximation; tensor approximation; Approximation methods; Convergence; Eigenvalues and eigenfunctions; Jacobian matrices; Standards; Tensile stress; Jacobi iterations; Tensor decompositions; singular value decompositions (ID#: 16-9879)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330994&isnumber=7330515
J. A. Hogan and J. D. Lakey, “Wavelet Frames Generated by Bandpass Prolate Functions,” Sampling Theory and Applications (SampTA), 2015 International Conference on, Washington, DC, 2015, pp. 120-123. doi: 10.1109/SAMPTA.2015.7148863
Abstract: We refer to eigenfunctions of the kernel corresponding to truncation in a time interval followed by truncation in a frequency band as bandpass prolates (BPPs). We prove frame bounds for certain families of shifts of bandpass prolates, and we numerically construct dual frames for finite dimensional analogues. In the continuous case, the corresponding families produce wavelet frames for the space of square-integrable functions.
Keywords: eigenvalues and eigenfunctions; multidimensional systems; signal processing; wavelet transforms; BPP; bandpass prolate functions; bandpass prolates; eigenfunctions; finite dimensional analogues; frequency band; square-integrable functions; wavelet frames; Baseband; Discrete Fourier transforms; Eigenvalues and eigenfunctions; Generators; Kernel; Redundancy; Wave functions (ID#: 16-9880)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148863&isnumber=7148833
K. A. Senthildevi and E. Chandra, “Keyword Spotting System for Tamil Isolated Words Using Multidimensional MFCC and DTW Algorithm,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 0550-0554. doi: 10.1109/ICCSP.2015.7322545
Abstract: Audio mining is a speaker-independent speech processing technique and is related to data mining. Keyword spotting plays an important role in audio mining. Keyword spotting is the retrieval of all instances of a given keyword in spoken utterances. It is well suited to data mining tasks that process large amounts of speech, such as telephone routing, and to audio document indexing. Feature extraction is the first step for all speech processing tasks. This paper presents an approach for keyword spotting in isolated Tamil utterances using multidimensional Mel Frequency Cepstral Coefficient feature vectors and the DTW algorithm. The accuracy of keyword spotting is measured with 12D, 26D and 39D MFCC feature vectors for month names in the Tamil language, and the performances of the multidimensional MFCCs are compared. The code is developed in the MATLAB environment and performs the identification satisfactorily.
Keywords: data mining; feature extraction; indexing; speaker recognition; 12D MFCC feature vector; 26D MFCC feature vector; 39D MFCC feature vector; DTW algorithm; MATLAB environment; Tamil isolated word; audio document indexing; audio mining; isolated Tamil utterance; keyword spotting system; multidimensional MFCC; multidimensional Mel frequency cepstral coefficient feature vector; speaker independent speech processing technique; Accuracy; Algorithm design and analysis; Frequency conversion; Indexes; Mel frequency cepstral coefficient; Pattern matching; Audio mining; Keyword spotting; MFCC Feature vectors; Speech processing (ID#: 16-9881)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322545&isnumber=7322423
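The DTW matching step at the heart of such a keyword spotter is easy to sketch. Real systems compare sequences of MFCC frames; the example below uses scalar toy sequences, where DTW matches a template to a time-stretched copy of itself at zero cost:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.
    Elements may be scalars or feature vectors (e.g. MFCC frames)."""
    def dist(x, y):
        if isinstance(x, (int, float)):
            return abs(x - y)
        return sum((p - q) ** 2 for p, q in zip(x, y)) ** 0.5

    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1][j],     # insertion
                                                     D[i][j - 1],     # deletion
                                                     D[i - 1][j - 1]) # match
    return D[n][m]

# Toy "keyword spotting": the template matches a time-stretched copy of
# itself far better than an unrelated sequence.
template = [1.0, 2.0, 3.0, 2.0, 1.0]
stretched = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0, 1.0]
other = [5.0, 5.0, 5.0, 5.0, 5.0]
```

A spotter would slide this distance over candidate segments of the utterance and flag those below a threshold as keyword hits.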
N. Udayanga, A. Madanayake and C. Wijenayake, “FPGA-Based Network-Resonance Applebaum Adaptive Arrays for Directional Spectrum Sensing,” 2015 IEEE 58th International Midwest Symposium on Circuits and Systems (MWSCAS), Fort Collins, CO, 2015, pp. 1-4. doi: 10.1109/MWSCAS.2015.7282165
Abstract: Cognitive radio (CR) depends on the accurate detection of frequency, modulation, and direction pertaining to radio sources, in turn leading to spatio-temporal directional spectrum sensing. False detections due to high levels of noise and interference may adversely impact the CR's performance. To address this problem, a novel system architecture that increases the accuracy of directional spectrum sensing in situations with low signal-to-noise ratio (SNR) is proposed. This work combines adaptive arrays, multidimensional filter theory and cyclostationary feature detection. A linear-array Applebaum beamformer is employed in conjunction with a two-dimensional (2-D) planar-resonant beam filter to perform highly directional receive-mode wideband beamforming with improved spatial selectivity. A Xilinx Virtex-6 based field programmable gate array (FPGA) prototype of the improved beamforming front-end verifies a clock frequency of 100.9 MHz. The proposed network-resonant Applebaum array provides 6 dB, 5.5 dB and 5 dB noise suppression capability, reflected in the spectral correlation function, for input SNRs of -20 dB, -25 dB, and -30 dB, respectively, for an RF beam direction 50° from array broadside.
Keywords: array signal processing; cognitive radio; correlation methods; feature extraction; field programmable gate arrays; filtering theory; interference suppression; modulation; radio spectrum management; radiofrequency interference; signal detection; 2D planar-resonant beam filter; FPGA; RF beam direction; SNR; Xilinx Virtex-6; clock frequency; cyclostationary feature detection; directional receive mode wideband beamforming; field programmable gate array; frequency 100.9 MHz; linear array Applebaum beamformer; multidimensional filter theory; network-resonance Applebaum adaptive arrays; network-resonant Applebaum array; noise suppression capability; radio sources; signal to noise ratio; spatial selectivity; spatiotemporal directional spectrum sensing; spectral correlation function; Adaptive arrays; Feature extraction; Frequency modulation; Interference; Sensors; Signal to noise ratio (ID#: 16-9882)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282165&isnumber=7281994
Y. Zhu, A. Jiang, H. K. Kwan and K. He, “Distributed Sensor Network Localization Using Combination and Diffusion Scheme,” 2015 IEEE International Conference on Digital Signal Processing (DSP), Singapore, 2015, pp. 1156-1160. doi: 10.1109/ICDSP.2015.7252061
Abstract: A distributed sensor network localization algorithm is presented in this paper. During the localization procedure, each sensor estimates its own coordinate using a local multidimensional scaling (MDS) algorithm. In contrast to the classical MDS algorithm adopting the centralized processing, all the neighbors of each sensor are considered as anchor nodes in the local MDS algorithm. Furthermore, each sensor's coordinate could be estimated by its neighbors based on their respective knowledge. These local estimates are then collected by corresponding sensors and used in a combination step to finally determine sensors' coordinates. In this way, each sensor's knowledge could be diffused and shared in the network. Simulation results show that, compared to the classical MDS algorithm, the proposed algorithm is more robust to measurement noise. Moreover, when sensors are sparsely connected, the distributed MDS algorithm performs better than the centralized version of the MDS algorithm.
Keywords: diffusion; wireless sensor networks; anchor nodes; centralized processing; combination scheme; coordinate estimation; diffusion scheme; distributed MDS algorithm; distributed sensor network localization algorithm; local MDS algorithm; local multidimensional scaling algorithm; measurement noise; sensor coordinate determination; Ad hoc networks; Distance measurement; Electronic mail; Nickel; Noise; Optimization; Wireless sensor networks; Diffusion scheme; distributed localization; multidimensional scaling (MDS) algorithm; sensor network (ID#: 16-9883)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7252061&isnumber=7251315
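As a drastically simplified stand-in for the local coordinate-estimation step (each sensor locating itself from its neighbors acting as anchors), the sketch below linearizes the range equations against a first anchor and solves the resulting least-squares problem. The anchor positions and ranges are hypothetical and noise-free; the paper's local MDS and combination/diffusion steps are not reproduced here:

```python
def locate(anchors, dists):
    """Estimate a 2-D position from anchor coordinates and measured distances
    by subtracting the first range equation from the others, which yields a
    linear system A [x y]^T = b (exact for noise-free ranges)."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Least squares via the 2x2 normal equations, solved by Cramer's rule.
    a11 = sum(r[0] * r[0] for r in A); a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * v for r, v in zip(A, b))
    b2 = sum(r[1] * v for r, v in zip(A, b))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

# Hypothetical anchors (neighboring sensors) and exact ranges to them.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 1.0)
dists = [((true_pos[0] - x)**2 + (true_pos[1] - y)**2) ** 0.5 for x, y in anchors]
est = locate(anchors, dists)
```

In the paper, each sensor's neighbors produce such local estimates, which are then combined and diffused through the network to improve robustness to measurement noise.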
W. W. Wang and F. M. Guo, “Simulation of InAlAs/InGaAs/InAs Quantum Dots — Quantum Well Near-Infrared Detector,” 2015 International Conference on Numerical Simulation of Optoelectronic Devices (NUSOD), Taipei, 2015, pp. 101-102. doi: 10.1109/NUSOD.2015.7292842
Abstract: We have systematically studied the InAlAs/InGaAs/InAs quantum dot-quantum well structure on an InP substrate by simulation and analysis with the Crosslight Apsys package. The S (signal) / D (dark current) ratio has its best working points at 3.5 V and -1.3 V at 300 K, and the photocurrent spectrum of the quantum dot in the well can tail up to 1.70 μm. Simulation results also include the InGaAs EL spectrum, dark current and photo-responsivity.
Keywords: III-V semiconductors; aluminium compounds; dark conductivity; electroluminescence; gallium arsenide; indium compounds; infrared detectors; photoconductivity; photodetectors; quantum well devices; semiconductor quantum dots; semiconductor quantum wells; Crosslight Apsys package; InAlAs-InGaAs-InAs; InP; InP substrate; dark current; electroluminescence spectrum; photocurrent spectrum; photoresponsivity; quantum dot-quantum well near-infrared detector; temperature 300 K; voltage -1.3 V; voltage 3.5 V; Atmospheric modeling; Indium gallium arsenide; Photoconductivity; Quantum dots; Resonant tunneling devices; Signal to noise ratio (ID#: 16-9884)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292842&isnumber=7292786
H. Feng and B. Z. Guo, “Distributed Disturbance Estimator and Application to Stabilization of Multi-Dimensional Kirchhoff Equation,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 2501-2506. doi: 10.1109/CDC.2015.7402584
Abstract: In this paper, we present a linear disturbance estimator with time-varying gain to extract the real signal from a corrupted velocity signal. The approach comes from active disturbance rejection control. A variant form of the estimator can also serve as a tracking differentiator. The estimator itself is relatively independent of the control plant. The result is applied to the stabilization of a multi-dimensional Kirchhoff equation as a demonstration.
Keywords: active disturbance rejection control; parameter estimation; signal processing; ADRC; distributed disturbance estimator; linear disturbance estimator; multidimensional Kirchhoff equation; stabilization; time-varying gain; velocity signal extraction; Convergence; Distributed parameter systems; Hilbert space; Numerical simulation; Observers; Robust control; Uncertainty (ID#: 16-9885)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7402584&isnumber=7402066
S. K. Mahto, A. Choubey and S. Suman, “Linear Array Synthesis with Minimum Side Lobe Level and Null Control Using Wind Driven Optimization,” Signal Processing And Communication Engineering Systems (SPACES), 2015 International Conference on, Guntur, 2015, pp. 191-195. doi: 10.1109/SPACES.2015.7058246
Abstract: This paper presents the synthesis of an unequally spaced linear array antenna with minimum sidelobe suppression, desired beamwidth and null control using the wind driven optimization (WDO) algorithm. WDO is a nature-inspired, population-based iterative heuristic global optimization technique for multidimensional and multimodal problems. The array synthesis objective function is formulated, and the element locations are then optimized using the WDO algorithm to achieve minimum sidelobe level (SLL) suppression, the desired beamwidth and null placement in certain directions. The results of the WDO algorithm are validated by comparison with results obtained using PSO and other evolutionary algorithms reported in the literature for a linear array (N=10). The synthesis results, such as the radiation pattern and convergence graph, show that the WDO algorithm performs far better than common PSO, CLPSO and other evolutionary algorithms.
Keywords: antenna radiation patterns; evolutionary computation; iterative methods; linear antenna arrays; particle swarm optimisation; PSO; SLL suppression; WDO algorithm; convergence graph show; evolutionary algorithm; iterative heuristic global optimization algorithm; linear array synthesis; minimum sidelobe level suppression; multimodal problems; null control; particle swarm optimization; radiation pattern; wind driven optimization; Algorithm design and analysis; Arrays; Electromagnetics; Linear antenna arrays; Optimization; Particle swarm optimization; Antenna array; comprehensive learning particle swarm optimization (CLPSO); evolutionary programming; interference; linear array design; particle swarm optimization (PSO); sidelobe level suppression (SLL); wind driven optimization (WDO) (ID#: 16-9886)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058246&isnumber=7058196
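The objective such optimizers (WDO, PSO, CLPSO) minimize in array synthesis is the peak sidelobe level of the array factor as a function of element locations. A sketch of that objective for a uniform N = 10 half-wavelength array, not the paper's optimizer; the fixed main-beam exclusion region is an assumption for simplicity:

```python
import math, cmath

def array_factor_db(positions, theta_deg):
    """Normalized array factor (dB) of an isotropic linear array; element
    positions are in wavelengths, uniform excitation, broadside steering."""
    k = 2 * math.pi  # wavenumber for positions expressed in wavelengths
    u = math.cos(math.radians(theta_deg))
    af = sum(cmath.exp(1j * k * p * u) for p in positions)
    return 20 * math.log10(max(abs(af) / len(positions), 1e-12))

def peak_sidelobe(positions, step=0.1):
    """Scan the pattern over 0..180 degrees and report the highest sidelobe
    level outside a fixed main-beam region around broadside (90 degrees)."""
    pattern = [(t, array_factor_db(positions, t))
               for t in [i * step for i in range(int(180 / step) + 1)]]
    side = [v for t, v in pattern if abs(t - 90) > 15]
    return max(side)

uniform = [0.5 * n for n in range(10)]   # N = 10, half-wavelength spacing
sll = peak_sidelobe(uniform)             # about -13 dB for a uniform array
```

A WDO or PSO run would perturb the entries of `positions` (subject to spacing constraints) to push `peak_sidelobe` below the uniform array's roughly -13 dB first sidelobe.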
M. Cheng, Y. Wu and Y. Chen, “Capacity Analysis for Non-Orthogonal Overloading Transmissions Under Constellation Constraints,” Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, Nanjing, 2015, pp. 1-5. doi: 10.1109/WCSP.2015.7341294
Abstract: In this work, constellation constrained (CC) capacities of a series of non-orthogonal overloading transmission schemes are derived in AWGN channels. All these schemes follow a similar transmission structure, in which modulated symbols are spread onto a group of resource elements (REs) in a sparse manner, i.e., only some of the REs have nonzero components while the others are filled with zeros. Multiple access schemes following this structure are generally called sparse code multiple access (SCMA). In particular, a complete SCMA scheme combines multi-dimensional modulation and low density spreading (LDS) such that the symbols from the same data layer on different REs are different but dependent. If the spread symbols are the same, it is a simplified implementation of SCMA and is called LDS. Furthermore, depending on whether the numbers of non-zero components for each data layer are equal or not, there are regular LDS (LDS in short) and irregular LDS (IrLDS), respectively. The paper shows, through theoretical derivation and simulation results, that complete SCMA schemes outperform the simplified LDS/IrLDS versions. Moreover, we also show that applying phase rotation in the modulator can significantly boost the link performance of such non-orthogonal multiple access schemes.
Keywords: AWGN channels; code division multiple access; AWGN channel; SCMA scheme; constellation constraint; low density spreading; multidimensional modulation; nonorthogonal multiple access scheme link performance; nonorthogonal overloading transmission capacity analysis; resource element; sparse code multiple access scheme; 5G mobile communication; Modulation; Multiaccess communication; Nickel; Simulation; Sparse matrices; Constellation Constrained capacity; IrLDS; LDS; Non-orthogonal multiple access; SCMA (ID#: 16-9887)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7341294&isnumber=7340966
Y. Yiru, G. Yinghui and X. Jianyu, “Auto-Encoder Based Modeling of Combustion System for Circulating Fluidized Bed Boiler,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-4. doi: 10.1109/ICSPCC.2015.7338946
Abstract: Deep learning has attracted the interest of many researchers. Multidimensional algorithms require large data storage space. This paper proposes a model of the combustion system of a Circulating Fluidized Bed Boiler (CFBB) based on the auto-encoder method of deep learning. The 20-dimensional input sample set forms the input layer, and the hidden-layer units are then calculated. The data dimension is reduced through the auto-encoder, and the reduced data serve as input to the RBF network. The modeling is carried out by a Radial Basis Function (RBF) neural network. Compared with traditional methods, the auto-encoder is well suited to modeling, and the samples are greatly reduced for the subsequent work. Numerical results provided in this paper validate the proposed model and method, as well as the validity of the auto-encoder conversion strategy.
Keywords: boilers; combustion; fluidised beds; radial basis function networks; CFBB; auto-encoder based modeling; circulating fluidized bed boiler; combustion system; data dimension; radial basis function neural network; Combustion; Computational modeling; Data models; Mathematical model; Neural networks; Testing; Training; Circulating fluidized bed boiler (CFBB); auto-encoders; combustion system; modeling (ID#: 16-9888)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338946&isnumber=7338753
P. P. Vaidyanathan, “Multidimensional Ramanujan-sum Expansions on Nonseparable Lattices,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 3666-3670. doi: 10.1109/ICASSP.2015.7178655
Abstract: It is well-known that the Ramanujan sum c_q(n) has applications in the analysis of periodicity in sequences. Recently the author developed a new type of Ramanujan-sum representation especially suited for finite duration sequences x(n): this is based on decomposing x(n) into a sum of signals belonging to so-called Ramanujan subspaces S_qi. This offers an efficient way to identify periodic components using integer computations and projections, since c_q(n) is integer valued. This paper revisits multidimensional signals with periodicity on possibly nonseparable integer lattices. Multidimensional Ramanujan sums and Ramanujan subspaces are developed for this case. A Ramanujan-sum based expansion for multidimensional signals is then proposed, which is useful to identify periodic components on nonseparable lattices.
Keywords: signal representation; finite duration sequences; integer computations; multidimensional Ramanujan-sum expansions; multidimensional signals; nonseparable lattices; Dictionaries; Discrete Fourier transforms; Finite impulse response filters; Lattices; Matrix decomposition; Tensile stress; Ramanujan-sum on lattices; integer basis; periodic subspaces; periodicity lattices (ID#: 16-9889)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178655&isnumber=7177909
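The Ramanujan sum c_q(n) at the core of these expansions can be computed directly from its definition, and the result is always an integer, which is what enables the integer-arithmetic projections the abstract mentions. A small sketch (the paper's lattice generalization is not reproduced here):

```python
import math

def ramanujan_sum(q, n):
    """c_q(n) = sum over k in [1, q] with gcd(k, q) = 1 of cos(2*pi*k*n/q).
    The imaginary parts cancel, and the value is always an integer."""
    s = sum(math.cos(2 * math.pi * k * n / q)
            for k in range(1, q + 1) if math.gcd(k, q) == 1)
    return round(s)

# c_q(n) is periodic in n with period q; tabulate one period per q.
table = {q: [ramanujan_sum(q, n) for n in range(q)] for q in (1, 2, 3, 4, 6)}
```

For example, c_3(n) cycles through 2, -1, -1, so projecting a sequence onto shifts of c_3 isolates its period-3 component using only integer coefficients.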
R. Zeng, J. Wu, L. Senhadji and H. Shu, “Tensor Object Classification via Multilinear Discriminant Analysis Network,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 1971-1975. doi: 10.1109/ICASSP.2015.7178315
Abstract: This paper proposes a multilinear discriminant analysis network (MLDANet) for the recognition of multidimensional objects, known as tensor objects. The MLDANet is a variation of the linear discriminant analysis network (LDANet) and principal component analysis network (PCANet), both of which are recently proposed deep learning algorithms. The MLDANet consists of three parts: (1) the encoder learned by MLDA from tensor data; (2) feature maps obtained from the decoder; (3) the use of binary hashing and histograms for feature pooling. A learning algorithm for MLDANet is described. Evaluations on the UCF11 database indicate that the proposed MLDANet outperforms the PCANet, LDANet, MPCA+LDA, and MLDA in terms of classification of tensor objects.
Keywords: feature extraction; image classification; image coding; learning (artificial intelligence); object recognition; principal component analysis; tensors; LDANet; MLDANet; PCANet; UCF11 database; binary hashing; binary histogram; deep learning algorithms; feature pooling; features maps; linear discriminant analysis network; multidimensional object recognition; multilinear discriminant analysis network; principal component analysis network; tensor object classification; Erbium; Deep learning; tensor object classification (ID#: 16-9890)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178315&isnumber=7177909
S. Smith, N. Ravindran, N. D. Sidiropoulos and G. Karypis, “SPLATT: Efficient and Parallel Sparse Tensor-Matrix Multiplication,” Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International, Hyderabad, 2015, pp. 61-70. doi: 10.1109/IPDPS.2015.27
Abstract: Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state-of-the-art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricized tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reorderings and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.
Keywords: C language; cache storage; data structures; matrix multiplication; shared memory systems; software libraries; sparse matrices; tensors; C library; Khatri-Rao product; SPLATT; cache tiling; cache-friendly reordering; data structure; matricized tensor multiplication; multidimensional arrays; parallel sparse tensor-matrix multiplication; shared-memory parallelism; sparse tensor factorization; three-mode tensors; Algorithm design and analysis; Context; Data structures; Memory management; Parallel processing; Sparse matrices; Tensile stress; CANDECOMP; CPD; PARAFAC; Sparse tensors; parallel (ID#: 16-9891)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161496&isnumber=7161257
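The kernel SPLATT accelerates, the matricized tensor times Khatri-Rao product (MTTKRP), reduces for a sparse tensor in coordinate form to a loop over stored nonzeros. A naive reference sketch with toy values (SPLATT's own compressed data structure, reordering and parallelism are far more sophisticated):

```python
def mttkrp(nnz, B, C, dim_i, rank):
    """Mode-1 MTTKRP of a sparse 3-way tensor in coordinate (COO) form:
    M[i][r] = sum over nonzeros (i, j, k, v) of v * B[j][r] * C[k][r]."""
    M = [[0.0] * rank for _ in range(dim_i)]
    for i, j, k, v in nnz:          # iterate only over the stored nonzeros
        for r in range(rank):
            M[i][r] += v * B[j][r] * C[k][r]
    return M

# A tiny 2 x 2 x 2 tensor with two nonzeros and rank-2 factor matrices
# (all values toy, chosen so the result is easy to check by hand).
nnz = [(0, 0, 1, 2.0), (1, 1, 0, 3.0)]
B = [[1.0, 2.0], [3.0, 4.0]]
C = [[5.0, 6.0], [7.0, 8.0]]
M = mttkrp(nnz, B, C, dim_i=2, rank=2)
```

Because the work is proportional to the nonzero count times the rank, the data layout chosen for `nnz` (the focus of the paper) dominates cache behavior in large-scale CPD/PARAFAC factorizations.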
J. L. Jodra, I. Gurrutxaga and J. Muguerza, “A Study of Memory Consumption and Execution Performance of the cuFFT Library,” 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, 2015, pp. 323-327. doi: 10.1109/3PGCIC.2015.66
Abstract: The Fast Fourier Transform (FFT) is an essential primitive that has been applied in various fields of science and engineering. In this paper, we present a study of Nvidia's cuFFT library — a proprietary FFT implementation for Nvidia's Graphics Processing Units — to identify the impact that two configuration parameters have on its execution. One useful feature of the cuFFT library is that it can be used to efficiently calculate several FFTs at once. In this work we analyse the effect this feature has on memory consumption and execution time in order to find a useful trade-off. Another important feature of the library is that it supports sophisticated input and output data layouts. This feature allows, for instance, performing multidimensional FFT decompositions with no need for data transpositions. We have identified some patterns which may help to decide the parameters and values that are key to achieving increased performance in an FFT calculation. We believe that this study will help researchers who wish to use the cuFFT library to decide what parameter values are best suited to achieve higher performance in their executions, both in time and memory consumption.
Keywords: fast Fourier transforms; graphics processing units; libraries; mathematics computing; Nvidia cuFFT library; Nvidia graphics processing units; execution performance; execution time; fast Fourier transform; input data layout; memory consumption; multidimensional FFT decomposition; output data layout; Fast Fourier transforms; Graphics processing units; Layout; Libraries; Memory management; Signal processing algorithms; CUDA; cuFFT (ID#: 16-9892)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424583&isnumber=7424499
S. Javed, T. Bouwmans and S. K. Jung, “Stochastic Decomposition into Low Rank and Sparse Tensor for Robust Background Subtraction,” Imaging for Crime Prevention and Detection (ICDP-15), 6th International Conference on, London, 2015, pp. 1-6. doi: 10.1049/ic.2015.0105
Abstract: Background subtraction (BS) is a very important task for various computer vision applications. Higher-Order Robust Principal Component Analysis (HORPCA) based robust tensor recovery or decomposition offers strong potential for BS. The background (BG) sequence is modeled by an underlying low-dimensional subspace called the low-rank component, while the sparse tensor constitutes the foreground (FG) mask. However, traditional tensor-based decomposition methods are sensitive to outliers, and their batch optimization methods must process high-dimensional data at once. As a result, huge memory usage and computational issues arise in earlier approaches, which is not desirable for real-time systems. In order to tackle these challenges, we apply the idea of stochastic optimization on tensors for robust low-rank and sparse error separation. Only one sample per time instance is processed from each unfolding matrix of the tensor in our scheme to separate the low-rank and sparse components and update the low-dimensional basis when a new sample is revealed. This iterative multi-dimensional tensor data optimization scheme for decomposition is independent of the number of samples and hence reduces the memory and computational complexities. Experimental evaluations on both synthetic and real-world datasets demonstrate the robustness and comparative performance of our approach against its batch counterpart, without sacrificing online processing.
Keywords: computational complexity; computer vision; image sequences; iterative methods; optimisation; principal component analysis; stochastic processes; tensors; video signal processing; BG sequence; HORPCA based robust tensor decomposition; HORPCA based robust tensor recovery; batch optimization methods; computational complexity reduction; computer vision applications; foreground mask; high dimensional data; higher-order robust principal component analysis; iterative multidimensional tensor data optimization scheme; low rank tensor; memory complexity reduction; robust background subtraction; robust low-rank error separation; sparse error separation; sparse tensor; stochastic decomposition; stochastic optimization; video analysis; Background/Foreground Separation; Low-rank tensor; Stochastic optimization; Tensor decomposition (ID#: 16-9893)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7317993&isnumber=7244054
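The per-sample separation the abstract describes, split each incoming frame into a low-rank background and a sparse foreground, then take one stochastic step on the basis, can be sketched in a few lines of NumPy. The least-squares fit, soft-threshold parameter, and SGD update rule below are illustrative choices, not the paper's exact stochastic optimization on tensor unfoldings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 60, 3                               # pixels per frame, subspace rank
U_true = rng.standard_normal((d, r))       # "true" background subspace
U = rng.standard_normal((d, r))            # current basis estimate

def process_sample(x, U, lam=1.0, lr=0.02):
    """Separate one frame into low-rank background and sparse foreground."""
    v, *_ = np.linalg.lstsq(U, x, rcond=None)              # coefficients in basis
    resid = x - U @ v
    s = np.sign(resid) * np.maximum(np.abs(resid) - lam, 0.0)  # soft-threshold -> sparse FG
    bg = U @ v                                             # low-rank background estimate
    U = U + lr * np.outer(resid - s, v)                    # one SGD step on the basis
    return U, bg, s

for t in range(300):
    x = U_true @ rng.standard_normal(r)                    # background-only frame
    if t % 20 == 0:                                        # occasional sparse foreground
        x[rng.choice(d, 5, replace=False)] += 8.0
    U, bg, fg = process_sample(x, U)
```

Because only one sample and one rank-`r` basis are held at a time, memory is independent of the number of frames, which is the property the abstract emphasizes.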
Sangeetha P., Karthik M. and Kalavathi Devi T., “VLSI Architectures for the 4-Tap and 6-Tap 2-D Daubechies Wavelet Filters Using Pipelined Direct Mapping Method,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-6. doi: 10.1109/ICIIECS.2015.7193010
Abstract: This paper presents a simple design of a multilevel two-dimensional (2-D) Daubechies wavelet transform, using a pipelined direct mapping method, for image compression. The Daubechies 4-tap (Daub4) filter is selected with the pipelined direct mapping technique. Owing to the separability of the multi-dimensional Daubechies transform, the architecture has been implemented as a cascade of two N-point one-dimensional (1-D) Daub4 and Daub6 transforms. The 2-D discrete wavelet transform lifting-scheme algorithm has been implemented in MATLAB for both the forward Daubechies wavelet transform (FDWT) and inverse Daubechies wavelet transform (IDWT) modules, to determine the peak signal-to-noise ratio (PSNR) and the correlation of the retrieved image.
Keywords: VLSI; data compression; digital signal processing chips; discrete wavelet transforms; image coding; image filtering; image retrieval; medical image processing; pipeline arithmetic; 2D Daubechies wavelet filters; 2D Daubechies wavelet transform; 2D discrete wavelet transform lifting scheme algorithm; FDWT; IDWT; MATLAB program; PSNR; VLSI architectures; forward Daubechies wavelet transform; image compression; inverse Daubechies wavelet transform; multidimensional Daubechies; peak signal to noise ratio; pipelined direct mapping method; Biomedical imaging; Computer architecture; Conferences; Discrete wavelet transforms; Image coding; Daubechies wavelet filter; MATLAB (ID#: 16-9894)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193010&isnumber=7192777
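The two ideas this entry leans on, separability (a 2-D wavelet transform as a cascade of 1-D transforms) and PSNR as the quality metric, are easy to illustrate. The sketch below uses the simpler orthonormal Haar filter pair in place of the paper's Daub4/Daub6 filters, so it shows the structure of the separable transform, not the exact filters or the VLSI pipeline.

```python
import numpy as np

def haar_1d(x):
    """One level of an orthonormal 1-D Haar transform (length must be even)."""
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return np.concatenate([approx, detail])

def dwt2(img):
    """Separable 2-D transform: 1-D transform on rows, then on columns."""
    rows = np.apply_along_axis(haar_1d, 1, img)
    return np.apply_along_axis(haar_1d, 0, rows)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

img = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dwt2(img)
# An orthonormal transform preserves energy (Parseval), a quick sanity check:
assert np.isclose((img ** 2).sum(), (coeffs ** 2).sum())
```

The row-then-column structure is exactly what allows a hardware cascade of two 1-D filter banks.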
C. Anagnostopoulos and P. Triantafillou, “Learning Set Cardinality in Distance Nearest Neighbours,” Data Mining (ICDM), 2015 IEEE International Conference on, Atlantic City, NJ, 2015, pp. 691-696. doi: 10.1109/ICDM.2015.17
Abstract: Distance-based nearest neighbours (dNN) queries and aggregations over their answer sets are important for exploratory data analytics. We focus on the Set Cardinality Prediction (SCP) problem for the answer set of dNN queries. We contribute a novel, query-driven perspective on this problem, whereby answers to previous dNN queries are used to learn the answers to incoming dNN queries. The proposed machine learning (ML) model learns the dynamically changing query-pattern space and can thus focus only on the portion of the data being queried. The model enjoys several comparative advantages in prediction error and space requirements, and it remains applicable in environments with sensitive data or where data accesses are too costly to execute, settings in which the data-centric state of the art is inapplicable or itself too costly. A comprehensive performance evaluation of our model is conducted against acclaimed methods (i.e., different self-tuning histograms, sampling, multidimensional histograms, and the power method).
Keywords: data analysis; learning (artificial intelligence); query processing; ML model; SCP problem; dNN query; data access; distance-based nearest neighbour query; exploratory data analytics; learning set cardinality; machine learning model; query pattern space; set cardinality prediction problem; Adaptation models; Estimation; Histograms; Prototypes; Quantization (signal); Solid modeling; Yttrium; Query-driven set cardinality prediction; distance nearest neighbors analytics; hetero-associative competitive learning; local regression vector quantization (ID#: 16-9895)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373374&isnumber=7373293
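The query-driven idea, predict the answer-set size of a new dNN query from logged (query, cardinality) pairs without touching the data, can be sketched with a plain nearest-neighbour regressor over past queries. The query features, dataset, and choice of k below are invented for illustration and are much simpler than the paper's learning model.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=(5000, 2))     # the (possibly inaccessible) dataset

def true_cardinality(center, radius):
    """Exact answer-set size of a distance-NN query (used as ground truth only)."""
    return int(np.sum(np.linalg.norm(data - center, axis=1) <= radius))

# Log of previously executed queries: (center_x, center_y, radius) -> cardinality
log_feats = np.column_stack([rng.uniform(0.25, 0.75, size=(300, 2)),
                             rng.uniform(0.05, 0.2, size=300)])
log_cards = np.array([true_cardinality(f[:2], f[2]) for f in log_feats])

def predict_cardinality(center, radius, k=5):
    """Estimate a new query's cardinality from the k most similar past queries."""
    q = np.append(center, radius)
    dist = np.linalg.norm(log_feats - q, axis=1)
    return float(np.mean(log_cards[np.argsort(dist)[:k]]))

pred = predict_cardinality(np.array([0.5, 0.5]), 0.1)
```

The predictor never reads `data` at prediction time, which is why a query-driven model stays usable when data accesses are sensitive or expensive.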
U. Arora and N. Sukavanam, “Approximate Controllability of a Second Order Delayed Semilinear Stochastic System with Nonlocal Conditions,” Signal Processing, Computing and Control (ISPCC), 2015 International Conference on, Waknaghat, 2015, pp. 230-235. doi: 10.1109/ISPCC.2015.7375031
Abstract: In this paper, the approximate controllability of a second order delayed semilinear stochastic system with nonlocal conditions has been discussed. The control function for this system has been established with the help of infinite dimensional controllability operator. Using this control function, the sufficient conditions for the approximate controllability of the proposed system have been obtained using Sadovskii's Fixed Point theorem.
Keywords: controllability; delay systems; multidimensional systems; stochastic systems; Sadovskii fixed point theorem; approximate controllability; infinite dimensional controllability operator; nonlocal conditions; second order delayed semilinear stochastic system; sufficient conditions; Aerospace electronics; Controllability; Generators; Hilbert space; Stochastic systems; Yttrium; Approximate Controllability; Delayed System; Sadovskii's Fixed Point Theorem; Semilinear Systems; Stochastic Control System (ID#: 16-9896)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375031&isnumber=7374979
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
![]() |
Resiliency 2015 |
Resiliency is one of the five hard problems of the Science of Security. Research work in this area has been growing. The work cited here was presented in 2015.
J. Rajamäki, “Cyber Security Education as a Tool for Trust-Building in Cross-Border Public Protection and Disaster Relief Operations,” Global Engineering Education Conference (EDUCON), 2015 IEEE, Tallinn, 2015, pp. 371-378. doi: 10.1109/EDUCON.2015.7095999
Abstract: Public protection and disaster relief (PPDR) operations are increasingly dependent on networks and data processing infrastructure. Incidents such as natural hazards and organized crime do not respect national boundaries. As a consequence, there is an increased need for European collaboration and information sharing related to public safety communications (PSC) and information exchange technologies and procedures, and trust is the keyword here. According to our studies, “trust-building” could be seen as the most important issue in multi-agency PPDR cooperation. Cyber security should be seen as a key enabler for the development and maintenance of trust in the digital world. It is important to complement the currently dominating “cyber security as a barrier” perspective by emphasizing the role of “cyber security as an enabler” of new business, interactions, and services, and by recognizing that trust is a positive driver for growth. Public safety infrastructure is becoming more exposed to unpredictable cyber risks. Ever-present computing means that PPDR agencies do not know when they are using dependable devices or services, and chain reactions of unpredictable risks arise. If cyber security risks are not prepared for, PPDR agencies, like all organizations, will face severe disasters over time. Investing in systems that improve confidence and trust can significantly reduce costs and improve the speed of interaction. From this perspective, cyber security should be seen as a key enabler for the development and maintenance of trust in the digital world, with the following themes: security technology, situation awareness, security management, and resiliency. Education is the main driver for complementing the currently dominating “cyber security as a barrier” perspective by emphasizing the role of “cyber security as an enabler”.
Keywords: computer aided instruction; computer science education; emergency management; trusted computing; PPDR operation; PSC; cross-border public protection operation; cyber security education; cybersecurity-as-a-barrier perspective; cybersecurity-as-an-enabler perspective; disaster relief operation; information exchange; multiagency PPDR cooperation; public safety communications; resiliency theme; security management theme; security technology theme; situation awareness theme; trust building; Computer security; Education; Europe; Organizations; Safety; Standards organizations; cyber security; education; public protection and disaster relief; trust-building (ID#: 16-9574)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095999&isnumber=7095933
T. Aoyama, H. Naruoka, I. Koshijima, W. Machii and K. Seki, “Studying Resilient Cyber Incident Management from Large-Scale Cyber Security Training,” Control Conference (ASCC), 2015 10th Asian, Kota Kinabalu, 2015, pp. 1-4. doi: 10.1109/ASCC.2015.7244713
Abstract: The study of the human contribution to cyber resilience is unexplored terrain in the field of critical infrastructure security. So far, cyber resilience has been discussed as an extension of IT security research, and the current discussion focuses on technical measures and policy preparation to mitigate cyber security risks. This human-factor study discusses a methodology for achieving high organizational resiliency through better management. A field observation was conducted during large-scale hands-on cyber security training at ENCS (European Network for Cyber Security, The Hague, NL) to determine management challenges that could occur in a real-world cyber incident. In this paper, the possibility of extending the resilience-engineering framework to assess an organization's behavior in cyber crisis management is discussed.
Keywords: human factors; risk management; security of data; ENCS; European Network for Cyber Security; NL; The Hague; cyber crisis management; cyber incident management; cyber resilience; cyber security risk management human-factor; large-scale cyber security hands-on training; resilience-engineering framework; Computer security; Games; Monitoring; Organizations; Resilience; Training; critical infrastructure; cyber security; management; resilience engineering (ID#: 16-9575)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7244713&isnumber=7244373
T. Hayajneh, T. Zhang and B. J. Mohd, “Security Issues in WSNs with Cooperative Communication,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 451-456. doi: 10.1109/CSCloud.2015.78
Abstract: Cooperative communication is a technique that helps to improve communication performance in wireless networks by allowing nodes to rely on their neighbors when transmitting packets, providing some diversity gain. Wireless sensor networks (WSNs) can benefit from cooperative communication too, as other researchers in the field have shown. In this paper we consider security issues in WSNs with cooperative communication. We study such issues at each of the main protocol layers: physical, data link, network, services (topology), and application. For each layer, we clarify the main task, enumerate the main attacks and threats, specify the primary security approaches and techniques, if any, and discuss possible new attacks and problems that may arise with the use of cooperative communication. Further, we show for some attacks (e.g., jamming, packet dropping, and wormhole) that using cooperative communication improves network resiliency and reliability. This paper builds the foundations and clarifies the specifications for the security protocol needed in WSNs with cooperative communication, one that can enhance their performance and resiliency against cyber-attacks.
Keywords: cooperative communication; protocols; telecommunication network reliability; telecommunication security; wireless sensor networks; WSN; application layer; cyber-attack; data link layer; network layer; physical layer; security issue; service layer; wireless sensor network reliability; Cooperative communication; Jamming; Protocols; Relays; Security; Sensors; Wireless sensor networks; Cooperative Communication; Security attacks; resiliency (ID#: 16-9576)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371521&isnumber=7371418
L. Kypus, L. Vojtech and J. Hrad, “Security of ONS Service for Applications of the Internet of Things and Their Pilot Implementation in Academic Network,” Carpathian Control Conference (ICCC), 2015 16th International, Szilvasvarad, 2015, pp. 271-276. doi: 10.1109/CarpathianCC.2015.7145087
Abstract: The aim of the Object Name Service (ONS) project was to find a robust and stable way of automating communication, using name and directory services to support the radio-frequency identification (RFID) ecosystem, chiefly in a way that can leverage open-source, standardized services and be secured. This work contributed to the presentation of new capabilities for RFID services and heterogeneous Internet of Things (IoT) environments. Transferred data volumes associated with every IP and non-IP discoverable object, such as RFID-tagged objects and sensors, are growing, as is the need to bridge the remaining communication-compatibility issues between these two independent worlds. RFID and IoT ecosystems require careful implementation of security approaches and methods; significant operational risks remain due to the nature of the content. One reason for past failures may have been the lack of security as an integral part of the design of each product that builds ONS systems. Although we focus mainly on availability and confidentiality concerns in this paper, some areas remain to be researched. We identify the impact of hardening through metrics evaluating the operational status, resiliency, responsiveness, and performance of the managed ONS solution design. Designing a redundant and hardened testing environment gave us visibility into the assurance of internal communication security and showed the behavior of the components under load in such a complex information service, with respect to the overall quality of the delivered ONS service.
Keywords: Internet of Things; radiofrequency identification; telecommunication security; ONS service; RFID; academic network; object name services; radio-frequency identification; Operating systems; Protocols; Radiofrequency identification; Security; Servers; Standards; Virtual private networks; IPv6; ONS; security hardening (ID#: 16-9577)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145087&isnumber=7145033
A. Fressancourt and M. Gagnaire, “A SDN-Based Network Architecture for Cloud Resiliency,” Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, Las Vegas, NV, 2015, pp. 479-484. doi: 10.1109/CCNC.2015.7158022
Abstract: In spite of their commercial success, Cloud services are still subject to two major weak points: data security and infrastructure resiliency. In this paper, we propose an original Cloud network architecture aimed at improving the resiliency of Cloud network infrastructures interconnecting remote data centers. The main originality of this architecture consists in exploiting the principles of Software Defined Networking (SDN) in order to adapt the rerouting strategies in case of network failure according to a set of requirements. In existing Cloud network configurations, network recovery after a fiber cut is achieved by means of redundant bandwidth capacity preplanned through backup links. Such an approach has two drawbacks. First, at large scale it induces a non-negligible additional cost for Cloud Service Providers (CSPs). Second, the pre-computed rerouting strategy may not suit the specific quality-of-service requirements of the various data flows that were transiting on the failing link. To avoid these two drawbacks, we propose that CSPs deploy their services in several redundant data centers and make sure that those data centers are properly interconnected via the Internet. For that purpose, we propose that a CSP use the services of multiple (typically two) Internet Service Providers to interconnect its data centers via the Internet. In practice, we propose that a set of “routing inflection points” form an overlay network exploiting a specific routing strategy, coordinated by an SDN-based centralized controller. Thus, a CSP may choose the network path between two data centers best suited to the underlying traffic QoS requirement. The proposed approach gives the CSP a certain independence from its network providers. In this paper, we present this new Cloud architecture. We outline how our approach mixes concepts taken from both SDN and Segment Routing. Unlike the protection techniques used by existing CSPs, we explain how this approach can be used to implement a fast rerouting strategy for inter-data-center data exchanges.
Keywords: cloud computing; computer network security; quality of service; software defined networking; telecommunication network routing; telecommunication traffic; CSP; Internet service providers; SDN based network architecture; cloud network architecture; cloud network infrastructures; cloud networks configurations; cloud resiliency; cloud service providers; cloud services; data centers; data security; fiber cut; network failure; remote data centers; rerouting strategy; routing strategy; traffic QoS requirement; Computer architecture; Internet; Multiprotocol label switching; Peer-to-peer computing; Routing; Routing protocols; Servers; Overlay; Resiliency; Segment Routing; Software-Defined Networks (ID#: 16-9578)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158022&isnumber=7157933
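The core mechanism this entry describes, a centralized controller that recomputes a path over the surviving links instead of falling back on a single pre-planned backup, can be sketched with a toy topology and Dijkstra's algorithm. The graph, link costs, and node names below are invented for illustration; a real SDN controller would also weigh per-flow QoS requirements.

```python
import heapq

# Latency-weighted links between data centers / routing inflection points
links = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 2, ("C", "D"): 2}

def shortest_path(links, src, dst):
    """Dijkstra over an undirected weighted graph given as {(u, v): cost}."""
    adj = {}
    for (u, v), w in links.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

primary = shortest_path(links, "A", "D")                      # (2, ['A', 'B', 'D'])
failed = {k: w for k, w in links.items() if k != ("B", "D")}  # link B-D is cut
rerouted = shortest_path(failed, "A", "D")                    # (4, ['A', 'C', 'D'])
```

Because the controller recomputes over whatever links survive, no bandwidth has to be pre-reserved on a dedicated backup path, which is the cost argument the abstract makes.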
J. D. Ansilla, N. Vasudevan, J. JayachandraBensam and J. D. Anunciya, “Data Security in Smart Grid with Hardware Implementation Against DoS Attacks,” Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, Nagercoil, 2015, pp. 1-7. doi: 10.1109/ICCPCT.2015.7159274
Abstract: The smart grid is being cultivated briskly and ingeniously, while attacks on it breed and sow damage on a massive scale. This state of affairs makes security a sapling that must be continually irrigated with research and analysis. In this work, cyber security is endowed with resiliency against the SYN-flooding-induced denial-of-service (DoS) attack. The proposed secure web server algorithm, embedded in the LPC1768 processor, ensures that smart resources are protected from the attack.
Keywords: Internet; computer network security; power engineering computing; smart power grids; DoS attacks; LPC1768 processor; SYN flooding; cybersecurity; data security; denial of service attack; secure Web server algorithm; smart grid; smart resources; Computer crime; Computers; Floods; IP networks; Protocols; Servers; ARM Processor; DoS; Hardware Implementation; SYNflooding; Smart Grid (ID#: 16-9579)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159274&isnumber=7159156
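The abstract does not detail the server algorithm, but a common software defence against SYN flooding caps half-open connections globally and per source address. The limits, interface, and class name below are assumptions for illustration only, not the paper's LPC1768 implementation.

```python
from collections import Counter

class SynGuard:
    """Track half-open TCP connections and drop SYNs beyond simple limits."""

    def __init__(self, max_half_open=100, per_ip_limit=10):
        self.max_half_open = max_half_open
        self.per_ip_limit = per_ip_limit
        self.half_open = Counter()          # source IP -> pending SYNs

    def on_syn(self, ip):
        """Return True to accept the SYN, False to drop it."""
        total = sum(self.half_open.values())
        if total >= self.max_half_open or self.half_open[ip] >= self.per_ip_limit:
            return False
        self.half_open[ip] += 1
        return True

    def on_ack(self, ip):
        """Handshake completed (or timed out): release the slot."""
        if self.half_open[ip] > 0:
            self.half_open[ip] -= 1

guard = SynGuard()
flood = [guard.on_syn("10.0.0.66") for _ in range(50)]   # attacker floods SYNs
legit = guard.on_syn("192.168.1.5")                      # legitimate client
```

The per-IP cap keeps a single flooding source from exhausting the half-open table, so the legitimate client's SYN is still accepted.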
A. S. Prasad, D. Koll and X. Fu, “On the Security of Software-Defined Networks,” Software Defined Networks (EWSDN), 2015 Fourth European Workshop on, Bilbao, 2015, pp. 105-106. doi: 10.1109/EWSDN.2015.70
Abstract: To achieve a widespread deployment of Software-Defined Networks (SDNs) these networks need to be secure against internal and external misuse. Yet, currently, compromised end hosts, switches, and controllers can be easily exploited to launch a variety of attacks on the network itself. In this work we discuss several attack scenarios, which — although they have a serious impact on SDN — have not been thoroughly addressed by the research community so far. We evaluate currently existing solutions against these scenarios and formulate the need for more mature defensive means.
Keywords: computer network security; software defined networking; SDNs; software-defined network security; Computer crime; Fabrication; Network topology; Switches; Topology; Resiliency; SDN; Security; Software-defined Networks (ID#: 16-9580)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313625&isnumber=7313575
F. Machida, M. Fujiwaka, S. Koizumi and D. Kimura, “Optimizing Resiliency of Distributed Video Surveillance System for Safer City,” Software Reliability Engineering Workshops (ISSREW), 2015 IEEE International Symposium on, Gaithersburg, MD, 2015, pp. 17-20. doi: 10.1109/ISSREW.2015.7392029
Abstract: Real-time video surveillance is becoming an important function for keeping cities safe by monitoring places that attract crowds. The system needs to be resilient so that it can detect abnormal events in a timely manner and swiftly deliver alerts to security agencies whenever events occur. In this paper, we present an architecture for a resilient video surveillance system that keeps operating even when the video-analysis workload surges due to changes in the monitored physical area. The proposed architecture is based on a distributed computing platform that can allocate virtual machines dynamically in response to local demand increases. To estimate the amount of resources needed for video analysis, we propose a socio-ICT model consisting of a system-dynamics model and a queueing model. A simulation study on the model demonstrates how our platform adapts to changes in the target physical area and improves resiliency.
Keywords: distributed processing; queueing theory; video signal processing; video surveillance; virtual machines; distributed computing platform; distributed video surveillance system; information and communications technology; queueing model; resilient video surveillance system; socio-ICT model; system dynamics; video analysis; virtual machines; Cities and towns; Computational modeling; Face; Streaming media; System dynamics; Video surveillance; distributed system; resiliency (ID#: 16-9581)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7392029&isnumber=7392022
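The resource-estimation step, using a queueing model to decide how many analysis VMs a demand surge requires, can be sketched with a standard M/M/c (Erlang C) calculation. The arrival and service rates are made-up numbers, and the paper's socio-ICT model combines this kind of queueing analysis with system dynamics, so this is only the queueing half.

```python
from math import factorial

def erlang_c(c, a):
    """P(wait) in an M/M/c queue with offered load a = lambda/mu (requires a < c)."""
    rho = a / c
    top = (a ** c / factorial(c)) / (1.0 - rho)
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def vms_needed(arrival_rate, service_rate, max_wait_prob=0.2):
    """Smallest VM count keeping the probability of queueing under the target."""
    a = arrival_rate / service_rate
    c = int(a) + 1                      # minimum server count for a stable queue
    while erlang_c(c, a) > max_wait_prob:
        c += 1
    return c

calm = vms_needed(arrival_rate=2.0, service_rate=1.0)   # normal street traffic
surge = vms_needed(arrival_rate=8.0, service_rate=1.0)  # crowd event
```

Running the sizing rule on both loads shows the nonlinear provisioning jump a crowd event forces, which is why the platform allocates VMs dynamically rather than statically.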
O. Popescu and D. C. Popescu, “Sub-Band Precoded OFDM for Enhanced Physical Layer Resiliency in Wireless Communication Systems,” Communications and Networking (BlackSeaCom), 2015 IEEE International Black Sea Conference on, Constanta, 2015, pp. 1-4. doi: 10.1109/BlackSeaCom.2015.7185074
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) is the preferred modulation scheme for current and future broadband wireless systems, and has been incorporated in many standards. In spite of its many benefits, which include robust performance in noise, fading channels and uncorrelated interference, OFDM systems have poor performance in the presence of jamming. In this paper we study the use of sub-band precoding to protect OFDM systems against jamming attacks and enhance their physical layer security.
Keywords: OFDM modulation; fading channels; radiofrequency interference; telecommunication security; broadband wireless systems; fading channels; jamming attacks; modulation scheme; orthogonal frequency division multiplexing; physical layer resiliency; physical layer security; subband precoded OFDM; uncorrelated interference; wireless communication systems; Bandwidth; Bit error rate; Discrete Fourier transforms; Jamming; OFDM; Physical layer; Wireless communication; jamming environment; precoding (ID#: 16-9582)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7185074&isnumber=7185069
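The transmit chain behind sub-band precoding, map symbols to subcarriers, apply a unitary precoder per sub-band, then OFDM-modulate with an IFFT, can be sketched end to end in NumPy. The normalized DFT matrix used as the precoder and the sub-band size are illustrative choices, not the paper's design, and the channel here is ideal and noiseless.

```python
import numpy as np

rng = np.random.default_rng(2)
N, B = 64, 8                                  # subcarriers, sub-band size
bits = rng.integers(0, 2, size=2 * N)
syms = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK

W = np.fft.fft(np.eye(B)) / np.sqrt(B)        # unitary per-sub-band precoder

# Transmitter: precode each sub-band, then OFDM-modulate with an IFFT
precoded = np.concatenate([W @ syms[i:i + B] for i in range(0, N, B)])
tx = np.fft.ifft(precoded) * np.sqrt(N)

# Receiver (ideal noiseless channel): FFT, then undo the precoding
rx = np.fft.fft(tx) / np.sqrt(N)
decoded = np.concatenate([W.conj().T @ rx[i:i + B] for i in range(0, N, B)])
```

Spreading each symbol across a whole sub-band is what buys the resiliency: a jammer hitting one subcarrier perturbs every symbol in that band a little, rather than destroying one symbol completely.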
R. Garcia and C. E. Chow, “Identity Considerations for Public Sector Hybrid Cloud Computing Solutions,” Computer Communication and Informatics (ICCCI), 2015 International Conference on, Coimbatore, 2015, pp. 1-8. doi: 10.1109/ICCCI.2015.7218091
Abstract: Cloud computing is a relatively new paradigm that provides increased flexibility and resiliency in information technology service delivery. The inherent elasticity and cost savings of public cloud computing have attracted many in the private and ever-cautious public sectors. Presuming a hybrid cloud offering that combines public and private cloud solutions, the public sector (government) faces a daunting dilemma in balancing resilience and security. Key to the public sector requirement is continuity of mission-critical and mission-support operations. When national security, disaster response, defense, or homeland security missions are involved, the criticality of service availability is elevated. Delays in service provisioning and collaboration are often a result of identity management issues within participating organizations. This paper details public sector cloud security and usability needs, draws on previous definitions and models for cloud security, and provides an identity management architectural model for use in some of the most critical public sector mission sets.
Keywords: cloud computing; security of data; combined public-private cloud solution; cost savings; defense security missions; disaster response; homeland security missions; identity management; identity management architectural model; information technology service delivery; mission critical continuity; mission critical operations; mission support operations; national security; public sector cloud security; public sector hybrid cloud computing solutions; resilience balancing; security balancing; service availability criticality; service provisioning; Cloud computing; Computational modeling; Government; Mission critical systems; Mobile communication; Security; availability; confidentiality; hybrid cloud; information security management systems; integrity; public sector; security (ID#: 16-9583)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218091&isnumber=7218046
T. Xu and M. Potkonjak, “Digital PUF Using Intentional Faults,” Quality Electronic Design (ISQED), 2015 16th International Symposium on, Santa Clara, CA, 2015, pp. 448-451. doi: 10.1109/ISQED.2015.7085467
Abstract: Digital systems have numerous advantages over analog systems, including robustness and resiliency against operational variations. However, one of the most popular hardware security primitives, the physical unclonable function (PUF), has been an analog component. In this paper, we propose the concept of a digital PUF, whose core idea is to intentionally use high-risk synthesis to induce defects in circuits. Owing to process variation, each manufactured digital implementation is unique with high probability. Compared with traditional delay-based PUFs, the induced defects are permanent, which guarantees that the fault-based digital PUF is resilient against operational variations. Meanwhile, our proposed design takes advantage of the digital functionality of the circuits and is thus easy to integrate with digital logic. We experiment on a standard array-multiplier module. Our standard security analysis indicates that the digital PUF has ideal security properties.
Keywords: copy protection; integrated circuit design; logic design; fault based digital PUF; hardware security primitive; high risk synthesis; induced circuit defect; intentional fault; operational variation; physical unclonable function; process variation; standard array multiplier module; Adders; Bridge circuits; Circuit faults; Hamming distance; Logic gates; Security; Wires; Intentional Faults; Physical Unclonable Function (PUF); Security; Testing (ID#: 16-9584)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085467&isnumber=7085355
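The central idea, that permanently induced faults make each manufactured copy of a digital circuit compute a slightly different but perfectly repeatable function, can be mimicked in software. The stuck-at fault model, operand width, and challenge set below are illustrative assumptions, not the paper's high-risk synthesis flow.

```python
import random

WIDTH = 4  # operand width; fault positions cover the 2*WIDTH product bits

def make_chip(seed, n_faults=3):
    """Model one manufactured multiplier with seed-specific stuck-at faults."""
    chip_rng = random.Random(seed)
    stuck = {chip_rng.randrange(2 * WIDTH): chip_rng.randrange(2)
             for _ in range(n_faults)}
    def multiply(a, b):
        p = a * b
        for bit, val in stuck.items():      # force faulty bits to their stuck value
            p = (p & ~(1 << bit)) | (val << bit)
        return p
    return multiply

challenges = [(a, b) for a in (1, 3, 7, 15) for b in (1, 6, 11, 15)]

def response(chip):
    return [chip(a, b) for (a, b) in challenges]

chip1, chip2 = make_chip(seed=1), make_chip(seed=2)
ideal = [a * b for (a, b) in challenges]
```

A real fault-based PUF derives its response bits from which of the synthesized instance's defects fire; the point here is only that each seeded "chip" deviates from the fault-free multiplier yet always answers the same challenge the same way.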
J. Rajamäki and R. Pirinen, “Critical Infrastructure Protection: Towards a Design Theory for Resilient Software-Intensive Systems,” Intelligence and Security Informatics Conference (EISIC), 2015 European, Manchester, 2015, pp. 184-184. doi: 10.1109/EISIC.2015.32
Abstract: Modern societies are highly dependent on the many critical software-intensive information systems that support them. Designing security for these information systems has been particularly challenging because of the diverse technologies that make up these systems. Revolutionary advances in hardware, networking, information and human interface technologies require new ways of thinking about how these resilient software-intensive systems (SIS) are conceptualized, built and evaluated. Our research in this area develops a design theory (DT) for resilient SISs so that communities developing and operating different information technologies can share knowledge and best practices using a common frame of reference.
Keywords: critical infrastructures; data protection; safety-critical software; security of data; software fault tolerance; DT; SIS; critical infrastructure protection; critical software-intensive information system; design theory; software-intensive system resiliency; Computer security; Information security; Privacy; Software; Systematics; cyber security; design theory; software-intensive systems; trust-building (ID#: 16-9585)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7379753&isnumber=7379706
M. S. Bruno, “Resilience Engineering: A Report on the Needs of the Stakeholder Communities and the Prospects for Responsive Educational Programs,” Interactive Collaborative Learning (ICL), 2015 International Conference on, Florence, 2015, pp. 699-702. doi: 10.1109/ICL.2015.7318113
Abstract: Recent natural and man-made disruptions in major urban areas around the globe have over the last decade spurred widespread interest in the improvement of community resilience. We here define “community” in general terms ranging from local neighborhoods to a nation (and beyond). Resilience as articulated in this manner is not easily quantified, standardized, measured, and modeled. Success will require the integration of seemingly disparate disciplines (e.g., behavioral psychology and software engineering), the involvement of widely diverse stakeholders (e.g., power authorities and the insurance industry), and perhaps even the invention of new fields of study (e.g., measurement science). Given the vast scope of this domain, and the numerous activities in the area already planned or underway around the world, it is essential that a careful assessment be conducted with the aim of identifying the applications of Resilience Engineering; the gaps in our ability to understand, communicate and improve community resilience; and the potential need for — and design of — an academic program aimed at the development of resilience professionals. Stevens Institute of Technology has since 1992 been working with Federal, State and local government officials and industry representatives to improve the resiliency of coastal communities to threats posed by natural hazards including tropical and extra-tropical storms, and flooding. These activities have included the development and delivery of a number of different coastal hazards educational programs, some tailored to engineers and planners, others to government and industry decision-makers and policy makers, and still others to the general public. Over the last 8 years, in large part due to Stevens' leadership of the National Center for Maritime Security, our work in hazard mitigation and resiliency has evolved into an All Hazards approach that includes threats posed by both natural hazards and man-made events. 
Recently, in partnership with Lloyd's Register Foundation (LRF), Stevens hosted an international workshop to examine the role of Resilience Engineering in improving the resilience of communities and engineered systems in the range of sectors of interest to the LRF. The workshop included the participation of experts from around the world representing a diverse array of disciplines relevant to resiliency. A report summarizing the workshop findings was prepared that identified the research and education areas most needed to effectively enhance community resilience. In the present paper, we are taking this examination to the logical next step: does Resilience Engineering merit consideration as a new field of study? If yes, at what level and in what format should it be delivered in order to ensure the essential multi-disciplinary treatment of this complex topic? We examine a few of the emerging programs elsewhere around the world to provide a framework of possible approaches. Important international initiatives such as the Rockefeller Foundation's 100 Resilient Cities are opening doors to the creation of new government positions (e.g., Chief Resilience Officer); the same development has been occurring in private industry for several years. Our experience at Stevens suggests strongly that any such academic program must be tied to research, preferably research that is applications-oriented and impactful. Strong collaboration with government and private sector stakeholders is essential to success.
Keywords: education; social sciences; National Center for Maritime Security; Stevens Institute of Technology; all hazards approach; coastal communities; community resilience; hazard mitigation; hazard resiliency; resilience engineering; responsive educational programs; stakeholder communities; Conferences; Education; Hazards; Hurricanes; Industries; Resilience; Stakeholders; community; curriculum; engineering; hazards; infrastructure; resiliency; systems (ID#: 16-9586)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7318113&isnumber=7317975
P. R. Vamsi and K. Kant, “A Taxonomy of Key Management Schemes of Wireless Sensor Networks,” Advanced Computing & Communication Technologies (ACCT), 2015 Fifth International Conference on, Haryana, 2015, pp. 690-696. doi: 10.1109/ACCT.2015.109
Abstract: Research on secure key management in Wireless Sensor Networks (WSNs) using cryptography has gained significant attention in the research community. However, ensuring security with public key cryptography is a challenging task due to the resource limitations of sensor nodes in WSNs. In recent years, numerous researchers have proposed efficient lightweight key management schemes to establish secure communication. In this paper, the authors provide a study on several key management schemes developed for WSNs and their taxonomy with respect to various network and security metrics.
Keywords: public key cryptography; telecommunication security; wireless sensor networks; WSN; network metrics; resource limitation; secure communication; secure key management schemes; security metrics; sensor nodes; Authentication; Cryptography; Peer-to-peer computing; Polynomials; Protocols; Wireless sensor networks; Key distribution; Key management; Resiliency; Security (ID#: 16-9587)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079167&isnumber=7079031
R. Rastgoufard, I. Leevongwat and P. Rastgoufard, “Impact of Hurricanes on Gulf Coast Electric Grid Islanding of Industrial Plants,” Power Systems Conference (PSC), 2015 Clemson University, Clemson, SC, 2015, pp. 1-5. doi: 10.1109/PSC.2015.7101692
Abstract: The purpose of this study is to determine the impact of seasonal hurricanes and tropical storms on the security and delivery of electricity in Gulf Coast states' electric grid to industrial customers. The Gulf Coast in general and the state of Louisiana in particular include a relatively high number of industrial plants that are connected to the electric grid, and continuity of electricity to these industrial plants is of vital importance in flow of energy from Gulf Coast states to the rest of the nation. The purpose of this paper is to identify the tropical storms and their characteristics in the last fifty years, to determine the existing industrial plants, including their products and services, and to develop an algorithm that results in the impact of tropical storms and hurricanes on continuity of electricity to any specific industrial plant in the Gulf Coast geographical area. The study determines “islanding” of part of the grid that may result from the impact of the simulated tropical storms and would provide sufficient information to plant managers for continuing or halting their plant operation in time prior to the landing of tropical storms or hurricanes. This paper includes a summary of simulation results of 50 historical and 26 hypothetical tropical storms on selected industrial plants in the geographical area. To display the usefulness and practicality of the developed algorithm, we will include results of one case study in the state of Louisiana. Further work on use of the developed algorithm in hardening and development of resilient transmission system in Gulf Coast states will be reported in future publications.
Keywords: distributed power generation; industrial plants; power distribution faults; storms; Louisiana; electric grid islanding; geographical area; gulf coast; industrial plants; resilient transmission system; seasonal hurricanes; tropical storms; Biological system modeling; Hurricanes; Industrial plants; Power transmission lines; Storms; Tropical cyclones; Wind speed; power system analysis; power system resiliency; power system security and reliability; transmission system hardening (ID#: 16-9588)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7101692&isnumber=7101673
E. W. Fulp, H. D. Gage, D. J. John, M. R. McNiece, W. H. Turkett and X. Zhou, “An Evolutionary Strategy for Resilient Cyber Defense,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417814
Abstract: Many cyber attacks can be attributed to poorly configured software, where administrators are often unaware of insecure settings due to the configuration complexity or the novelty of an attack. A resilient configuration management approach would address this problem by updating configuration settings based on current threats while continuing to render useful services. This responsive and adaptive behavior can be obtained using an evolutionary algorithm, where security measures of current configurations are employed to evolve new configurations. Periodically, these configurations are applied across a collection of computers, changing the systems' attack surfaces and reducing their exposure to vulnerabilities. The effectiveness of this evolutionary strategy for defending RedHat Linux Apache web-servers is analyzed experimentally through a study of configuration fitness, population diversity, and resiliency observations. Configuration fitness reflects the level of system confidentiality, integrity and availability; whereas, population diversity gauges the heterogeneous nature of the configuration sets. The computers' security depends upon the discovery of a diverse set of highly fit parameter configurations. Resilience reflects the evolutionary algorithm's adaptability to new security threats. Experimental results indicate the approach is able to determine and maintain secure parameter settings when confronted with a variety of simulated attacks over time.
Keywords: Internet; Linux; evolutionary computation; security of data; RedHat Linux Apache web-servers; adaptive behavior; configuration complexity; configuration management approach; cyber attacks; evolutionary algorithm; evolutionary strategy; resilient cyber defense; security threats; Biological cells; Computers; Guidelines; Security; Sociology; Software; Statistics (ID#: 16-9589)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417814&isnumber=7416057
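The abstract above describes evolving configuration settings by scoring their fitness under current threats. As a rough illustration only, here is a minimal evolutionary loop in Python; the parameter space, the `fitness` scoring, and the truncation-selection/mutation scheme are invented stand-ins, not the paper's actual RedHat Apache configuration model.

```python
import random

# Toy parameter space standing in for web-server configuration settings
# (the paper's actual RedHat/Apache parameters are not reproduced here).
PARAMS = {
    "timeout": [30, 60, 120, 300],
    "max_clients": [50, 150, 256, 512],
    "keep_alive": [0, 1],
}

def fitness(config, threat):
    """Hypothetical fitness: reward settings that reduce exposure to the
    current simulated threat (maximum score 5 under "dos")."""
    score = 0
    if threat == "dos":
        score += 2 if config["timeout"] <= 60 else 0
        score += 2 if config["max_clients"] <= 256 else 0
        score += 1 if config["keep_alive"] == 0 else 0
    return score

def evolve(population, threat, rng, generations=20):
    """Truncation selection plus one-parameter mutation; keeping the
    parents unchanged means the best fitness never regresses."""
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: fitness(c, threat), reverse=True)
        parents = ranked[: len(ranked) // 2]
        children = []
        for p in parents:
            child = dict(p)
            key = rng.choice(list(PARAMS))      # mutate one parameter
            child[key] = rng.choice(PARAMS[key])
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: fitness(c, threat))

rng = random.Random(42)
pop = [{k: rng.choice(v) for k, v in PARAMS.items()} for _ in range(8)]
best = evolve(pop, "dos", rng)
print(fitness(best, "dos"))
```

Periodically re-running `evolve` against the current threat model, and pushing the surviving configurations to a pool of servers, is one way to read the paper's "moving attack surface" idea.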
B. Bhargava, P. Angin, R. Ranchal and S. Lingayat, “A Distributed Monitoring and Reconfiguration Approach for Adaptive Network Computing,” Reliable Distributed Systems Workshop (SRDSW), 2015 IEEE 34th Symposium on, Montreal, QC, 2015, pp. 31-35. doi: 10.1109/SRDSW.2015.16
Abstract: The past decade has witnessed immense developments in the field of network computing thanks to the rise of the cloud computing paradigm, which enables shared access to a wealth of computing and storage resources without needing to own them. While cloud computing facilitates on-demand deployment, mobility and collaboration of services, mechanisms for enforcing security and performance constraints when accessing cloud services are still at an immature state. The highly dynamic nature of networks and clouds makes it difficult to guarantee any service level agreements. On the other hand, providing quality of service guarantees to users of mobile and cloud services that involve collaboration of multiple services is contingent on the existence of mechanisms that give accurate performance estimates and security features for each service involved in the composition. In this paper, we propose a distributed service monitoring and dynamic service composition model for network computing, which provides increased resiliency by adapting service configurations and service compositions to various types of changes in context. We also present a greedy dynamic service composition algorithm to reconfigure service orchestrations to meet user-specified performance and security requirements. Experiments with the proposed algorithm and the ease-of-deployment of the proposed model on standard cloud platforms show that it is a promising approach for agile and resilient network computing.
Keywords: cloud computing; quality of service; security of data; software fault tolerance; software prototyping; agile network computing; distributed service monitoring; dynamic service composition model; greedy dynamic service composition algorithm; quality of service; security requirement; service orchestration reconfiguration; Cloud computing; Context; Heuristic algorithms; Mobile communication; Monitoring; Quality of service; Security; adaptability; agile computing; monitoring; resilience; service-oriented computing (ID#: 16-9590)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371438&isnumber=7371403
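To make the greedy composition idea above concrete, the sketch below picks, for each required capability, the fastest candidate service that meets a security floor, then checks an overall latency budget. The candidate catalog, attribute names, and thresholds are hypothetical; the paper's actual algorithm and metrics are not reproduced here.

```python
# Hypothetical service catalog: (name, capability, latency_ms, security_level)
CANDIDATES = [
    ("auth-a", "auth", 40, 3),
    ("auth-b", "auth", 25, 1),
    ("store-a", "storage", 80, 2),
    ("store-b", "storage", 60, 3),
    ("index-a", "index", 30, 2),
]

def compose(required, min_security, latency_budget):
    """Greedy composition: for each required capability pick the fastest
    candidate meeting the security floor; fail if none exists or the
    total latency exceeds the budget."""
    plan, total = [], 0
    for cap in required:
        ok = [c for c in CANDIDATES if c[1] == cap and c[3] >= min_security]
        if not ok:
            return None
        name, _, latency, _ = min(ok, key=lambda c: c[2])
        plan.append(name)
        total += latency
    return plan if total <= latency_budget else None

print(compose(["auth", "storage"], min_security=2, latency_budget=150))
# → ['auth-a', 'store-b']
```

Re-running `compose` whenever monitoring reports a context change (a slow node, a failed security check) gives the flavor of the paper's reconfiguration loop.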
L. F. Cómbita, J. Giraldo, A. A. Cárdenas and N. Quijano, “Response and Reconfiguration of Cyber-Physical Control Systems: A Survey,” Automatic Control (CCAC), 2015 IEEE 2nd Colombian Conference on, Manizales, 2015, pp. 1-6. doi: 10.1109/CCAC.2015.7345181
Abstract: The integration of physical systems with distributed embedded computing and communication devices offers advantages on reliability, efficiency, and maintenance. At the same time, these embedded computers are susceptible to cyber-attacks that can harm the performance of the physical system, or even drive the system to an unsafe state; therefore, it is necessary to deploy security mechanisms that are able to automatically detect, isolate, and respond to potential attacks. Detection and isolation mechanisms have been widely studied for different types of attacks; however, automatic response to attacks has attracted considerably less attention. Our goal in this paper is to identify trends and recent results on how to respond and reconfigure a system under attack, and to identify limitations and open problems. We have found two main types of attack protection: (i) preventive, which identifies the vulnerabilities in a control system and then increases its resiliency by modifying either control parameters or the redundancy of devices; (ii) reactive, which responds as soon as the attack is detected (e.g., modifying the non-compromised controller actions).
Keywords: embedded systems; game theory; redundancy; security of data; attack protection; cyber-physical control system; detection mechanism; distributed embedded computing; embedded computer; isolation mechanism; reliability; security mechanism; Actuators; Game theory; Games; Security; Sensor systems (ID#: 16-9591)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345181&isnumber=7345173
S. Trajanovski, F. A. Kuipers, Y. Hayel, E. Altman and P. Van Mieghem, “Designing Virus-Resistant Networks: A Game-Formation Approach,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 294-299. doi: 10.1109/CDC.2015.7402216
Abstract: Forming, in a decentralized fashion, an optimal network topology while balancing multiple, possibly conflicting objectives like cost, high performance, security and resiliency to viruses is a challenging endeavor. In this paper, we take a game-formation approach to network design where each player, for instance an autonomous system in the Internet, aims to collectively minimize the cost of installing links, of protecting against viruses, and of assuring connectivity. In the game, minimizing virus risk as well as connectivity costs results in sparse graphs. We show that the Nash Equilibria are trees that, according to the Price of Anarchy (PoA), are close to the global optimum, while the worst-case Nash Equilibrium and the global optimum may significantly differ for small infection rate and link installation cost. Moreover, the types of trees, in both the Nash Equilibria and the optimal solution, depend on the virus infection rate, which provides new insights into how viruses spread: for high infection rate τ, the path graph is the worst- and the star graph is the best-case Nash Equilibrium. However, for small and intermediate values of τ, trees different from the path and star graphs may be optimal.
Keywords: computer network security; computer viruses; game theory; telecommunication network topology; Internet; Nash equilibria; autonomous system; decentralized fashion; game-formation approach; global optimum; optimal network topology; price of anarchy; virus resiliency; virus security; virus-resistant network design; worst-case Nash equilibrium; Games; Nash equilibrium; Network topology; Peer-to-peer computing; Security; Stability analysis; Viruses (medical) (ID#: 16-9592)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7402216&isnumber=7402066
C. Aduba and C.-h. Won, “Resilient Cumulant Game Control for Cyber-Physical Systems,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287422
Abstract: In this paper, we investigate the resilient cumulant game control problem for a cyber-physical system. The cyber-physical system is modeled as a linear hybrid stochastic system with full-state feedback. We are interested in a 2-player cumulant Nash game for a linear Markovian system with a quadratic cost function, where the players optimize their system performance by shaping the distribution of their cost function through cost cumulants. The controllers are optimally resilient against control feedback gain variations. We formulate and solve the coupled first and second cumulant Hamilton-Jacobi-Bellman (HJB) equations for the dynamic game. In addition, we derive the optimal players' strategy for the second cost cumulant function. The efficiency of our proposed method is demonstrated by solving a numerical example.
Keywords: Markov processes; game theory; optimisation; security of data; HJB equation; Hamilton-Jacobi-Bellman equation; Nash game; control feedback gain variation; cumulant game control resiliency; cyber-physical system; full-state feedback; linear Markovian system; linear hybrid stochastic system; quadratic cost function optimization; security vulnerability; Cost function; Cyber-physical systems; Games; Mathematical model; Nash equilibrium; Trajectory (ID#: 16-9593)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287422&isnumber=7287407
S. Nabavi and A. Chakrabortty, “An Intrusion-Resilient Distributed Optimization Algorithm for Modal Estimation in Power Systems,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 39-44. doi: 10.1109/CDC.2015.7402084
Abstract: In this paper we present an intrusion-resilient distributed algorithmic approach to estimate the electro-mechanical oscillation modes of a large power system using Synchrophasor measurements. For this, we first show how to distribute the centralized Prony method over a network consisting of several computational areas using a distributed variant of alternating direction method of multipliers (D-ADMM). We then add a cross-verification step to show the resiliency of this algorithm against the cyber-attacks that may happen in the form of data manipulation. We illustrate the robustness of our method in face of intrusion for a case study on IEEE 68-bus power system.
Keywords: optimisation; phasor measurement; power engineering computing; power system state estimation; security of data; D-ADMM; IEEE 68-bus power system; centralized Prony method; cross-verification step; cyber-attacks; distributed variant of alternating direction method of multipliers; electro-mechanical oscillation modes; intrusion-resilient distributed optimization algorithm; large power system; modal estimation; power systems; synchrophasor measurements; Computer architecture; Damping; Distributed databases; Estimation; Handheld computers; Phasor measurement units; Power systems (ID#: 16-9594)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7402084&isnumber=7402066
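The centralized step that the paper distributes, Prony's method, can be illustrated on a synthetic signal: fit a linear prediction model to ringdown samples, then take the roots of the characteristic polynomial to recover damping and frequency. This single-mode sketch with invented signal parameters omits the D-ADMM distribution across areas and the cross-verification step entirely.

```python
import numpy as np

# Synthetic ringdown with one damped oscillation mode, standing in for
# post-disturbance synchrophasor data (parameters invented).
dt = 0.02
t = np.arange(200) * dt
sigma, f = -0.3, 0.8                      # damping [1/s] and frequency [Hz]
y = np.exp(sigma * t) * np.cos(2 * np.pi * f * t)

# Linear-prediction (Prony) fit: y[n] = a1*y[n-1] + a2*y[n-2]
A = np.column_stack([y[1:-1], y[:-2]])
b = y[2:]
a1, a2 = np.linalg.lstsq(A, b, rcond=None)[0]

# Roots of z^2 - a1*z - a2 are the discrete-time modes
z = np.roots([1.0, -a1, -a2])
lam = np.log(z) / dt                      # continuous-time eigenvalues
est_sigma = lam.real[0]
est_f = abs(lam.imag[0]) / (2 * np.pi)
print(round(est_sigma, 3), round(est_f, 3))   # → -0.3 0.8
```

In the distributed variant, each computational area would solve only its share of this least-squares problem and exchange multiplier updates with its neighbors.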
G. Torres et al., “Distributed StealthNet (D-SN): Creating a Live, Virtual, Constructive (LVC) Environment for Simulating Cyber-Attacks for Test and Evaluation (T&E),” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 1284-1291. doi: 10.1109/MILCOM.2015.7357622
Abstract: The Services have become increasingly dependent on their tactical networks for mission command functions, situational awareness, and target engagements (terminal weapon guidance). While the network brings an unprecedented ability to project force by all echelons in a mission context, it also brings the increased risk of cyber-attack on the mission operation. With both this network use and vulnerability in mind, it is necessary to test new systems (and networked Systems of Systems (SoS)) in a cyber-vulnerable network context. A new test technology, Distributed-StealthNet (D-SN), has been created by the Department of Defense Test Resource Management Center (TRMC) to support SoS testing with cyber-attacks against mission threads. D-SN is a simulation/emulation based virtual environment that can provide a representation of a full scale tactical network deployment (both Radio Frequency (RF) segments and wired networks at command posts). D-SN has models of real world cyber threats that affect live tactical systems and networks. D-SN can be integrated with live mission Command and Control (C2) hardware and then a series of cyber-attacks using these threat models can be launched against the virtual network and the live hardware to determine the SoS's resiliency to sustain the tactical mission. This paper describes this new capability and the new technologies developed to support this capability.
Keywords: command and control systems; computer network security; military communication; wide area networks; C2 hardware; Command and Control hardware; D-SN; LVC environment; T&E; TRMC; cyberattack simulation; cybervulnerable network context; department of defense test resource management center; distributed stealthnet; live, virtual, constructive environment; tactical network; test and evaluation; Computational modeling; Computer architecture; Computers; Hardware; Ports (Computers); Real-time systems; Wide area networks (ID#: 16-9595)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357622&isnumber=7357245
Y. Wang, I. R. Chen and J. H. Cho, “Trust-Based Task Assignment in Autonomous Service-Oriented Ad Hoc Networks,” Autonomous Decentralized Systems (ISADS), 2015 IEEE Twelfth International Symposium on, Taichung, 2015, pp. 71-77. doi: 10.1109/ISADS.2015.19
Abstract: We propose and analyze a trust management protocol for autonomous service-oriented mobile ad hoc networks (MANETs) populated with service providers (SPs) and service requesters (SRs). We demonstrate the resiliency and convergence properties of our trust protocol design for service-oriented MANETs in the presence of malicious nodes performing opportunistic service attacks and slandering attacks. Further, we consider a situation in which a mission comprising dynamically arriving tasks must achieve multiple conflicting objectives, including maximizing the mission reliability, minimizing the utilization variance, and minimizing the delay to task completion. We devise a trust-based heuristic algorithm to solve this multi-objective optimization problem with a linear runtime complexity, thus allowing dynamic node-to-task assignment to be performed at runtime. Through extensive simulation, we demonstrate that our trust-based node-to-task assignment algorithm outperforms a non-trust-based counterpart using blacklisting techniques while performing close to the ideal solution quality with perfect knowledge of node reliability over a wide range of environmental conditions.
Keywords: access protocols; mobile ad hoc networks; optimisation; telecommunication security; autonomous service-oriented mobile ad hoc networks MANET; blacklisting techniques; dynamic node-to-task assignment; linear runtime complexity; malicious nodes; multiobjective optimization problem; opportunistic service attacks; service providers; service requesters; slandering attacks; trust management protocol; trust-based heuristic algorithm; trust-based task assignment; Ad hoc networks; Heuristic algorithms; Mobile computing; Peer-to-peer computing; Protocols; Reliability; Runtime; multi-objective optimization; performance analysis; service-oriented mobile ad hoc networks; task assignment; trust (ID#: 16-9596)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098240&isnumber=7098213
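As a toy illustration of trust-based node-to-task assignment, the sketch below greedily sends each arriving task to the least-loaded node whose trust exceeds a floor, breaking ties by higher trust. The trust values, the single `min_trust` threshold, and the tie-break rule are assumptions for illustration; the paper's heuristic optimizes reliability, utilization variance, and delay jointly.

```python
# Hypothetical per-node trust scores learned by the trust protocol.
trust = {"n1": 0.9, "n2": 0.4, "n3": 0.8, "n4": 0.7}

def assign(tasks, trust, min_trust=0.6):
    """Greedy heuristic: each arriving task goes to the least-loaded node
    whose trust meets the floor; ties broken by higher trust."""
    load = {n: 0 for n in trust}
    plan = {}
    for task in tasks:
        eligible = [n for n in trust if trust[n] >= min_trust]
        if not eligible:
            return None
        node = min(eligible, key=lambda n: (load[n], -trust[n]))
        plan[task] = node
        load[node] += 1
    return plan

print(assign(["t1", "t2", "t3"], trust))
# → {'t1': 'n1', 't2': 'n3', 't3': 'n4'}
```

Because each task is placed in constant work per node, the loop keeps the linear runtime complexity the abstract highlights for runtime node-to-task assignment.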
S. Subha and U. G. Sankar, “Message Authentication and Wormhole Detection Mechanism in Wireless Sensor Network,” Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, Coimbatore, 2015, pp. 1-4. doi: 10.1109/ISCO.2015.7282382
Abstract: Message authentication is one of the most effective ways to prevent unauthorized and corrupted messages from being forwarded in a wireless sensor network. However, many schemes carry high computational and communication overhead, in addition to lacking scalability and resilience to node compromise attacks. To address these issues, a polynomial-based scheme was recently introduced. However, this scheme and its extensions all have the weakness of a built-in threshold determined by the degree of the polynomial: when the number of messages transmitted is larger than this threshold, the adversary can fully recover the polynomial. In the existing system, an unconditionally secure and efficient source anonymous message authentication (SAMA) scheme is presented, based on the optimal modified Elgamal signature (MES) scheme on elliptic curves. This MES scheme is secure against adaptive chosen-message attacks in the random oracle model. The scheme enables intermediate nodes to authenticate messages so that corrupted messages can be detected and dropped to conserve sensor power. While achieving compromise resiliency, flexible-time authentication, and source identity protection, the scheme does not have the threshold problem and allows any node to transmit an unlimited number of messages. This method detects black hole and gray hole attacks, but not wormhole attacks, even though the wormhole attack is one of the most harmful attacks degrading network performance. The proposed system therefore introduces an efficient wormhole detection mechanism for wireless sensor networks. This method considers the RTT between two successive nodes and those nodes' neighbor counts, comparing them against the corresponding values for other successive nodes. The identification of wormhole attacks rests on two observations.
The first is that the transmission time between two nodes affected by a wormhole attack is considerably higher than that between two normal neighbor nodes. The second is that, by introducing new links into the network, the adversary increases the number of neighbors of the nodes within its radius. Experimental results show that the proposed method achieves high network performance.
Keywords: polynomials; telecommunication security; wireless sensor networks; MES scheme; SAMA; adaptive chosen message attacks; black hole attacks; corrupted message; elliptic curves; gray hole attacks; message authentication; modified Elgamal signature; node compromise attacks; polynomial based scheme; random oracle model; source anonymous message authentication; unauthorized message; unlimited number; wireless sensor network; wormhole detection mechanism; Computational modeling; Cryptography; Scalability; Terminology; Hop-by-hop authentication; public-key cryptosystem; source privacy (ID#: 16-9597)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282382&isnumber=7282219
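The two detection criteria described in the abstract (inflated RTT between successive nodes, and inflated neighbor counts near the wormhole endpoints) can be caricatured as threshold tests against network-wide averages. The link data and multiplier thresholds below are invented for illustration, not taken from the paper.

```python
# Per-link measurements: RTT between successive nodes (ms) and the two
# endpoints' neighbor counts. Values are invented; the last link mimics
# a wormhole (long tunnel, artificially dense neighborhoods).
links = {
    ("a", "b"): {"rtt": 2.1, "neighbors": (5, 6)},
    ("b", "c"): {"rtt": 2.4, "neighbors": (6, 5)},
    ("c", "d"): {"rtt": 11.0, "neighbors": (14, 13)},
}

def detect_wormholes(links, rtt_factor=2.0, neighbor_factor=1.5):
    """Flag links whose RTT and mean endpoint neighbor count are both far
    above the network-wide averages (threshold factors are assumptions)."""
    avg_rtt = sum(l["rtt"] for l in links.values()) / len(links)
    avg_nb = sum(sum(l["neighbors"]) / 2 for l in links.values()) / len(links)
    flagged = []
    for pair, l in links.items():
        if (l["rtt"] > rtt_factor * avg_rtt
                and sum(l["neighbors"]) / 2 > neighbor_factor * avg_nb):
            flagged.append(pair)
    return flagged

print(detect_wormholes(links))   # → [('c', 'd')]
```

Requiring both indicators to fire, as the abstract suggests, reduces false positives from links that are merely slow or merely dense.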
G. Patounas, Y. Zhang and S. Gjessing, “Evaluating Defence Schemes Against Jamming in Vehicle Platoon Networks,” Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on, Las Palmas, 2015, pp. 2153-2158. doi: 10.1109/ITSC.2015.348
Abstract: This paper studies Intelligent Transportation Systems (ITS), Vehicular Ad hoc Networks (VANETs) and their role in future transport. It focuses on prevention, detection and mitigation of denial of service attacks in a vehicle platoon. We evaluate the physical workings of a vehicle platoon as well as the wireless communication between the vehicles and the possibility of malicious interference. Defence methods against jamming attacks are implemented and tested. These include methods for interference reduction, data redundancy and warning systems based on on-board vehicle sensors. The results presented are positive, and the defences are successful in increasing a vehicle platoon's resiliency to attacks.
Keywords: intelligent transportation systems; jamming; radiofrequency interference; telecommunication security; vehicular ad hoc networks; ITS; VANET; data redundancy; defence methods; denial of service attacks; evaluating defence schemes; interference reduction; jamming attacks; malicious interference; on-board vehicle sensors; vehicle platoon networks; vehicular ad hoc networks; warning systems; wireless communication; Array signal processing; Global Positioning System; Interference; Jamming; Sensors; Vehicles; Wireless communication (ID#: 16-9598)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313440&isnumber=7312804
C. Lee, H. Shim and Y. Eun, “Secure and Robust State Estimation Under Sensor Attacks, Measurement Noises, and Process Disturbances: Observer-Based Combinatorial Approach,” Control Conference (ECC), 2015 European, Linz, 2015, pp. 1872-1877. doi: 10.1109/ECC.2015.7330811
Abstract: This paper presents a secure and robust state estimation scheme for continuous-time linear dynamical systems. The method is secure in that it correctly estimates the states under sensor attacks by exploiting sensing redundancy, and it is robust in that it guarantees a bounded estimation error despite measurement noises and process disturbances. In this method, an individual Luenberger observer (of possibly smaller size) is designed from each sensor. Then, the state estimates from each of the observers are combined through a scheme motivated by error correction techniques, which results in estimation resiliency against sensor attacks under a mild condition on the system observability. Moreover, in the state estimates combining stage, our method reduces the search space of a minimization problem to a finite set, which substantially reduces the required computational effort.
Keywords: continuous time systems; error correction; linear systems; observers; redundancy; robust control; security; Luenberger observer; bounded estimation error; continuous-time linear dynamical system; error correction technique; observer-based combinatorial approach; robust state estimation; search space; secure state estimation; sensor attack; Indexes; Minimization; Noise measurement; Observers; Redundancy; Robustness (ID#: 16-9599)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330811&isnumber=7330515
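A much-simplified stand-in for the combining stage: if each sensor's observer yields its own scalar state estimate and fewer than half the sensors are attacked, a median already tolerates the corrupted values. The paper's actual scheme uses Luenberger observers and an error-correction-motivated combinatorial search, which this one-liner does not capture; the numbers below are invented.

```python
import statistics

# One scalar state estimate per sensor-specific observer; the third
# sensor is under attack and reports a corrupted value.
observer_estimates = [10.1, 9.9, 55.0, 10.0, 9.8]

def fuse(estimates):
    """Median combining: with fewer than half the sensors attacked, the
    fused estimate stays near the true state despite the outlier."""
    return statistics.median(estimates)

print(fuse(observer_estimates))   # → 10.0
```

The sensing-redundancy condition in the paper plays the same role as the "fewer than half attacked" assumption here: without enough redundant, honest measurements, no combining rule can isolate the corrupted channel.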
K. Futamura et al., “vDNS Closed-Loop Control: A Framework for an Elastic Control Plane Service,” Network Function Virtualization and Software Defined Network (NFV-SDN), 2015 IEEE Conference on, San Francisco, CA, 2015, pp. 170-176. doi: 10.1109/NFV-SDN.2015.7387423
Abstract: Virtual Network Functions (VNFs) promise great efficiencies in deploying and operating new services, in terms of performance, resiliency and cost. However, today most operational VNF clouds are still generally static after their initial instantiation, thus not realizing many of the potential benefits of virtualization and enhanced orchestration. In this paper, we explore a large-scale operational instantiation of a virtual Domain Name System (vDNS) and present an analytical framework and platform to improve its efficiency during normal and adverse network traffic conditions, such as those caused by Distributed Denial-of-Service (DDoS) attacks and site failures. Using dynamic virtual machine instantiation, we show that under normal daily cycles we can run vDNS resolvers at higher target load, increasing the transactional efficiency of the underlying hardware by more than 10%, and improving client latency due to lower recursion rates. We demonstrate a method of reducing reaction time and service impacts due to malicious network traffic, such as during a DDoS event, by automatically redeploying virtual resources at selected nodes in the network. We quantify the tradeoff between spare hardware costs and latency under site failures, taking advantage of SDN controller-based flow redirection. This work is part of AT&T's ongoing network transformation through network function virtualization (NFV), software-defined networking (SDN), and enhanced orchestration.
Keywords: cloud computing; computer network security; software defined networking; virtualisation; AT&T; DDoS event; SDN; SDN controller-based flow redirection; VNF clouds; distributed denial-of-service attacks; dynamic virtual machine instantiation; elastic control plane service; large-scale operational instantiation; malicious network traffic; network function virtualization; network traffic conditions; network transformation; reaction time; recursion rates; site failures; software-defined networking; transactional efficiency; vDNS closed-loop control; vDNS resolvers; virtual domain name system; virtual network functions; Computer crime; Conferences; Hardware; Servers; Software defined networking; Telemetry (ID#: 16-9600)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387423&isnumber=7387384
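Closed-loop elasticity of the kind described can be caricatured as a threshold controller that adds resolver instances under high per-instance load and removes them under low load. All thresholds and the load model below are illustrative assumptions, not AT&T's platform logic.

```python
def scale_decision(load, instances, target=0.7, high=0.85, low=0.4):
    """Toy closed-loop control: scale out toward a target utilization when
    per-instance load crosses a high-water mark, scale in below a low one.
    `load` is total offered load in instance-equivalents (an assumption)."""
    per_instance = load / instances
    if per_instance > high:
        return int(-(-load // target))    # ceil(load / target)
    if per_instance < low and instances > 1:
        return instances - 1
    return instances

print(scale_decision(load=4.5, instances=4))   # → 7
```

Running the high-water trigger on telemetry sampled every few seconds, and redeploying instances at selected sites, gives the flavor of the DDoS-response behavior the paper measures.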
J. Schneider, C. Romanowski, R. K. Raj, S. Mishra and K. Stein, “Measurement of Locality Specific Resilience,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225332
Abstract: Resilience has been defined at the local, state, and national levels, and subsequent attempts to refine the definition have added clarity. Quantitative measurements, however, are crucial to a shared understanding of resilience. This paper reviews the evolution of resiliency indicators and metrics and suggests extensions to current indicators to measure functional resilience at a jurisdictional or community level. Using a management systems approach, an input/output model may be developed to demonstrate abilities, actions, and activities needed to support a desired outcome. Applying systematic gap analysis and an improvement cycle with defined metrics, the paper proposes a model to evaluate a community's operational capability to respond to stressors. As each locality is different, with unique risks, strengths, and weaknesses, the model incorporates these characteristics and calculates a relative measure of maturity for that community. Any community can use the resulting model output to plan and improve its resiliency capabilities.
Keywords: emergency management; social sciences; community operational capability; functional resilience measurement; locality specific resilience measurement; quantitative measurement; resiliency capability; resiliency indicators; resiliency metrics; systematic gap analysis; Economics; Emergency services; Hazards; Measurement; Resilience; Standards; Training; AHP; community resilience; operational resilience modeling; resilience capability metrics (ID#: 16-9601)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225332&isnumber=7190491
K. Kaminska, “Tapping into Social Media and Digital Humanitarians for Building Disaster Resilience in Canada,” Humanitarian Technology Conference (IHTC2015), 2015 IEEE Canada International, Ottawa, ON, 2015, pp. 1-4. doi: 10.1109/IHTC.2015.7274444
Abstract: Social media offers the opportunity to connect with the public, improve situational awareness, and to reach people quickly with alerts, warnings and preparedness messages. However, the ever-increasing popularity of social networking can also lead to 'information overload' which can prevent disaster management organizations from processing and using social media information effectively. This limitation can be overcome through collaboration with 'digital humanitarians' — tech-savvy volunteers, who are leading the way in crisis-mapping and crowdsourcing of disaster information. Since the 2010 earthquake in Haiti, their involvement has become an integral part of the international community's response to major disasters. For example, the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA) activated the Digital Humanitarian Network during the 2013 response to typhoon Haiyan/Yolanda [1]. Our previous research has shown that Canada's disaster management community has not yet fully taken advantage of all the opportunities that social media offers, including the potential of collaboration with digital humanitarians [2]. This finding has led to the development of an experiment designed to test how social media-aided collaboration can enable enhanced situational awareness and improve recovery outcomes. The experiment took place in November 2014 as a part of the third Canada-US Enhanced Resiliency Experiment (CAUSE III), an experiment series that focuses on enhancing resilience through situational awareness interoperability. This paper describes the results of the experiment and Canadian efforts to facilitate effective information exchange between disaster management officials, digital humanitarians, and the public at large, so as to improve situational awareness and build resilience at both the community and the national level.
Keywords: emergency management; social networking (online); CAUSE III; Canada-US enhanced resiliency experiment; UNOCHA; United Nations Office for the Coordination of Humanitarian Affairs; community level; digital humanitarians; disaster management organizations; disaster resilience; information overload; national level; recovery outcomes; situational awareness; social media information; social networking; Collaboration; Disaster management; Media; Organizations; Safety; Twitter; disaster management; resilience; social media; social network analysis (ID#: 16-9602)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7274444&isnumber=7238033
A. Alzahrani and R. F. DeMara, “Hypergraph-Cover Diversity for Maximally-Resilient Reconfigurable Systems,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1086-1092. doi: 10.1109/HPCC-CSS-ICESS.2015.294
Abstract: Scaling trends of reconfigurable hardware (RH) and their design flexibility have proliferated their use in dependability-critical embedded applications. Although their reconfigurability can enable significant fault tolerance, the execution-time complexity of their design flow can make in-field reconfiguration infeasible and thus limit that potential. This need is addressed by developing a graph and set theoretic approach, named hypergraph-cover diversity (HCD), as a preemptive design technique to shift the dominant costs of resiliency to design-time. In particular, union-free hypergraphs are exploited to partition the reconfigurable resources pool into highly separable subsets of resources, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Two novel scalable algorithms to construct union-free hypergraphs are proposed and described. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform demonstrates a statistically significant increase in fault tolerance and area efficiency when using the proposed approach compared to commonly-used modular redundancy approaches.
Keywords: data compression; embedded systems; field programmable gate arrays; graph theory; image coding; motion estimation; reconfigurable architectures; HCD; Motion-JPEG image compression core; RH; Xilinx 7-series-based FPGA hardware platform; area efficiency; dependability-critical embedded applications; design flexibility; execution time; fault tolerance; hypergraph-cover diversity; in-field reconfigurability; maximally-resilient reconfigurable systems; preemptive design technique; reconfigurable hardware; reconfigurable resource partitioning; reconfiguration-based resilience; resiliency costs; routing phases; runtime placement; separable resource subsets; set theoretic approach; statistical analysis; synthesized application netlist; union-free hypergraphs; Circuit faults; Embedded systems; Fault tolerance; Fault tolerant systems; Field programmable gate arrays; Hardware; Runtime; Area Efficiency; Design Diversity; FPGAs; Fault Tolerance; Hypergraphs; Reconfigurable Systems; Reliability (ID#: 16-9603)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336313&isnumber=7336120
P. P. Hung and E. N. Huh, “A Cost and Contention Conscious Scheduling for Recovery in Cloud Environment,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 26-31. doi: 10.1109/HPCC-CSS-ICESS.2015.325
Abstract: The Cloud Computing (CC) model plays an important role in the growth of the contemporary IT industry, where stability, availability and partition tolerance of computational resources mean a great deal. It is of utmost significance not only that cloud services be provided with satisfactory performance but also that they minimize, and resiliently recover from, potential damage when cloud infrastructures are subject to changes and/or disasters. This study discusses a method that achieves potentially better resiliency and faster recovery from failures based on the well-known genetic algorithm. Moreover, we aim to achieve globally optimized performance as well as a service solution that can remain financially and operationally balanced according to customer preferences. The proposed methodology has undergone intensive evaluation demonstrating its effectiveness and efficiency, even in close comparison with other existing work on relevant aspects.
Keywords: DP industry; cloud computing; costing; genetic algorithms; scheduling; system recovery; CC model; cloud computing model; cloud environment; computational resources; contemporary IT industry; contention conscious scheduling; cost; failure recovery; genetic algorithm; Arrays; Cloud computing; Genetic algorithms; Processor scheduling; Program processors; Schedules; Servers; Task scheduling; parallel computing; recovery time; big data (ID#: 16-9604)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336139&isnumber=7336120
N. Dutt, A. Jantsch and S. Sarma, “Self-Aware Cyber-Physical Systems-on-Chip,” Computer-Aided Design (ICCAD), 2015 IEEE/ACM International Conference on, Austin, TX, 2015, pp. 46-50. doi: 10.1109/ICCAD.2015.7372548
Abstract: Self-awareness has a long history in biology, psychology, medicine, and more recently in engineering and computing, where self-aware features are used to enable adaptivity to improve a system's functional value, performance and robustness. With complex many-core Systems-on-Chip (SoCs) facing the conflicting requirements of performance, resiliency, energy, heat, cost, security, etc. — in the face of highly dynamic operational behaviors coupled with process, environment, and workload variabilities — there is an emerging need for self-awareness in these complex SoCs. Unlike traditional MultiProcessor Systems-on-Chip (MPSoCs), self-aware SoCs must deploy an intelligent co-design of the control, communication, and computing infrastructure that interacts with the physical environment in real-time in order to modify the system's behavior so as to adaptively achieve desired objectives and Quality-of-Service (QoS). Self-aware SoCs require a combination of ubiquitous sensing and actuation, health-monitoring, and statistical model-building to enable the SoC's adaptation over time and space. After defining the notion of self-awareness in computing, this paper presents the Cyber-Physical System-on-Chip (CPSoC) concept as an exemplar of a self-aware SoC that intrinsically couples on-chip and cross-layer sensing and actuation using a sensor-actuator rich fabric to enable self-awareness.
Keywords: actuators; cyber-physical systems; sensors; statistical analysis; system-on-chip; SoC adaptation; communication infrastructure; computing infrastructure; control infrastructure; cross layer sensing; cross-layer actuation; dynamic operational behaviors; health monitoring; intelligent codesign; physical environment; quality-of-service; self-aware SoC; self-aware cyber-physical system-on-chip; sensor-actuator rich fabric; statistical model-building; system behavior; ubiquitous actuation; ubiquitous sensing; workload variabilities; Computational modeling; Computer architecture; Context; Predictive models; Sensors; Software; System-on-chip (ID#: 16-9605)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372548&isnumber=7372533
A. Sajadi, R. M. Kolacinski and K. A. Loparo, “Impact of Wind Turbine Generator Type in Large-Scale Offshore Wind Farms on Voltage Regulation in Distribution Feeders,” Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, Washington, DC, 2015, pp. 1-5. doi: 10.1109/ISGT.2015.7131785
Abstract: The goal of the U.S. Department of Energy (DOE) roadmap [1] is a 20% penetration of wind energy into the generation mix by 2030. Attaining this objective will help protect the environment and reduce fossil fuel dependency, thus improving energy security and independence. This paper discusses how the technology used in large scale offshore wind farms impacts voltage regulation in distribution feeders. Although offshore wind farms are integrated into an interconnected power system through transmission lines, the system constraints can cause stability, resiliency and reliability issues. The major types of machines used in offshore wind farms are modeled using a generic model of General Electric (GE) wind machines. The transmission and distribution system models are based on the actual existing regional FirstEnergy/PJM power grid in the Midwestern United States. In addition, the impact of installing a Static VAR Compensator (SVC) at Points of Interconnection (POI) on voltage regulation is investigated.
Keywords: offshore installations; power distribution control; static VAr compensators; voltage control; wind power plants; wind turbines; GE wind machines; General Electric wind machines; POI; SVC; distribution feeders; distribution system models; interconnected power system; large-scale offshore wind farms; points of interconnection; static VAr compensator; transmission system models; voltage regulation; wind turbine generator; Generators; Indexes; Power system stability; Static VAr compensators; Voltage control; Wind farms; Wind power generation; DFIG; Induction Machine; Offshore Wind; Synchronous Machine; Voltage Regulation; Wind Integration (ID#: 16-9606)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131785&isnumber=7131775
A. Malikopoulos, Tao Zhang, K. Heaslip and W. Fehr, “Panel 2: Connected Electrified Vehicles and Cybersecurity,” Transportation Electrification Conference and Expo (ITEC), 2015 IEEE, Dearborn, MI, 2015, pp. 1-1. doi: 10.1109/ITEC.2015.7165725
Abstract: Summary form only given, as follows. The complete presentation was not made available for publication as part of the conference proceedings. The development and deployment of a fully connected transportation system that makes the most of multi-modal, transformational applications requires a robust, underlying technological platform. The platform is a combination of well-defined technologies, interfaces, and processes that, combined, ensure safe, stable, interoperable, reliable system operations that minimize risk and maximize opportunities. The primary application area of connected vehicles is vehicle safety. These applications are designed to increase situational awareness and reduce or eliminate crashes through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) data transmission that supports driver advisories, driver warnings, and vehicle and/or infrastructure controls. These technologies may potentially address a great majority of crash scenarios with unimpaired drivers, preventing tens of thousands of automobile crashes every year. Since V2V and V2I communications and significant data processing are involved, the connected vehicles concept also requires resiliency and immunity to cyber security threats. This panel session will discuss technology, applications, dedicated short range communications (DSRC) technology and capabilities, policy and institutional issues, and international research on the subject matter.
Keywords: (not provided) (ID#: 16-9607)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165725&isnumber=7165721
R. Routray, “Cloud Storage Infrastructure Optimization Analytics,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 92-92. doi: 10.1109/IC2E.2015.83
Abstract: Summary form only given. Emergence and adoption of cloud computing have become widely prevalent given the value proposition it brings to an enterprise in terms of agility and cost effectiveness. Big data analytical capabilities (specifically, treating storage/system management as a big data problem for a service provider) using cloud delivery models are defined as Analytics as a Service or Software as a Service. This service simplifies obtaining useful insights from an operational enterprise data center, leading to cost and performance optimizations. Software-defined environments decouple the control planes from the data planes that were often vertically integrated in traditional networking or storage systems. The decoupling between the control planes and the data planes enables opportunities for improved security, resiliency and IT optimization in general. This talk describes our novel approach to hosting the systems management platform (a.k.a. the control plane) in the cloud, offered to enterprises in a Software as a Service (SaaS) model. Specifically, this presentation focuses on the analytics layer, with the SaaS paradigm enabling data centers to visualize, optimize and forecast infrastructure via a simple capture, analyze and govern framework. At the core, it uses big data analytics to extract actionable insights from system management metrics data. Our system was developed in research and deployed across customers, with a core focus on agility, elasticity and scalability of the analytics framework. We present a few system/storage management analytics case studies to demonstrate cost and performance optimization for both the cloud consumer and the service provider. Actionable insights generated from the analytics platform are implemented in an automated fashion via an OpenStack-based platform.
Keywords: cloud computing; data analysis; optimisation; Analytics as a Service; OpenStack based platform; SaaS model; Software as a Service; cloud delivery models; cloud storage infrastructure optimization analytics; data analytical capabilities; data analytics; data planes; management metric data system; management platform system; operational enterprise data center; performance optimizations; software defined environments; value proposition; Big data; Cloud computing; Computer science; Conferences; Optimization; Software as a service; Storage management (ID#: 16-9608)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092904&isnumber=7092808