Publications of Interest

The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research, presented or published within the past year, on specific topics. Some entries update work presented in previous years; others address new topics.

How are topics selected?

The specific topics are selected from peer-reviewed materials presented at SoS conferences or referenced in current work, and are chosen for their usefulness to current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: news@scienceofsecurity.net

(ID#:15-7305)


Note:

Articles listed on these pages have been found on publicly available Internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Adaptive Filtering 2015


As the power of digital signal processors has increased, adaptive filters are now routinely used in devices as varied as mobile phones, printers, cameras, power systems, GPS devices, and medical monitoring equipment. An adaptive filter is a system with a linear filter whose transfer function is controlled by variable parameters, together with an optimization algorithm that adjusts those parameters. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. They are required in applications where some parameters of the desired processing operation are not known in advance or change over time. The works cited here address adaptive filtering as it relates to the Science of Security; all were published in 2015.
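To ground the definition above, here is a minimal sketch of the least-mean-squares (LMS) update, the textbook adaptive-filter algorithm. It does not reproduce any of the cited papers; the tap count, step size mu, and signals are illustrative assumptions.

```python
import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """Least-mean-squares adaptive filter (illustrative sketch).

    x        -- observed input signal (1-D array)
    d        -- desired/reference signal, same length as x
    num_taps -- length of the adapted FIR filter (assumed value)
    mu       -- step size controlling adaptation speed (assumed value)
    """
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(num_taps)                     # adapted filter weights
    y = np.zeros(len(x))                       # filter output
    e = np.zeros(len(x))                       # error signal d - y
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]  # newest samples first
        y[n] = w @ u                           # linear filter output
        e[n] = d[n] - y[n]                     # estimation error
        w += 2 * mu * e[n] * u                 # LMS weight update
    return y, e, w

# Example: track a clean sinusoid from a noisy observation of it
rng = np.random.default_rng(0)
t = np.arange(1000)
d = np.sin(0.05 * t)                           # desired signal
x = d + 0.3 * rng.standard_normal(len(t))      # noisy observation
y, e, w = lms_filter(x, d)
```

The weight vector w converges toward the filter that minimizes the mean-squared error between y and d, which is what lets the same hardware track parameters that are unknown or drifting.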



Ozdil, O.; Gunes, A., “Unsupervised Hyperspectral Image Segmentation Using Adaptive Bilateral Filtering,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 1010–1013, 16–19 May 2015. doi:10.1109/SIU.2015.7130003
Abstract: This paper proposes the use of adaptive bilateral filter for the segmentation of hyperspectral images. First, the spectral bands are selected according to the information contained in each band. Then on each band, adaptive bilateral filter is applied in order to increase the spatial correlation of each pixel. The results are evaluated based on the successful segmentation percentage. It is shown that the segmentation accuracy of k-means clustering algorithm is increased.
Keywords: adaptive filters; correlation methods; hyperspectral imaging; image segmentation; adaptive bilateral filtering; k-means clustering; segmentation accuracy; spatial correlation; spectral bands; unsupervised hyperspectral image segmentation; Biomedical imaging; Correlation; Histograms; Hyperspectral imaging; Image segmentation; Hyperspectral image processing; bilateral filter; k-means algorithm; segmentation (ID#: 15-7160)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130003&isnumber=7129794
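As background for the entry above, the sketch below shows a plain (non-adaptive) one-dimensional bilateral filter; the paper's adaptive, per-band variant for hyperspectral cubes is not reproduced here, and the radius and sigma values are illustrative assumptions.

```python
import numpy as np

def bilateral_filter_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Plain 1-D bilateral filter: each output sample is a weighted
    average of its neighbours, with weights that fall off with both
    spatial distance (sigma_s) and intensity difference (sigma_r)."""
    s = np.asarray(signal, dtype=float)
    out = np.empty_like(s)
    for i in range(len(s)):
        lo, hi = max(0, i - radius), min(len(s), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2)) *
             np.exp(-((s[idx] - s[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * s[idx]) / np.sum(w)  # edge-preserving smoothing
    return out
```

Because the range weight suppresses contributions from dissimilar pixels, edges are preserved while homogeneous regions are smoothed, which is why this filter raises spatial correlation before clustering.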

 

Yixian Zhu; Xianghong Cheng; Lei Wang; Ling Zhou, “An Intelligent Fault-Tolerant Strategy for AUV Integrated Navigation Systems,” in Advanced Intelligent Mechatronics (AIM), 2015 IEEE International Conference on, vol., no., pp. 269–274, 7–11 July 2015. doi:10.1109/AIM.2015.7222543
Abstract: To ensure the security and reliability of the autonomous underwater vehicle (AUV), an intelligent fault-tolerant strategy for integrated navigation systems is presented in this paper. The improved federated Kalman filter (FKF) is designed to fuse the multiple subsystems, including strapdown inertial navigation system (SINS), magnetic compass (MCP), Doppler velocity log (DVL) and terrain aided navigation (TAN). The intelligent fault-tolerant structure of SINS/MCP/DVL/TAN integrated navigation system is first established, which includes adaptive local filters and fault isolation decision (FID) modules. Fuzzy logic is introduced to adaptively adjust the measurement covariance matrixes of local filters online. FID module is implemented based on fuzzy reasoning. The simulation results show that, the proposed fault-tolerant strategy detects and insulates the faults effectively, which can greatly improve the reliability and guarantee the safety of AUVs in complex underwater environments.
Keywords: Kalman filters; autonomous underwater vehicles; covariance matrices; fault tolerant control; inertial navigation; AUV; AUV integrated navigation systems; DVL; Doppler velocity log; FID modules; MCP; SINS; TAN; adaptive local filters; autonomous underwater vehicle; fault isolation decision module; fuzzy logic; fuzzy reasoning; improved federated Kalman filter; integrated navigation systems; intelligent fault-tolerant strategy; magnetic compass; measurement covariance matrixes; strapdown inertial navigation system; terrain aided navigation; Adaptive filters; Covariance matrices; Fault tolerance; Fault tolerant systems; Navigation; Noise measurement; Sensors (ID#: 15-7161)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7222543&isnumber=7222494
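The adaptive measurement-covariance idea in the abstract above can be illustrated with a scalar Kalman update. The innovation gate and inflation factor below are simple stand-ins for the paper's fuzzy-logic adjustment, and all values are assumptions.

```python
def kalman_update(x_est, p_est, z, r, q=0.01, inflate=10.0, gate=3.0):
    """One scalar Kalman predict/update step with a crude adaptive
    measurement variance. The gate and inflation factor stand in for
    the paper's fuzzy-logic tuning and are assumed values."""
    # Predict (identity state model, process noise q)
    x_pred, p_pred = x_est, p_est + q
    # Innovation and its predicted variance
    nu = z - x_pred
    s = p_pred + r
    # Adaptive step: if the innovation is implausibly large, inflate
    # the measurement variance so a suspect sensor is weighted down.
    if nu * nu > gate * gate * s:
        r *= inflate
        s = p_pred + r
    # Standard Kalman gain and correction
    k = p_pred / s
    return x_pred + k * nu, (1.0 - k) * p_pred
```

Down-weighting inconsistent measurements in this way is the basic mechanism that lets a federated filter tolerate a faulty subsystem rather than letting it corrupt the fused estimate.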

 

Nivedhitha, R.; Abirami, S.; Krishnan, R.B.; Raajan, N.R., “Proficient Toning Mechanism for Firewall Policy Assessment,” in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–8, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159306
Abstract: The tool firewall is the software or hardware procedure that facilitates to guard data and it filter the entire traffic voyage the network boundary. It might be configured to restrict or to permit certain devices or applications to access our data sources available in our network. Packet matching over the firewall tool can be treated as a taper setting trouble: All network data packet consist of its own addressing fields, which must be examined beside every firewall policies to locate the earliest identical rule. Surviving Firewall applications such as CISCO PIX Firewalls and Checkpoint FireWall-1 provide various built-in software tools that permit firewalls as Bundle or Sorted and these tacked Firewalls will partake their charges. The main accusatives of these surviving mechanisms are focusing only to mend the Performance, Exploitation of resources and protection. But still these mechanisms not succeed to attain superior execution while focusing on usage of resources. To handle this difficulty, the projected study is applied in Java software as a Firewall tool which holds an Adaptive Firewall Policies filtering procedure using “Arithmetic Proficient Toning” mechanism, which upgrades the performance of the firewalls over the network in conditions of resources exploitation, services delay and throughput. This anticipated work brought out an adaptative Firewall Policies Diminution Procedure along with an efficient packet filtering mechanism, which dilutes firewall rules execution without compromising the System Security. From the results of our anticipated research, it is founded that this projected practice is a proficient and practical algorithm for firewall policy toning and it dilutes the overall servicing cost, which helps to attain concert at a more prominent grade.
Keywords: Java; firewalls; CISCO PIX firewalls; Java software; adaptative firewall policies diminution procedure; adaptive firewall policies filtering procedure; arithmetic proficient toning mechanism; checkpoint firewall-1; firewall policy assessment; firewall rule execution; packet filtering mechanism; packet matching; resources exploitation; software tools; surviving mechanisms; traffic voyage filtering; Data structures; Databases; Filtering; Firewalls (computing); Protocols; Software; Adaptative Filter; Firewall Policies Assessment; Proficient Toning; Resource Exploitation (ID#: 15-7162)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159306&isnumber=7159156

 

Rathgeb, C.; Gomez-Barrero, M.; Busch, C.; Galbally, J.; Fierrez, J., “Towards Cancelable Multi-Biometrics Based on Bloom Filters: A Case Study on Feature Level Fusion of Face and Iris,” in Biometrics and Forensics (IWBF), 2015 International Workshop on, vol., no., pp. 1–6, 3–4 March 2015. doi:10.1109/IWBF.2015.7110225
Abstract: In this work we propose a generic framework for generating an irreversible representation of multiple biometric templates based on adaptive Bloom filters. The presented technique enables a feature level fusion of different biometrics (face and iris) to a single protected template, improving privacy protection compared to the corresponding systems based on a single biometric trait. At the same time, a significant gain in biometric performance is achieved, confirming the soundness of the proposed technique.
Keywords: data privacy; data structures; face recognition; image fusion; iris recognition; adaptive bloom filter; biometric performance; biometric trait; face feature level fusion; iris feature level fusion; multibiometrics; multiple biometric template; privacy protection; Face; Feature extraction; Iris recognition; Privacy; Transforms; Bloom filter; Template protection; biometric fusion; face; iris (ID#: 15-7163)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7110225&isnumber=7110217
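For readers unfamiliar with the data structure behind the entry above, a minimal Bloom filter follows. The bit-array size and hash count are illustrative, and the paper's biometric feature-to-filter mapping is not reproduced.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership with no false negatives and
    a tunable false-positive rate. Sizes here are illustrative."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("feature-block-42")        # hypothetical template element
print("feature-block-42" in bf)   # True; items never added are
print("feature-block-99" in bf)   # almost certainly False
```

The irreversibility exploited for template protection comes from the many-to-one mapping: the original items cannot be recovered from the bit array alone.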

 

More, V.; Kumar, H.; Kaingade, S.; Gaidhani, P.; Gupta, N., “Visual Odometry Using Optic Flow for Unmanned Aerial Vehicles,” in Cognitive Computing and Information Processing (CCIP), 2015 International Conference on, vol., no., pp. 1–6, 3–4 March 2015. doi:10.1109/CCIP.2015.7100731
Abstract: Use of computer vision on Unmanned Aerial Vehicles (UAV) has been a promising area of research given its potential applications in exploration, surveillance and security. Localization in indoor, unknown environments can become increasingly difficult due to irregularities or complete absence of GPS. Advent of small, light and high performance cameras and computing hardware has enabled design of autonomous systems. In this paper, the optic flow principle is employed for estimating two dimensional motion of the UAV using a downward facing monocular camera. Combining it with an ultrasonic sensor, UAV’s three dimensional position is estimated. Position estimation and trajectory tracking have been performed and verified in a laboratory setup. All computations are carried out onboard the UAV using a miniature single board computer.
Keywords: autonomous aerial vehicles; cameras; distance measurement; image sequences; motion estimation; robot vision; ultrasonic transducers; GPS; UAV three dimensional position estimation; autonomous systems; computer vision; downward facing monocular camera; high performance cameras; miniature single board computer; optic flow; trajectory tracking; two dimensional motion estimation; ultrasonic sensor; unmanned aerial vehicles; visual odometry; Adaptive optics; Cameras; Computer vision; Image motion analysis; Optical filters; Optical imaging; Optical sensors; UAV; navigation; odometry; optic flow (ID#: 15-7164)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100731&isnumber=7100673

 

Vandana, M.; Manmadhan, S., “Self Learning Network Traffic Classification,” in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193038
Abstract: Network management is part of traffic engineering and security. The current solutions, Deep Packet Inspection (DPI) and statistical classification, rely on the availability of a training set. In case of these there is a cumbersome need to regularly update the signatures. Further their visibility is limited to classes the classifier has been trained for. Unsupervised algorithms have been envisioned as an alternative to automatically identify classes of traffic. To address these issues Self Learning Network Traffic Classification is proposed. It uses unsupervised algorithms along with an adaptive seeding approach to automatically let classes of traffic emerge, so that they can be identified and labelled. Unlike traditional classifiers, there is no need for a priori knowledge of signatures nor for a training set to extract the signatures. Instead, Self Learning Network Traffic Classification automatically groups flows into pure (or homogeneous) clusters using simple statistical features. This label assignment (which is still based on some manual intervention) ensures that class labels can be easily discovered. Furthermore, Self Learning Network Traffic Classification uses an iterative seeding approach which will boost its ability to cope with new protocols and applications. Unlike state-of-art classifiers, the biggest advantage of Self Learning Network Traffic Classification is its ability to discover new protocols and applications in an almost automated fashion.
Keywords: pattern classification; statistical analysis; traffic engineering computing; unsupervised learning; DPI; adaptive seeding approach; deep packet inspection; network management; protocols; self learning network traffic classification; statistical classification; traffic engineering; unsupervised machine learning; Classification algorithms; Clustering algorithms; Filtering; IP networks; Ports (Computers); Protocols; Telecommunication traffic; Traffic classification; clustering; self-seeding; unsupervised machine learning (ID#: 15-7165)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193038&isnumber=7192777


Note:

Articles listed on these pages have been found on publicly available Internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Agents 2015


In computer science, a software agent is a computer program that acts on behalf of a user or another program. Specific types of agents include intelligent agents, autonomous agents, distributed agents, multi-agent systems, and mobile agents. Because of the variety of agents and the privileges they are granted to represent the user or program, agents are of significant research interest to the cybersecurity community. The works cited here look at agents related to privacy, cyber-physical systems, and other hard problem areas. They were published in 2015.
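As a concrete illustration of the agent abstraction described above, here is a toy sketch: an agent that acts on a user's behalf by mapping percepts to actions through a pluggable policy. The mail-filtering scenario and all names are hypothetical and not drawn from the cited papers.

```python
class Agent:
    """Toy software agent: acts on behalf of a user by mapping each
    percept to an action through a pluggable policy (illustrative)."""

    def __init__(self, name, policy):
        self.name = name
        self.policy = policy            # callable: percept -> action

    def step(self, percept):
        action = self.policy(percept)
        print(f"[{self.name}] {percept!r} -> {action!r}")
        return action

# Hypothetical policy acting for a user: quarantine unknown senders
trusted = {"alice", "bob"}
policy = lambda msg: "deliver" if msg["sender"] in trusted else "quarantine"

mail_agent = Agent("mail-filter", policy)
mail_agent.step({"sender": "alice", "body": "hi"})      # deliver
mail_agent.step({"sender": "mallory", "body": "$$$"})   # quarantine
```

The delegated authority is exactly what makes agents security-sensitive: the agent exercises the user's privileges without the user inspecting each decision.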



Vegh, L.; Miclea, L., “A Simple Scheme for Security and Access Control in Cyber-Physical Systems,” in Control Systems and Computer Science (CSCS), 2015 20th International Conference on, vol., no., pp. 294-299, 27-29 May 2015. doi:10.1109/CSCS.2015.13
Abstract: In a time when technology changes continuously, where things you need today to run a certain system, might not be needed tomorrow anymore, security is a constant requirement. No matter what systems we have, or how we structure them, no matter what means of digital communication we use, we are always interested in aspects like security, safety, privacy. An example of the ever-advancing technology are cyber-physical systems. We propose a complex security architecture that integrates several consecrated methods such as cryptography, steganography and digital signatures. This architecture is designed to not only ensure security of communication by transforming data into secret code, it is also designed to control access to the system and detect and prevent cyber attacks.
Keywords: authorisation; cryptography; digital signatures; steganography; access control; cyber attacks; cyber-physical system; security architecture; security requirement; system security; Computer architecture; Digital signatures; Encryption; Public key; cyber-physical systems; multi-agent systems (ID#: 15-7166)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168445&isnumber=7168393

 

Vegh, L.; Miclea, L., “Access Control in Cyber-Physical Systems Using Steganography and Digital Signatures,” in Industrial Technology (ICIT), 2015 IEEE International Conference on, vol., no., pp. 1504-1509, 17-19 March 2015. doi:10.1109/ICIT.2015.7125309
Abstract: In a world in which technology has an essential role, security of the systems we use is a crucial aspect. Most of the time this means ensuring communications' security, protecting data and it automatically makes us think of cryptography, changing the form of the data so no one can view it without authorization. Cyber-physical systems are more and more present in critical applications in which security is of the utmost importance. In the present paper, we propose a look on security not by encrypting data but by controlling the access to the system. For this we combine digital signatures with an encryption algorithm with divided private key in order to control access to the system and to define roles for each user. We also add steganography, to increase the level of security of the system.
Keywords: authorisation; data protection; digital signatures; private key cryptography; steganography; access control; authorization; communication security; cryptography; cyber-physical systems; data protection; divided private key; encryption algorithm; Access control; Digital signatures; Encryption; Multi-agent systems; Public key; digital signature; hierarchical access; multi-agent systems; (ID#: 15-7167)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125309&isnumber=7125066

 

Leitao, P.; Barbosa, J.; Papadopoulou, M.-E.C.; Venieris, I.S., “Standardization in Cyber-Physical Systems: The ARUM Case,” in Industrial Technology (ICIT), 2015 IEEE International Conference on, vol., no., pp. 2988-2993, 17-19 March 2015. doi:10.1109/ICIT.2015.7125539
Abstract: Cyber-physical systems concept supports the realization of the Industrie 4.0 vision towards the computerization of traditional industries, aiming to achieve intelligent and reconfigurable factories. Standardization assumes a critical role in the industrial adoption of cyber-physical systems, namely in the integration of legacy systems as well as the smooth migration from existing running systems to the new ones. This paper analyses some existing standards in related fields and presents identified limitations and efforts for a wider acceptance of such systems by industry. Special attention is devoted to the efforts to develop a standard-compliant service-oriented multi-agent system solution within the ARUM project.
Keywords: Internet; multi-agent systems; production engineering computing; production facilities; production management; service-oriented architecture; software maintenance; ARUM project; Industrie 4.0 vision; adaptive production management project; cyberphysical systems; industry computerization; intelligent factories; legacy systems; reconfigurable factories; standard-compliant service-oriented multiagent system solution; Industries; Interoperability; Protocols; Real-time systems; Security; Standards (ID#: 15-7168)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125539&isnumber=7125066

 

Tsigkanos, C.; Pasquale, L.; Ghezzi, C.; Nuseibeh, B., “Ariadne: Topology Aware Adaptive Security for Cyber-Physical Systems,” in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 2, no., pp. 729-732, 16-24 May 2015. doi:10.1109/ICSE.2015.234
Abstract: This paper presents Ariadne, a tool for engineering topology aware adaptive security for cyber-physical systems. It allows security software engineers to model security requirements together with the topology of the operational environment. This model is then used at runtime to perform speculative threat analysis to reason about the consequences that topological changes arising from the movement of agents and assets can have on the satisfaction of security requirements. Our tool also identifies an adaptation strategy that applies security controls when necessary to prevent potential security requirements violations.
Keywords: security of data; software tools; Ariadne tool; adaptation strategy; cyber-physical systems; engineering topology aware adaptive security; security software engineers; speculative threat analysis; Adaptation models; Mobile handsets; Ports (Computers); Runtime; Security; Servers; Topology; Adaptive Systems; Verification (ID#: 15-7169)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7203054&isnumber=7202933

 

Xiaofan He; Huaiyu Dai; Peng Ning, “Improving Learning and Adaptation in Security Games by Exploiting Information Asymmetry,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1787-1795, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218560
Abstract: With the advancement of modern technologies, the security battle between a legitimate system (LS) and an adversary is becoming increasingly sophisticated, involving complex interactions in unknown dynamic environments. Stochastic game (SG), together with multi-agent reinforcement learning (MARL), offers a systematic framework for the study of information warfare in current and emerging cyber-physical systems. In practical security games, each player usually has only incomplete information about the opponent, which induces information asymmetry. This work exploits information asymmetry from a new angle, considering how to exploit local information unknown to the opponent to the player’s advantage. Two new MARL algorithms, termed minimax-PDS and WoLF-PDS, are proposed, which enable the LS to learn and adapt faster in dynamic environments by exploiting its private local information. The proposed algorithms are provably convergent and rational, respectively. Also, numerical results are presented to show their effectiveness through two concrete anti-jamming examples.
Keywords: learning (artificial intelligence); multi-agent systems; security of data; stochastic games; LS; MARL; SG; WoLF-PDS; adaptation; concrete anti-jamming; cyber-physical systems; information asymmetry; information warfare; legitimate system; minimax-PDS; multiagent reinforcement learning; security games; stochastic game; unknown dynamic environments; Computers; Conferences; Games; Heuristic algorithms; Jamming; Security; Sensors (ID#: 15-7170)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218560&isnumber=7218353
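The minimax-PDS and WoLF-PDS updates of the entry above are not reproduced here, but the tabular Q-learning step that multi-agent reinforcement learning methods generalize gives the flavor of learning in such security games; the alpha and gamma values and the anti-jamming example are illustrative assumptions.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step: nudge the value of taking action a
    in state s toward the observed reward plus the discounted best
    value of the next state. alpha/gamma are illustrative values."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

# Hypothetical anti-jamming use: a defender learns channel decisions
Q, actions = {}, ["stay", "hop"]
q_update(Q, s="jammed", a="hop", r=1.0, s_next="clear", actions=actions)
```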

 

Weina Ma; Sartipi, K.; Sharghigoorabi, M., “Security Middleware Infrastructure for Medical Imaging System Integration,” in Advanced Communication Technology (ICACT), 2015 17th International Conference on, vol., no., pp. 353-357, 1-3 July 2015. doi:10.1109/ICACT.2015.7224818
Abstract: With the increasing demand of electronic medical records sharing, it is a challenge for medical imaging service providers to protect the patient privacy and secure their IT infrastructure in an integrated environment. In this paper, we present a novel security middleware infrastructure for seamlessly and securely linking legacy medical imaging systems, diagnostic imaging web applications as well as mobile applications. Software agent such as user agent and security agent have been integrated into medical imaging domains that can be trained to perform tasks. The proposed security middleware utilizes both online security technologies such as authentication, authorization and accounting, and post security procedures to discover system security vulnerability. By integrating with the proposed security middleware, both legacy system users and Internet users can be uniformly identified and authenticated; access to patient diagnostic images can be controlled based on patient’s consent directives and other access control polices defined at a central point; relevant user access activities can be audited at a central repository; user access behaviour patterns are mined to refine existing security policies. A case study is presented based on the proposed infrastructure.
Keywords: authorisation; data privacy; medical image processing; middleware; software agents; IT infrastructure security; accounting technology; authentication technology; authorization technology; diagnostic imaging Web applications; electronic medical records; information technology; legacy medical imaging systems; medical imaging service providers; medical imaging system integration; mobile applications; patient privacy; security agent; security middleware infrastructure; software agent; system security vulnerability; user agent; Authentication; Authorization; Biomedical imaging; Middleware; Picture archiving and communication systems; Access Control; Agent; Behaviour Pattern; Medical Imaging; Middleware; Security (ID#: 15-7171)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7224818&isnumber=7224736

 

Salih, R.M.; Lilien, L.T., “Protecting Users’ Privacy in Healthcare Cloud Computing with APB-TTP,” in Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, vol., no., pp. 236-238, 23-27 March 2015. doi:10.1109/PERCOMW.2015.7134034
Abstract: We report on use of Active Privacy Bundles using a Trusted Third Party (APB-TTP) for protecting privacy of users’ healthcare data (incl. patients’ Electronic Health Records). APB-TTP protects data that are being disseminated among different authorized parties within a healthcare cloud. We are nearing completion of the pilot APB-TTP for healthcare applications, and commencing work on its extension, named Active Privacy Bundles with Multi Agents (APB-MA).
Keywords: cloud computing; data privacy; data protection; electronic health records; health care; information dissemination; multi-agent systems; trusted computing; APB-TTP; active privacy bundle with multiagents; healthcare applications; healthcare cloud computing; patient electronic health records; trusted third party; user privacy protection; Cloud computing; Data privacy; Electronic medical records; Medical services; Pervasive computing; Privacy; Security; active privacy bundle; confidentiality; privacy; trust; virtual machine (ID#: 15-7172)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134034&isnumber=7133953

 

Shunrong Jiang; Xiaoyan Zhu; Ripei Hao; Haotian Chi; Hui Li; Liangmin Wang, “Lightweight and Privacy-Preserving Agent Data Transmission for Mobile Healthcare,” in Communications (ICC), 2015 IEEE International Conference on, vol., no., pp. 7322-7327, 8-12 June 2015. doi:10.1109/ICC.2015.7249496
Abstract: With the pervasiveness of smartphones and the advance of wireless body sensor networks (WBSNs), mobile healthcare (m-healthcare) has attracted considerable interest recently. In m-Healthcare, users’ smartphones serve as bridges connecting their WBSNs and the healthcare center (HCC), i.e., send users' personal health information (PHI) collected by WBSNs to the HCC and receive the feedback. However, users’ smartphones are not always available (e.g., left at home or out of power), resulting in an unexpected interruption of medical services sometimes, which are not considered in most existing schemes for m-healthcare. In this paper, we propose a lightweight and privacy-preserving agent data transmission scheme for m-healthcare in opportunistic social networks on condition that the smartphone is not available. By using the proposed protocol, we can provide uninterrupted healthcare while keeping the user’s identity and PHI private during the agent transmitting of PHI. Security and performance analysis show that the proposed scheme can realize privacy-preservation and achieve secure end-to-end communication for m-healthcare, and is suitable for resource-limited WBSNs.
Keywords: body sensor networks; data communication; data privacy; health care; medical information systems; mobile computing; smart phones; social networking (online); telecommunication security; HCC; PHI; healthcare center; lightweight agent data transmission scheme; m-healthcare; medical services; mobile healthcare; opportunistic social networks; personal health information; privacy preserving agent data transmission; protocol; resource limited WBSN; secure end-to-end communication; security analysis; smartphone; wireless body sensor networks; Cryptography; Data communication; Data privacy; Medical services; Privacy; Smart phones (ID#: 15-7173)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249496&isnumber=7248285

 

Chih Hung Wang; Hsiao Chien Sung, “Delegation-Based Roaming Payment Protocol with Location and Purchasing Privacy Protection,” in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp. 97-103, 24-26 May 2015. doi:10.1109/AsiaJCIS.2015.25
Abstract: We proposed a new delegation-based roaming payment protocol for portable communication systems (PCS), by leveraging the good performance of blind signatures with regard to user privacy, which can provide unlinkability between the PCS and service providers. However, the ability to discover a malicious user's identification still remains. Home agents can detect the misbehavior and identify the mobile user if she/he doubly spends the e-cash in roaming. Due to the delegation-based authentication, the foreign agent can validate the communication without needing to reveal the real identity of the mobile user. Moreover, the computational cost can be reduced by using elliptic curve operations.
Keywords: cryptographic protocols; data privacy; electronic money; mobile commerce; public key cryptography; purchasing; blind signature; delegation based roaming payment protocol; e-cash; electronic cash; elliptic curve operation; location protection; malicious user identification; portable communication systems; purchasing privacy protection; user privacy; Authentication; Ciphers; Mobile communication; Privacy; Protocols; Public key; Delegation; blind signature; network security; payment protocol; roaming authentication (ID#: 15-7174)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153942&isnumber=7153836

 

Falcone, R.; Sapienza, A.; Castelfranchi, C., “Recommendation of Categories in an Agents World: The Role of (Not) Local Communicative Environments,” in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, vol., no., pp. 7-13, 21-23 July 2015. doi:10.1109/PST.2015.7232948
Abstract: Due to Internet and social media web, the world as we know it is deeply changing integrating two different aspects of the social interaction: the one that develop in the real world and the one that develop in web society. In this paper we focus on the importance of generalized knowledge (agents' categories) in order to understand how much it is crucial in these two worlds. The cognitive advantage of generalized knowledge can be synthesized in this claim: "It allows us to know a lot about something/somebody we do not directly know". At a social level this means that I can know a lot of things on people that I never met; it is social "prejudice" with its good side and fundamental contribution to social exchange. In this study we will analyse and present some differences between the social relationships in the two worlds and how they influence categories' reputation. On this basis, we will experimentally inquire the role played by categories' reputation with respect to the reputation and opinion on single agents: when it is better to rely on the first ones and when are more reliable the second ones. We will consider these simulations for both the two kind of world, investigating how the parameters defining the specific environment (number of agents, their interactions, transfer of reputation, and so on) determine the use of categories' reputation and trying to understand how the role played by categories will be important in the new digital worlds.
Keywords: Internet; cognition; multi-agent systems; social networking (online); trusted computing; Internet; Web society; agents world; categories reputation; cognitive advantage; generalized knowledge; local communicative environment; recommendation; social interaction; social media Web; Context; Dogs; Organizations; Reliability; Sociology; Statistics; Uncertainty; cognitive analysis; social simulations; trust (ID#: 15-7175)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232948&isnumber=7232940

 

Fadaraliki, D.I.; Rajendran, S., “Process Offloading from Android Device to Cloud Using JADE,” in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1-5, 19-20 March 2015. doi:10.1109/ICCPCT.2015.7159260
Abstract: Offloading of data, applications, processes and services in mobile devices is done to reduce the power consumption by the mobile device and also to allow high end complex processes to run on a mobile interface utilizing the processing capabilities and storage mechanism of the cloud (not the mobile devices). Because data processing and management take place in distributed or remote locations (the cloud), security and privacy are dependent upon the cloud providers. Data, instructions and code are transmitted between nodes (service provider and mobile device) as plain code. In this research, we propose the use of a mobile agent based framework that allows capabilities to transmit data between remote nodes. The agents' responsibilities include automatically migrating the bundled state and code from one authenticated mobile user to execute at a remote location (cloud environment) and return the results to the mobile device without the knowledge and involvement of the user. The agents can also be equipped with intelligent behaviours to check for tampering by malicious host on the code or bundled data. This framework is to be developed using a Java based platform called JADE.
Keywords: Java; cloud computing; data privacy; mobile computing; security of data; smart phones; user interfaces; Android device; JADE; Java based platform; cloud storage mechanism; data security; mobile agent based framework; mobile cloud computing; mobile interface; process offloading; Containers; Java; Mobile agents; Mobile communication; Mobile handsets; Security; Virtual machining; cloud environment; mobile agent; offloading (ID#: 15-7176)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159260&isnumber=7159156

 

Dali, L.; Abouelmehdi, K.; Bentajer, A.; Elsayed, H.; Abdelmajid, E.; Abderahim, B., “A Survey of Intrusion Detection System,” in Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, vol., no., pp. 1-6, 21-23 March 2015. doi:10.1109/WSWAN.2015.7210351
Abstract: In this paper, we presented a survey on intrusion detection systems (IDS). First, we referred to different mechanisms of intrusion detection. Furthermore, we detailed the types of IDS. We have focused on the application IDS, specifically on the IDS Network, and the IDS in the cloud computing environment. Finally, the contribution of every single type of IDS is described.
Keywords: cloud computing; security of data; IDS network; cloud computing environment; intrusion detection system; Cloud computing; Computer science; Computers; Intrusion detection; Monitoring; Privacy; Cloud Computing; Intrusion Detection System; Multi Agents; Web Security (ID#: 15-7177)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210351&isnumber=7209078

 

Jemel, M.; Ben Azzouna, N.; Ghedira, K., “ECA Rules for Controlling Authorisation Plan to Satisfy Dynamic Constraints,” in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, vol., no., pp. 133-138, 21-23 July 2015. doi:10.1109/PST.2015.7232964
Abstract: The workflow satisfiability problem has been studied by researchers in the security community using various approaches. The goal is to ensure that the user/role is authorised to execute the current task and that this permission doesn't prevent the remaining tasks in the workflow instance to be achieved. A valid authorisation plan consists in affecting authorised roles and users to workflow tasks in such a way that all the authorisation constraints are satisfied. Previous works are interested in workflow satisfiability problem by considering intra-instance constraints, i.e. constraints which are applied to a single instance. However, inter-instance constraints which are specified over multiple workflow instances are also paramount to mitigate the security frauds. In this paper, we present how ECA (Event-Condition-Action) paradigm and agent technology can be exploited to control authorisation plan in order to meet dynamic constraints, namely intra-instance and inter-instance constraints. We present a specification of a set of ECA rules that aim to achieve this goal. A prototype implementation of our proposed approach is also provided in this paper.
Keywords: authorisation; software agents; ECA rules; agent technology; authorisation constraints; authorisation plan control; dynamic constraints; event-condition-action paradigm; interinstance constraints; intrainstance constraints; security community; security frauds; workflow satisfiability problem; Authorization; Complexity theory; Context; Engines; Planning; Receivers (ID#: 15-7178)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232964&isnumber=7232940
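To make the ECA paradigm in the entry above concrete, a toy event-condition-action rule is sketched below, enforcing an inter-instance separation-of-duty constraint. The event names, the condition, and the deny action are assumptions for illustration, not the paper's specification.

```python
class EcaRule:
    """Toy Event-Condition-Action rule: when the named event arrives
    and the condition holds on the context, run the action."""

    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

    def fire(self, event, context):
        if event == self.event and self.condition(context):
            self.action(context)

# Inter-instance separation of duty: whoever submitted a request in
# any workflow instance must not be authorised to approve one.
history = {("submit", "alice")}        # (task, user) pairs seen so far

rule = EcaRule(
    event="task_requested",
    condition=lambda ctx: ctx["task"] == "approve"
                          and ("submit", ctx["user"]) in history,
    action=lambda ctx: ctx.update(decision="deny"),
)

ctx = {"task": "approve", "user": "alice"}
rule.fire("task_requested", ctx)
print(ctx.get("decision"))             # deny
```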
 


Note:

Articles listed on these pages have been found on publicly available Internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Anonymity and Privacy 2015


Minimizing privacy risk is one of the major problems arising from the development of social media and handheld smartphone technologies. K-anonymity is one of the main methods for anonymizing data, and many of the articles cited here focus on it to ensure privacy. Others look at elliptic curve keys and privacy-enhancing techniques more generally. These articles were presented in 2015. The Science of Security topics addressed include privacy, governance-based collaboration, resiliency, and metrics.
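Since several of the entries below build on k-anonymity, a minimal sketch of the property itself may help: a data release is k-anonymous if every combination of quasi-identifier values is shared by at least k records. The records and attribute names below are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values is shared
    by at least k records, so no individual stands out."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers)
                     for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized release: zip and age are quasi-identifiers
records = [
    {"zip": "212**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "212**", "age": "20-29", "diagnosis": "cold"},
    {"zip": "212**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "212**", "age": "30-39", "diagnosis": "asthma"},
]
print(is_k_anonymous(records, ["zip", "age"], k=2))   # True
```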



Ward, J.R.; Younis, M., “Base Station Anonymity Distributed Self-Assessment in Wireless Sensor Networks,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 103–108, 27–29 May 2015. doi:10.1109/ISI.2015.7165947
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, the sensors act as data sources and forward information generated by event triggers to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN with the least amount of effort. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to identify the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Previous work has proposed anonymity-boosting techniques to improve the BS’s anonymity posture, but all require some amount of overhead such as increased energy consumption, increased latency, or decreased throughput. If the BS understood its own anonymity posture, then it could evaluate whether the benefits of employing an anti-traffic analysis technique are worth the associated overhead. In this paper we propose two distributed approaches to allow a BS to assess its own anonymity and correspondingly employ anonymity-boosting techniques only when needed. Our approaches allow a WSN to increase its anonymity on demand, based on real-time measurements, and therefore conserve resources. The simulation results confirm the effectiveness of our approaches.
Keywords: security of data; wireless sensor networks; WSN; anonymity-boosting techniques; anti-traffic analysis technique; base station; base station anonymity distributed self-assessment; conventional security mechanisms; improved BS anonymity; Current measurement; Energy consumption; Entropy; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 15-6515)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165947&isnumber=7165923

 

Kangsoo Jung; Seongyong Jo; Seog Park, “A Game Theoretic Approach for Collaborative Caching Techniques in Privacy Preserving Location-Based Services,” in Big Data and Smart Computing (BigComp), 2015 International Conference on, vol., no., pp. 59–62, 9–11 Feb. 2015. doi:10.1109/35021BIGCOMP.2015.7072852
Abstract: The number of users who use location-based services (LBS) is increasing rapidly along with the proliferation of mobile devices such as the smartphone. However, LBS users have concerned about their privacy because the collected individual location information can pose a privacy violation. Therefore, it is no wonder that a lot of research is being conducted on topic such as location k-anonymity and pseudonym to prevent privacy threats. However, existing research has several limitations when applied to real world applications. In this paper, we propose a novel architecture to preserve the location privacy in LBS using the Virtual Individual Server (VIS) to overcome drawbacks in existing techniques. We also introduce the collaborative caching technique which shares extra query results among users to mitigate privacy/performance tradeoffs. Game theory is used to overcome the free rider problem that can occur during the sharing process. Simulation results show that the proposed technique achieves sufficient privacy protection and reduces system performance degradation.
Keywords: data privacy; game theory; mobile computing; smart phones; LBS; VIS; collaborative caching technique; free rider problem; game theoretic approach; location information; location k-anonymity; mobile devices; privacy preserving location-based services; privacy protection; privacy threats; privacy violation; smartphone; virtual individual server; Electronic countermeasures; Frequency modulation; Integrated circuits; Caching; Location-based service; Privacy (ID#: 15-6516)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072852&isnumber=7072806

 

Abuzneid, A.-S.; Sobh, T.; Faezipour, M., “An Enhanced Communication Protocol for Anonymity and Location Privacy in WSN,” in Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE, vol., no., pp. 91–96, 9–12 March 2015. doi:10.1109/WCNCW.2015.7122535
Abstract: Wireless sensor networks (WSNs) consist of many sensors working as hosts. These sensors can sense a phenomenon and represent it in a form of data. There are many applications for WSNs such as object tracking and monitoring where the objects need protection. Providing an efficient location privacy solution would be challenging to achieve due to the exposed nature of the WSN. The communication protocol needs to provide location privacy measured by anonymity, observability, capture-likelihood and safety period. We extend this work to allow for countermeasures against semi-global and global adversaries. We present a network model that is protected against a sophisticated passive and active attacks using local, semi-global, and global adversaries.
Keywords: protocols; telecommunication security; wireless sensor networks; WSN; active attacks; anonymity; capture-likelihood; communication protocol enhancement; global adversaries; local adversaries; location privacy; object tracking; observability; passive attacks; safety period; semiglobal adversaries; wireless sensor networks; Conferences; Energy efficiency; Internet of things; Nickel; Privacy; Silicon; Wireless sensor networks; WSN; contextual privacy; privacy; sink privacy; source location privacy (ID#: 15-6517)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122535&isnumber=7122513

 

Ammar, Nariman; Malik, Zaki; Medjahed, Brahim; Alodib, Mohammed, “K-Anonymity Based Approach for Privacy-Preserving Web Service Selection,” in Web Services (ICWS), 2015 IEEE International Conference on, vol., no., pp. 281–288, June 27 2015–July 2 2015. doi:10.1109/ICWS.2015.46
Abstract: To guarantee privacy in service oriented environments, it is essential to check for compatibility between a client’s privacy requirements and a Web service privacy policies before invoking the Web service operation. In this paper, we focus on privacy at the Web service operation level. We present an approach that integrates k-Anonymity into a privacy management framework using Web Services Conversation Language (WSCL) definitions. In particular, we use the notion of k-Anonymity to determine the extent to which the invocation of an operation can be inferred if one knows that a downstream operation was invoked. We provide both a formal definition as well as an implementation of the proposed approach.
Keywords: Arrays; Data privacy; Government; Phase change materials; Privacy; Silicon; Web services; K-Anonymity; Service selection; privacy (ID#: 15-6518)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195580&isnumber=7195533

 

Niu, B.; Xiaoyan Zhu; Weihao Li; Hui Li; Yingjuan Wang; Zongqing Lu, “A Personalized Two-Tier Cloaking Scheme for Privacy-Aware Location-Based Services,” in Computing, Networking and Communications (ICNC), 2015 International Conference on, vol., no., pp. 94–98, 16–19 Feb. 2015. doi:10.1109/ICCNC.2015.7069322
Abstract: The ubiquity of modern mobile devices with GPS modules and Internet connectivity such as 3G/4G techniques have resulted in rapid development of Location-Based Services (LBSs). However, users enjoy the convenience provided by the untrusted LBS server at the cost of their privacy. To protect user’s sensitive information against adversaries with side information, we design a personalized spatial cloaking scheme, termed TTcloak, which provides k-anonymity for user’s location privacy, 1-diversity for query privacy and desired size of cloaking region for mobile users in LBSs, simultaneously. TTcloak uses Dummy Query Determining (DQD) algorithm and Dummy Location Determining (DLD) algorithm to find out a set of realistic cells as candidates, and employs a CR-refinement Module (CRM) to guarantee that dummy users are assigned into the cloaking region with desired size. Finally, thorough security analysis and empirical evaluation results verify our proposed TTcloak.
Keywords:  3G mobile communication; 4G mobile communication; Global Positioning System; Internet; data privacy; mobile computing; mobility management (mobile radio); telecommunication security; telecommunication services;3G techniques; 4G techniques; CR-refinement module; CRM; DLD algorithm; DQD algorithm; GPS modules; Internet connectivity; LBS server; TTcloak; cloaking region; dummy location determining algorithm; dummy query determining algorithm; dummy users; mobile users; modern mobile devices; personalized spatial cloaking scheme; personalized two-tier cloaking scheme; privacy-aware location-based services; query privacy; security analysis; user location privacy; Algorithm design and analysis; Complexity theory; Entropy; Mobile radio mobility management; Privacy; Servers (ID#: 15-6519)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069322&isnumber=7069279
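The dummy-based cloaking idea in the abstract above can be sketched simply; TTcloak's DQD/DLD candidate selection is far more careful (it accounts for query and location plausibility), so this uniform-random version is only an illustrative assumption.

```python
import random

def cloak_with_dummies(real_cell, candidate_cells, k):
    """Return a cloaking set of k cells containing the real one plus
    k-1 dummies drawn uniformly from the candidates (requires at
    least k-1 other candidates). Illustrative sketch only."""
    others = [c for c in candidate_cells if c != real_cell]
    cloak = random.sample(others, k - 1) + [real_cell]
    random.shuffle(cloak)              # hide the real cell's position
    return cloak

print(cloak_with_dummies("cell-17", [f"cell-{i}" for i in range(40)], k=5))
```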

 

Firoozjaei, M.D.; Jaegwan Yu; Hyoungshick Kim, “Privacy Preserving Nearest Neighbor Search Based on Topologies in Cellular Networks,” in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, vol., no., pp. 146–149, 24–27 March 2015. doi:10.1109/WAINA.2015.22
Abstract: As the popularity of location-based services (LBSes) is increasing, the location privacy has become a main concern. Among the rich collection of location privacy techniques, the spatial cloaking is one of the most popular techniques. In this paper, we propose a new spatial cloaking technique to hide a user’s location under a cloaking of the serving base station (SeNB) and anonymize SeNB with a group of dummy locations in the neighboring group of another base station as central eNB (CeNB). Unlike the most existing approaches for selecting a dummy location, such as the center of a virtual circle, we select a properly chosen dummy location from real locations of eNBs to minimize side information for an adversary. Our experimental results show that the proposed scheme can achieve a reasonable degree of accuracy (>96%) for nearest neighbor services while providing a high level of location privacy.
Keywords: cellular neural nets; data privacy; mobile computing; CeNB; LBSes; SeNB; base station; cellular networks; central eNB; dummy location; location privacy techniques; location-based services; privacy preserving nearest neighbor search; serving base station; spatial cloaking technique; virtual circle; Conferences; Google; Monte Carlo methods; Nearest neighbor searches; Network topology; Privacy; Topology; Location-based service (LBS); anonymity; eNode B (eNB); spatial cloaking (ID#: 15-6520)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096162&isnumber=7096097

 

Lindenberg, Pierre Pascal; Bo-Chao Cheng; Yu-Ling Hsueh, “Novel Location Privacy Protection Strategies for Location-Based Services,” in Ubiquitous and Future Networks (ICUFN), 2015 Seventh International Conference on, vol., no., pp. 866–870, 7–10 July 2015. doi:10.1109/ICUFN.2015.7182667
Abstract: The usage of Location-Based Services (LBS) holds a potential privacy issue when people exchange their locations for information relative to these locations. While most people perceive these information exchange services as useful, others do not, because an adversary might take advantage of the users’ sensitive data. In this paper, we propose k-path, an algorithm for privacy protection for continuous location tracking-typed LBS. We take inspiration in k-anonymity to hide the user location or trajectory among k locations or trajectories. We introduce our simulator as a tool to test several strategies to hide users’ locations. Afterwards, this paper will give an evaluation about the effectiveness of several approaches by using the simulator and data provided by the GeoLife data set.
Keywords: Data privacy; History; Mobile radio mobility management; Privacy; Sensitivity; Trajectory; Uncertainty; Location-Based Service; k-anonymity (ID#: 15-6521)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182667&isnumber=7182475

 

Amin, R.; Biswas, G.P., “Anonymity Preserving Secure Hash Function Based Authentication Scheme for Consumer USB Mass Storage Device,” in Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, vol., no., pp. 1–6, 7–8 Feb. 2015. doi:10.1109/C3IT.2015.7060190
Abstract: A USB (Universal Serial Bus) mass storage device, which makes a (USB) device accessible to a host computing device and enables file transfers after completing mutual authentication between the authentication server and the user. It is also very popular device because of its portability, large storage capacity and high transmission speed. To protect the privacy of a file transferred to a storage device, several security protocols have been proposed but none of them is completely free from security weaknesses. Recently He et al. proposed a multi-factor based security protocol which is efficient but the protocol is not applicable for practical implementation, as they does not provide password change procedure which is an essential phase in any password based user authentication and key agreement protocol. As the computation and implementation of the cryptographic one-way hash function is more trouble-free than other existing cryptographic algorithms, we proposed a light weight and anonymity preserving three factor user authentication and key agreement protocol for consumer mass storage devices and analyzes our proposed protocol using BAN logic. Furthermore, we have presented informal security analysis of the proposed protocol and confirmed that the protocol is completely free from security weaknesses and applicable for practical implementation.
Keywords: cryptographic protocols; file organisation; BAN logic; USB device; anonymity preserving secure hash function based authentication scheme; anonymity preserving three factor user authentication; authentication server; consumer USB mass storage device; consumer mass storage devices; cryptographic algorithms; cryptographic one-way hash function; file transfers; host computing device; informal security analysis; key agreement protocol; multifactor based security protocols; password based user authentication; password change procedure; storage capacity; universal serial bus mass storage device; Authentication; Cryptography; Protocols; Servers; Smart cards; Universal Serial Bus; Anonymity; Attack; File Secrecy; USB MSD; authentication (ID#: 15-6522)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060190&isnumber=7060104

 

Mingming Guo; Pissinou, N.; Iyengar, S.S., “Pseudonym-Based Anonymity Zone Generation for Mobile Service with Strong Adversary Model,” in Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, vol., no., pp. 335–340, 9–12 Jan. 2015. doi:10.1109/CCNC.2015.7157998
Abstract: The popularity of location-aware mobile devices and the advances of wireless networking have seriously pushed location-based services into the IT market. However, moving users need to report their coordinates to an application service provider to utilize interested services that may compromise user privacy. In this paper, we propose an online personalized scheme for generating anonymity zones to protect users with mobile devices while on the move. We also introduce a strong adversary model, which can conduct inference attacks in the system. Our design combines a geometric transformation algorithm with a dynamic pseudonyms-changing mechanism and user-controlled personalized dummy generation to achieve strong trajectory privacy preservation. Our proposal does not involve any trusted third-party and will not affect the existing LBS system architecture. Simulations are performed to show the effectiveness and efficiency of our approach.
Keywords: authorisation; data privacy; mobile computing; IT market; LBS system architecture; anonymity zone generation; application service provider; dynamic pseudonyms-changing mechanism; geometric transformation algorithm; inference attacks; location-aware mobile devices; location-based services; mobile devices; mobile service; online personalized scheme; pseudonym-based anonymity zone generation; strong-adversary model; strong-trajectory privacy preservation; user data protection; user privacy; user-controlled personalized dummy generation; wireless networking; Computational modeling; Privacy; Quality of service; Anonymity Zone; Design; Geometric; Location-based Services; Pseudonyms; Trajectory Privacy Protection (ID#: 15-6523)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157998&isnumber=7157933

 

Sharma, V.; Chien-Chung Shen, “Evaluation of an Entropy-Based K-Anonymity Model for Location Based Services,” in Computing, Networking and Communications (ICNC), 2015 International Conference on, vol., no., pp. 374–378, 16–19 Feb. 2015. doi:10.1109/ICCNC.2015.7069372
Abstract: As the market for cellular telephones, and other mobile devices, keeps growing, the demand for new services arises to attract the end users. Location Based Services (LBS) are becoming important to the success and attractiveness of next generation wireless systems. To access location-based services, mobile users have to disclose their location information to service providers and third party applications. This raises privacy concerns, which have hampered the widespread use of LBS. Location privacy mechanisms include Anonymization, Obfuscation, Policy Based Scheme, k-anonymity and Adding Fake Events. However most existing solutions adopt the k-anonymity principle. We propose an entropy based location privacy mechanism to protect user information against attackers. We look at the effectiveness of the technique in a continuous LBS scenarios, i.e., where users are moving and recurrently requesting for Location Based Services, we also evaluate the overall performance of the system with its drawbacks.
Keywords: data protection; mobile handsets; mobility management (mobile radio); next generation networks; LBS; cellular telephone; entropy-based k-anonymity model evaluation; location based service; location privacy mechanism; mobile device; mobile user; next generation wireless system; policy based scheme; user information protection; Computational modeling; Conferences; Entropy; Measurement; Mobile communication; Privacy; Query processing; Location Based Services (LBS); entropy; k-anonymity; privacy (ID#: 15-6524)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069372&isnumber=7069279
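
The entropy measure used in this line of work quantifies the effective size of an anonymity set. A short sketch of the standard metric (a generic rendering, not the paper’s exact model):

```python
import math

def anonymity_entropy(beliefs):
    """Shannon entropy (bits) of an attacker's belief about which user
    issued a query; `beliefs` maps candidate user -> weight."""
    total = sum(beliefs.values())
    probs = [w / total for w in beliefs.values() if w > 0]
    return -sum(p * math.log2(p) for p in probs)

# k users with uniform weights yield log2(k) bits of anonymity:
print(anonymity_entropy({"u1": 1, "u2": 1, "u3": 1, "u4": 1}))  # 2.0
# Skewed beliefs shrink the effective anonymity set:
print(anonymity_entropy({"u1": 7, "u2": 1, "u3": 1, "u4": 1}))  # ~1.36
```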

 

Papapetrou, E.; Bourgos, V.F.; Voyiatzis, A.G., “Privacy-Preserving Routing in Delay Tolerant Networks Based on Bloom Filters,” World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2015 IEEE 16th International Symposium on, vol., no., pp. 1–9, 14–17 June 2015. doi:10.1109/WoWMoM.2015.7158148
Abstract: Privacy preservation in opportunistic networks, such as disruption and delay tolerant networks, constitutes a very challenging area of research. The wireless channel is vulnerable to malicious nodes that can eavesdrop on data exchanges. Moreover, all nodes in an opportunistic network can act as routers and thus gain access to sensitive information while forwarding data. Node anonymity and data protection can be achieved using encryption. However, cryptography-based mechanisms are complex to handle and computationally expensive for the participating (mobile) nodes. We propose SimBet-BF, a privacy-preserving routing algorithm for opportunistic networks. The proposed algorithm builds atop the SimBet algorithm and uses Bloom filters to represent routing as well as other sensitive information included in data packets. SimBet-BF provides anonymous communication and avoids expensive cryptographic operations, while the functionality of the SimBet algorithm is not significantly affected. In fact, we show that the required security level can be achieved with a negligible routing performance trade-off.
Keywords: delay tolerant networks; delays; radio networks; telecommunication network routing; telecommunication security; Bloom filters; SimBet algorithm; cryptography based mechanisms; eavesdrop data exchanges; expensive cryptographic operations; malicious nodes; mobile nodes; opportunistic networks; privacy preserving routing algorithm; wireless channel; Cryptography; Measurement; Peer-to-peer computing; Privacy; Protocols; Routing (ID#: 15-6525)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158148&isnumber=7158105
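
Since SimBet-BF carries routing state in Bloom filters, the following self-contained sketch shows the underlying data structure (a generic implementation, not the authors’ code): membership tests can return false positives but never false negatives, and the stored items are never exposed directly.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("node-42")
print("node-42" in bf, "node-7" in bf)   # True False (with high probability)
```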

 

Christin, D.; Bub, D.M.; Moerov, A.; Kasem-Madani, S., “A Distributed Privacy-Preserving Mechanism for Mobile Urban Sensing Applications,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1–6, 7–9 April 2015. doi:10.1109/ISSNIP.2015.7106932
Abstract: In urban sensing applications, participants carry mobile devices that collect sensor readings annotated with spatiotemporal information. However, such annotations put the participants’ privacy at stake, as they can reveal their whereabouts and habits to the urban sensing campaign administrators. A solution to protect the participants’ privacy is to apply the concept of k-anonymity. In this approach, the reported participants’ locations are modified such that at least k - 1 other participants appear to share the same location, and hence become indistinguishable from each other. In existing implementations of k-anonymity, the participants need to reveal their precise locations to either a third party or other participants in order to find k - 1 other participants. As a result, the participants’ location privacy may still be endangered in case of ill-intentioned third-party administrators and/or participants. We tackle this challenge by proposing a novel approach that supports the participants in their search for other participants without disclosing their exact locations to any other parties. To evaluate our approach, we conduct a threat analysis and study its feasibility by means of extensive simulations using a real-world dataset.
Keywords: mobile handsets; sensors; distributed privacy-preserving mechanism; k-anonymity; mobile urban sensing applications; real-world dataset; threat analysis (ID#: 15-6526)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106932&isnumber=7106892

 

Rahman, M.; Sampangi, R.V.; Sampalli, S., “Lightweight Protocol for Anonymity and Mutual Authentication in RFID Systems,” Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, vol., no., pp. 910–915, 9–12 Jan. 2015. doi:10.1109/CCNC.2015.7158097
Abstract: Radio Frequency Identification (RFID) technology is rapidly making its way into next generation automatic identification systems. Despite the encouraging prospects of RFID technology, security threats and privacy concerns limit its widespread deployment. Security in passive RFID tag based systems is a challenge owing to the severe resource restrictions. In this paper, we present a lightweight anonymity / mutual authentication protocol that uses a unique choice of pseudorandom numbers to achieve the basic security goals, i.e., confidentiality, integrity, and authentication. We validate our protocol by security analysis.
Keywords: cryptographic protocols; data integrity; radiofrequency identification; confidentiality; integrity; lightweight anonymity protocol; mutual authentication protocol; next generation automatic identification systems; passive RFID tag system security; pseudorandom numbers; radio frequency identification technology; security analysis; Authentication; Passive RFID tags; Privacy; Protocols; Servers; Anonymity; Mutual Authentication; RFID Security; Security (ID#: 15-6527)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158097&isnumber=7157933
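
The design described here is hash- and nonce-based. A generic sketch of hash-based mutual authentication (illustrative only; the paper’s actual message flow, pseudorandom-number choice, and key handling differ in detail):

```python
import hashlib
import secrets

def h(*parts):
    """One-way hash over the concatenated parts."""
    return hashlib.sha256(b"|".join(parts)).hexdigest()

shared_key = secrets.token_bytes(16)   # provisioned on tag and server

# Server challenges the tag with a fresh nonce.
r_server = secrets.token_bytes(8)

# Tag proves knowledge of the key, binding in its own nonce.
r_tag = secrets.token_bytes(8)
tag_response = h(shared_key, r_server, r_tag)

# Server verifies, then proves itself back (mutual authentication).
assert tag_response == h(shared_key, r_server, r_tag)
server_response = h(shared_key, r_tag)
# The tag would recompute h(shared_key, r_tag) and compare before proceeding.
```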

 

Chaudhari, Manali; Dharawath, Srinu, “Toward a Statistical Framework for Source Anonymity in Sensor Network Using Quantitative Measures,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193169
Abstract: In some sensor network applications, the location and privacy of certain events must remain anonymous and undetectable even under analysis of the network traffic. In this paper, a framework for modeling, investigating, and evaluating anonymity in sensor networks is suggested and results are charted. The suggested two-fold structure first introduces the notion of “interval indistinguishability,” which gives a quantitative measure of anonymity in a sensor network, and second maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. The system is made energy efficient by enhancing the available techniques for choosing the cluster head, and the resulting energy efficiency of the sensor network is charted.
Keywords: Conferences; Energy efficiency; Privacy; Protocols; Technological innovation; Wireless sensor networks; Binary Hypothesis; Interval Indistinguishability; Wireless Sensor Network; residual energy (ID#: 15-6528)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193169&isnumber=7192777
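
The mapping to binary hypothesis testing can be illustrated with a classical two-sample statistic: an attacker compares inter-transmission intervals observed under “dummy traffic only” against those observed when a real event is present. A Kolmogorov-Smirnov sketch (a textbook test shown for illustration, not the paper’s exact procedure):

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of two interval sequences."""
    cdf = lambda xs, v: sum(x <= v for x in xs) / len(xs)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in sorted(set(a) | set(b)))

dummy_only = [random.expovariate(1.0) for _ in range(500)]
with_event = [random.expovariate(1.0) for _ in range(500)]
# A small statistic means the two interval distributions are
# indistinguishable, i.e., the real transmissions are well hidden.
print(ks_statistic(dummy_only, with_event))
```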

 

Seungsoo Baek; Seung-Hyun Seo; Seungjoo Kim, “Preserving Biosensor Users’ Anonymity over Wireless Cellular Network,” Ubiquitous and Future Networks (ICUFN), 2015 Seventh International Conference on, vol., no., pp. 470–475, 7–10 July 2015. doi:10.1109/ICUFN.2015.7182588
Abstract: A wireless body sensor network plays a significant part in mobile E-healthcare monitoring services. Major concerns for a patient’s sensitive information relate to secure data transmission and preserving anonymity. So far, most researchers have focused only on security or privacy issues related to the wireless body area network (WBAN) without considering all the communication vulnerabilities. However, since bio data sensed by biosensors travel over both the WBAN and the cellular network, a privacy-enhanced scheme that covers all the secure communications is required. In this paper, we first point out the weaknesses of previous work in [9]. Then, we propose a novel privacy-enhanced E-healthcare monitoring scheme for wireless cellular networks. Our proposed scheme provides anonymous communication between a patient and a doctor in a wireless cellular network while satisfying security requirements.
Keywords: Bioinformatics; Cloning; Cloud computing; Medical services; Mobile communication; Smart phones; Wireless communication; Anonymity; E-healthcare; Privacy; Unlinkability; Wireless body area network; Wireless cellular network (ID#: 15-6529)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182588&isnumber=7182475

 

Jagdale, B.N.; Patil, M.S., “Emulating Cryptographic Operations for Secure Routing in Ad-Hoc Network,” Pervasive Computing (ICPC), 2015 International Conference on, vol., no., pp. 1–4, 8–10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7086969
Abstract: MANETs are used by many researchers to provide security and to implement protocols for secure routing. Privacy and security are important in applications like military and law-enforcement MANETs. Communication in a MANET is especially susceptible to eavesdropping due to the broadcast nature of radio transmission, so it is necessary to provide security against inside and outside adversaries. There are many existing schemes that provide privacy-preserving routing, but these schemes do not offer complete unlinkability and unobservability. We propose an unobservable secure routing protocol in which data packets and control packets are completely protected. It achieves content unobservability by applying group signatures and ID-based encryption. The protocol works in two stages: anonymous key establishment and unobservable route discovery. We implement the unobservable secure routing protocol over AODV in NS-2 with the security algorithms RSA, AES, and DES, and compare it with plain AODV. Our protocol is more efficient than existing schemes.
Keywords: cryptographic protocols; data privacy; mobile ad hoc networks; routing protocols; telecommunication security; AES security algorithms; AODV security algorithms; DES security algorithms; ID-based encryption; NS-2 simulation; RSA security algorithms; anonymous key establishment; control packets; cryptographic operations; data packets; group signature; law-enforcement MANETs; mobile ad hoc network; privacy preserving routing; radio transmission; secure routing protocol; unobservable route discovery; Cryptography; Delays; Mobile ad hoc networks; Protocols; Routing; MANET; anonymity; group signature; privacy; routing; security; unobservability (ID#: 15-6530)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086969&isnumber=7086957

 

Wallace, Bruce; Goubran, Rafik A.; Knoefel, Frank; Marshall, Shawn; Porter, Michelle; Harlow, Madelaine; Puli, Akshay, “Automation of the Validation, Anonymization, and Augmentation of Big Data from a Multi-year Driving Study,” Big Data (BigData Congress), 2015 IEEE International Congress on, vol., no., pp. 608–614, June 27–July 2, 2015. doi:10.1109/BigDataCongress.2015.93
Abstract: The Candrive/Ozcandrive project is a long-term study, now entering its sixth year, focused on improving the safety of older drivers. The study includes 256 older drivers in the Ottawa area and is an example of a longitudinal study that generates big data sensor information recorded from the participant vehicles. This paper uses the Candrive data and proposes solutions that would enable differential privacy, including a theoretical open access model for the data using k-anonymity techniques for any combination of seven parameters that have identifiable attributes. The dataset includes an in-vehicle sensor that captures Global Positioning System (GPS) and On Board Diagnostics II (OBDII) data for every second that the vehicle is operating. The resulting data set includes hundreds to thousands of hours of data for each of the study vehicles. The paper discusses methods to address the challenge of transitioning a large data set of GPS and other raw sensor samples into data ready for analysis. Automated methods to detect and correct issues in the individual data samples are shown, along with the tools needed to adapt the raw sensor data into formats that can be easily processed. The paper provides solutions to ensure k-anonymity-based privacy of the study participants’ identities for seven parameters, including the location of their homes through vehicle location information or through a combination of the sensor information. The paper presents mechanisms to augment the captured sensor data through fusion with external data resources, bringing added information to the data set, including weather information, road information from mapping sources, and day/night status. Finally, the paper presents the performance applicability for analysis of the resulting dataset within a cloud computing architecture.
Keywords: Data privacy; Engines; Meteorology; Privacy; Roads; Vehicles; Differential Privacy; Global Positioning System (GPS); data analytics; driving; k-Anonymity (ID#: 15-6531)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207277&isnumber=7207183
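
To make the k-anonymity goal concrete, the sketch below (hypothetical cell size and record layout; not the Candrive pipeline) generalizes GPS fixes to grid cells and checks that every cell is shared by at least k participants:

```python
from collections import Counter

def generalize(lat, lon, cell_deg=0.01):
    """Coarsen a GPS fix to a grid cell roughly 1.1 km on a side."""
    return (lat // cell_deg * cell_deg, lon // cell_deg * cell_deg)

def is_k_anonymous(home_fixes, k):
    """True if every generalized home cell is shared by >= k participants."""
    cells = Counter(generalize(lat, lon) for lat, lon in home_fixes)
    return all(count >= k for count in cells.values())

homes = [(45.4215, -75.6972), (45.4218, -75.6969), (45.4223, -75.6975)]
print(is_k_anonymous(homes, k=3))   # True: all three share one cell
```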

 

Kosugi, T.; Hayafuji, T.; Mambo, M., “On the Traceability of the Accountable Anonymous Channel,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp. 6–11, 24–26 May 2015. doi:10.1109/AsiaJCIS.2015.29
Abstract: Anonymous channels guaranteeing the anonymity of senders, such as Tor, are effective for whistle-blowing and other privacy-sensitive scenarios. However, they carry a risk of being abused for illegal activities. As a countermeasure to illegal activities using an anonymous channel, it is natural to construct an accountable anonymous channel which can revoke the anonymity of senders when an unlawful message is sent. In this paper, we point out that the accountable anonymous channel THEMIS does not provide anonymity in a perfect way: there is a possibility that attackers can identify senders even if messages are not malicious. The feasibility of tracing senders is analyzed using simulation. Moreover, we give a simple remedy for the flaw in THEMIS.
Keywords: computer network security; cryptographic protocols; data privacy; THEMIS accountable anonymous channel traceability; attacker possibility; illegal activity; privacy sensitive scenario; sender anonymity; sender tracing; unlawful message; whistle-blowing scenario; Art; Encryption; Mathematical model; Payloads; Public key; Receivers (ID#: 15-6532)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153848&isnumber=7153836

 

Reddy, J.M.; Hota, C., “Heuristic-Based Real-Time P2P Traffic Identification,” Emerging Information Technology and Engineering Solutions (EITES), 2015 International Conference on, vol., no., pp. 38–43, 20–21 Feb. 2015. doi:10.1109/EITES.2015.16
Abstract: Peer-to-Peer (P2P) networks have seen rapid growth, spanning diverse applications like online anonymity (Tor), online payment (Bitcoin), and file sharing (BitTorrent). However, the success of these applications has raised concerns among ISPs and network administrators. These types of traffic worsen network congestion and create security vulnerabilities. Hence, P2P traffic identification has been researched actively in recent times. Early P2P traffic identification approaches were based on port-based inspection. Presently, Deep Packet Inspection (DPI) is a prominent technique used to identify P2P traffic, but it relies on payload signatures, which are not resilient against port masquerading, traffic encryption, and NATing. In this paper, we propose a novel P2P traffic identification mechanism based on host behaviour observed from transport layer headers. A set of heuristics was identified by analysing off-line datasets collected in our test bed. This approach is privacy preserving as it does not examine the payload content. The usefulness of these heuristics is shown on real-time traffic traces received from our campus backbone, where in the best case only 0.20% of flows were unknown.
Keywords: cryptography; data privacy; peer-to-peer computing; telecommunication security; telecommunication traffic; Bitcoin; DPI; ISP; NATing; P2P network; P2P traffic identification mechanism; BitTorrent; deep packet inspection; file sharing; heuristic-based real-time P2P traffic identification; network administrator; off-line dataset; online anonymity; online payment; payload signature; peer-to-peer network; port masquerading; port-based inspection; privacy preserving; real-time traffic; security vulnerability; traffic encryption; transport layer header; Accuracy; Internet; Payloads; Peer-to-peer computing; Ports (Computers); Protocols; Servers (ID#: 15-6533)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7083382&isnumber=7082065
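
A toy rendering of transport-layer host-behaviour heuristics of the kind the paper develops (the features and thresholds here are invented for illustration; the paper derives its own from offline datasets):

```python
def looks_like_p2p(host):
    """Score a host's flow summary against simple P2P tells."""
    score = 0
    if host["uses_tcp"] and host["uses_udp"]:
        score += 1                    # concurrent TCP and UDP activity
    if host["distinct_peers"] > 20:
        score += 1                    # large, flat peer fan-out
    if host["typical_remote_port"] > 1024:
        score += 1                    # unprivileged ports on both ends
    return score >= 2                 # payload is never inspected

host = {"uses_tcp": True, "uses_udp": True,
        "distinct_peers": 35, "typical_remote_port": 51413}
print(looks_like_p2p(host))   # True
```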

 

Daubert, J.; Grube, T.; Muhlhauser, M.; Fischer, M., “Internal Attacks in Anonymous Publish-Subscribe P2P Overlays,” Networked Systems (NetSys), 2015 International Conference and Workshops on, vol., no., pp. 1–8, 9–12 March 2015. doi:10.1109/NetSys.2015.7089074
Abstract: Privacy, in particular anonymity, is desirable in Online Social Networks (OSNs) like Twitter, especially when considering the threat of political repression and censorship. P2P-based publish-subscribe is a well-suited paradigm for OSN scenarios as users can publish and follow topics of interest. However, anonymity in P2P-based publish-subscribe (pub-sub) has hardly been analyzed so far. Research on add-on anonymization systems such as Tor mostly focuses on large-scale traffic analysis rather than malicious insiders. We therefore analyze in more detail colluding insider attackers that operate on the basis of timing information. To that end, we model a generic anonymous pub-sub system, present an attacker model, and discuss timing attacks. We analyze these attacks with a realistic simulation model and discuss potential countermeasures. Our findings indicate that even a few malicious insiders are capable of disclosing a large number of participants, while an attacker using large numbers of colluding nodes achieves only minor additional improvements.
Keywords: data privacy; overlay networks; peer-to-peer computing; social networking (online); OSN; P2P-based publish-subscribe; Twitter; add-on anonymization system; anonymous publish-subscribe P2P overlays; colluding insider attackers; generic anonymous pub-sub system; internal attacks; online social networks; peer-to-peer overlay; timing information; Delays; Mathematical model; Protocols; Publish-subscribe; Subscriptions; Topology (ID#: 15-6534)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7089074&isnumber=7089054
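
A minimal rendering of the timing-based insider attack analyzed here: colluding subscribers log when they first see a message, and nodes that consistently appear earliest become candidate publishers (an illustrative reduction, not the paper’s simulation model):

```python
def candidate_publishers(observations, window):
    """`observations` holds (forwarding_node, receive_time) pairs collected
    by colluding insiders for one message; nodes seen within `window`
    seconds of the earliest sighting are likely near the publisher."""
    first = min(t for _, t in observations)
    return {node for node, t in observations if t - first <= window}

obs = [("n3", 0.002), ("n7", 0.031), ("n1", 0.005), ("n9", 0.044)]
print(candidate_publishers(obs, window=0.01))   # {'n1', 'n3'}
```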

 

Carnielli, A.; Aiash, M., “Will ToR Achieve Its Goals in the ‘Future Internet’? An Empirical Study of Using ToR with Cloud Computing,” Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, vol., no., pp. 135–140, 24–27 March 2015. doi:10.1109/WAINA.2015.78
Abstract: With the wide development and deployment of mobile devices and gadgets, a growing number of users go online in many aspects of their daily lives. The challenge is to enjoy the conveniences of online activities while limiting privacy sacrifices. In response to the increasing number of online-hacking scandals, mechanisms for protecting users’ privacy continue to evolve. An example of such mechanisms is the Onion Router (ToR), free software for enabling online anonymity and resisting censorship. Although ToR is a dominant anonymizer in the current Internet, the emergence of new communication and inter-networking trends such as Cloud Computing, Software Defined Networks, and Information Centric Networks raises the question of whether ToR will fulfil its promises in this “Future Internet.” This paper aims to answer the question by implementing ToR on a number of Cloud platforms and discussing the security properties of ToR.
Keywords: cloud computing; data protection; security of data; Internet; ToR; communication trends; dominant anonymizer; information centric networks; internetworking trends; mobile devices; mobile gadgets; online activities; online anonymity; online-hacking scandals; security properties; software defined networks; the onion router; user privacy protection; Cloud computing; IP networks; Public key; Relays; Servers (ID#: 15-6535)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096160&isnumber=7096097
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Clean Slate 2015

 

 
SoS Logo

Clean Slate

2015


The “clean slate” approach looks at designing networks and internets from scratch, with security built in, in contrast to the Internet as it has evolved. The research presented here covers a range of topics of interest to the Science of Security, including human behavior, resilience, metrics, and policy governance. These works were published or presented in 2015.



Zhong Shao; “Clean-Slate Development of Certified OS Kernels,” CPP ’15, Proceedings of the 2015 Conference on Certified Programs and Proofs, January 2015, Pages 95–96. doi:10.1145/2676724.2693180
Abstract: The CertiKOS project at Yale aims to develop new language-based technologies for building large-scale certified system software. Initially, we thought that verifying an OS kernel would require new program logics and powerful proof automation tools, but it should not be much different from standard Hoare-style program verification. After several years of trials and errors, we have decided to take a different path from the one we originally planned. We now believe that building large-scale certified system software requires a fundamental shift in the way we design the underlying programming languages, program logics, and proof assistants. In this talk, I outline our new clean-slate approach, explain its rationale, and describe various lessons and insights based on our experience with the development of several new certified OS kernels.
Keywords: abstraction layer, certified os kernels, horizontal composition, program verification, vertical composition (ID#: 15-6973)
URL: http://doi.acm.org/10.1145/2676724.2693180

 

Guoshun Nan, Xiuquan Qiao, Yukai Tu, Wei Tan, Lei Guo, Junliang Chen; “Design and Implementation: the Native Web Browser and Server for Content-Centric Networking,” SIGCOMM ’15, Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, August 2015, Pages 609–610. doi:10.1145/2829988.2790024
Abstract: Content-Centric Networking (CCN) has recently emerged as a clean-slate Future Internet architecture with a completely different communication pattern compared with existing IP networks. Since the World Wide Web has become one of the most popular and important applications on the Internet, effectively supporting the dominant browser- and server-based web applications is a key to the success of CCN. However, the existing web browsers and servers are mainly designed for the HTTP protocol over TCP/IP networks and cannot directly support CCN-based web applications. Existing research mainly focuses on plug-in or proxy/gateway approaches at the client and server sides, and these schemes seriously impact service performance due to multiple protocol conversions. To address the above problems, we designed and implemented a CCN web browser and a CCN web server that natively support the CCN protocol. To facilitate a smooth evolution from IP networks to CCN, CCNBrowser and CCNxTomcat also support the HTTP protocol besides CCN. Experimental results show that CCNBrowser and CCNxTomcat outperform existing implementations. Finally, a real CCN-based web application is deployed on a CCN experimental testbed, which validates the applicability of CCNBrowser and CCNxTomcat.
Keywords: content-centric networking, web browser, web server (ID#: 15-6974)
URL: http://doi.acm.org/10.1145/2829988.2790024

 

Tim Nelson, Andrew D. Ferguson, Da Yu, Rodrigo Fonseca, Shriram Krishnamurthi; “Exodus: Toward Automatic Migration of Enterprise Network Configurations to SDNs,” SOSR ’15, Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, June 2015, Article No. 13. doi:10.1145/2774993.2774997
Abstract: We present the design and a prototype of Exodus, a system that consumes a collection of router configurations (e.g., in Cisco IOS), compiles these into a common, intermediate semantic form, and then produces corresponding SDN controller software in a high-level language. Exodus generates networks that are functionally similar to the original networks, with the advantage of having centralized programs that are verifiable and evolvable. Exodus supports a wide array of IOS features, including non-trivial kinds of packet-filtering, reflexive access-lists, NAT, VLANs, static and dynamic routing. Implementing Exodus has exposed several limitations in both today’s languages for SDN programming and in OpenFlow itself. We briefly discuss these lessons learned and provide guidance for future SDN migration efforts.
Keywords: OpenFlow, SDN migration, software-defined networking (ID#: 15-6975)
URL: http://doi.acm.org/10.1145/2774993.2774997

 

Abdulkader Benchi, Pascale Launay, Frédéric Guidec; “Solving Consensus in Opportunistic Networks,” ICDCN ’15, Proceedings of the 2015 International Conference on Distributed Computing and Networking, January 2015, Article No. 1. doi:10.1145/2684464.2684479
Abstract: Opportunistic networks are partially connected wireless ad hoc networks, in which pairwise unpredicted transient contacts between mobile devices are the only opportunities for these devices to exchange information or services. Ensuring the coordination of multiple parts of a distributed application in such conditions is a challenge. This paper presents a system that can solve consensus problems in an opportunistic network. This system combines an implementation of the One-Third Rule (OTR) algorithm with a communication layer that supports network-wide, content-driven message dissemination based on controlled epidemic routing. Experimental results obtained with a small flotilla of smartphones are also presented, that validate the system and demonstrate that consensus can be solved effectively in an opportunistic network.
Keywords: Consensus, opportunistic computing, opportunistic networking (ID#: 15-6976)
URL: http://doi.acm.org/10.1145/2684464.2684479
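
The One-Third Rule algorithm at the core of this system is compact enough to sketch. Below is a textbook rendering of one OTR round as seen by a single process (not the authors’ implementation):

```python
from collections import Counter

def otr_round(received, n):
    """One round of the One-Third Rule for one process. `received` holds
    the values heard this round; `n` is the total number of processes.
    Returns (new_value, decided) where either may be None."""
    if len(received) <= 2 * n / 3:
        return None, None              # too few messages: keep old value
    counts = Counter(received)
    # Adopt the smallest among the most frequently received values.
    top = max(counts.values())
    new_value = min(v for v, c in counts.items() if c == top)
    # Decide once some value was reported by more than 2n/3 processes.
    decided = next((v for v, c in counts.items() if c > 2 * n / 3), None)
    return new_value, decided

print(otr_round([1, 1, 1, 2], n=5))   # (1, None): adopt 1, no decision yet
print(otr_round([1, 1, 1, 1], n=5))   # (1, 1): 4 > 10/3, so decide 1
```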

 

Joel Sommers; “Lowering the Barrier to Systems-level Networking Projects,”  SIGCSE ’15, Proceedings of the 46th ACM Technical Symposium on Computer Science Education, February 2015, Pages 651–656. doi:10.1145/2676723.2677211
Abstract: Developing systems-level networking software to implement switches, routers, and middleboxes is challenging, rewarding, and arguably an essential component for developing a deep understanding of modern computer networks. Unfortunately, existing techniques for building networked system software use low-level and error-prone tools and languages, making this task inaccessible for many undergraduates. Moreover, working at such a low level of abstraction complicates debugging and testing and can make assessment difficult for instructors and TAs. We describe a Python-based environment called Switchyard that is designed to facilitate student projects for building and testing software-based network devices like switches, routers, and middleboxes. Switchyard exposes a networking abstraction similar to a raw socket, which allows a developer to receive and send Ethernet frames on specific network ports, and provides a set of classes to simplify parsing and construction of packets and packet headers. Systems-level software created using Switchyard can be deployed on a standard Linux host or in an emulated environment like Mininet. Perhaps most importantly, Switchyard provides facilities for test-driven development by transparently allowing the underlying network to be replaced with a test harness that is specifically designed to help students through the development and debugging process. We describe experiences with using Switchyard in an undergraduate networking course in which students created an Ethernet learning switch, a fully functional IPv4 router, a firewall with rate limiter, and a deep-packet inspection middlebox device.
Keywords: middleboxes, routing, switching, test-driven development (ID#: 15-6977)
URL: http://doi.acm.org/10.1145/2676723.2677211
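
The kind of low-level work Switchyard abstracts away can be seen in a few lines of plain Python; the sketch below parses an Ethernet header with the standard struct module (a generic illustration, not the Switchyard API):

```python
import struct

def parse_ethernet(frame: bytes):
    """Split a raw frame into the fields a software switch inspects
    before deciding where to forward it."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return {"dst": mac(dst), "src": mac(src),
            "ethertype": hex(ethertype), "payload": frame[14:]}

frame = bytes.fromhex("ffffffffffff" "020000000001" "0800") + b"payload..."
print(parse_ethernet(frame))   # broadcast dst, IPv4 ethertype 0x800
```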

 

Sanjib Sur, Teng Wei, Xinyu Zhang; “Bringing Multi-Antenna Gain to Energy-Constrained Wireless Devices,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 25–36. doi:10.1145/2737095.2737099
Abstract: Leveraging the redundancy and parallelism from multiple RF chains, MIMO technology can easily scale wireless link capacity. However, the high power consumption and circuit-area cost prevent MIMO from being adopted by energy-constrained wireless devices. In this paper, we propose Halma, which can boost link capacity using multiple antennas but a single RF chain, thereby consuming the same power as SISO. While modulating its normal data symbols, a Halma transmitter hops between multiple passive antennas on a per-symbol basis. The antenna hopping pattern implicitly carries extra data, which the receiver can decode by extracting the index of the active antenna using its channel pattern as a signature.  We design Halma by intercepting the antenna switching and channel estimation modules in modern wireless systems, including ZigBee and WiFi. Further, we design a model-driven antenna hopping protocol to balance a tradeoff between link quality and dissimilarity of channel signatures. Remarkably, by leveraging the inherent packet structure in ZigBee, Halma’s link capacity can scale well with the number of antennas. Using the WARP software radio, we have implemented Halma along with a ZigBee- and WiFi-based PHY layer. Our experiments demonstrate that Halma can improve ZigBee’s throughput and energy efficiency by multiple folds under realistic network settings. For WiFi, it consumes similar power as SISO, but boosts throughput across a wide range of link conditions and modulation levels.
Keywords: MIMO, energy efficiency, mobile devices (ID#: 15-6978)
URL: http://doi.acm.org/10.1145/2737095.2737099

 

Xinshu Dong, Hui Lin, Rui Tan, Ravishankar K. Iyer, Zbigniew Kalbarczyk; “Software-Defined Networking for Smart Grid Resilience: Opportunities and Challenges,” CPSS ’15, Proceedings of the 1st ACM Workshop on Cyber-Physical System Security, April 2015, Pages 61–68 . doi:10.1145/2732198.2732203
Abstract: Software-defined networking (SDN) is an emerging networking paradigm that provides unprecedented flexibility in dynamically reconfiguring an IP network. It enables various applications such as network management, quality of service (QoS) optimization, and system resilience enhancement. Pilot studies have investigated the possibilities of applying SDN on smart grid communications, while the specific benefits and risks that SDN may bring to the resilience of smart grids against accidental failures and malicious attacks remain largely unexplored. Without a systematic understanding of these issues and convincing validations of proposed solutions, the power industry will be unlikely to embrace SDN, since resilience is always a key consideration for critical infrastructures like power grids. In this position paper, we aim to provide an initial understanding of these issues, by investigating (1) how SDN can enhance the resilience of typical smart grids to malicious attacks, (2) additional risks introduced by SDN and how to manage them, and (3) how to validate and evaluate SDN-based resilience solutions. Our goal is also to trigger more profound discussions on applying SDN to smart grids and inspire innovative SDN-based solutions for enhancing smart grid resilience.
Keywords: cyber-physical systems, cyber-security, resilience, smart grids, software-defined networking (ID#: 15-6979)
URL: http://doi.acm.org/10.1145/2732198.2732203

 

Tal Mizrahi, Efi Saat, Yoram Moses; “Timed Consistent Network Updates,” SOSR ’15, Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, June 2015, Article No. 21. doi:10.1145/2774993.2775001
Abstract: Network updates such as policy and routing changes occur frequently in Software Defined Networks (SDN). Updates should be performed consistently, preventing temporary disruptions, and should require as little overhead as possible. Scalability is increasingly becoming an essential requirement in SDN. In this paper we propose to use time-triggered network updates to achieve consistent updates. Our proposed solution requires lower overhead than existing update approaches, without compromising the consistency during the update. We demonstrate that accurate time enables far more scalable consistent updates in SDN than previously available. In addition, it provides the SDN programmer with fine-grained control over the tradeoff between consistency and scalability.
Keywords: IEEE 1588, PTP, SDN, clock synchronization, management, time (ID#: 15-6980)
URL: http://doi.acm.org/10.1145/2774993.2775001

 

Luyuan Fang, Fabio Chiussi, Deepak Bansal, Vijay Gill, Tony Lin, Jeff Cox, Gary Ratterree; “Hierarchical SDN for the Hyper-Scale, Hyper-Elastic Data Center and Cloud,” SOSR ’15, Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, June 2015, Article No. 7. doi:10.1145/2774993.2775009
Abstract: With the explosive growth in the demand for cloud services, the Data Center and Data Center Interconnect have to achieve hyper-scale and provide unprecedented elasticity and resource availability. The underlay network infrastructure has to scale to support tens of millions of physical endpoints at low cost; the virtualized overlay layer has to scale to millions of Virtual Networks connecting hundreds of millions of Virtual Machines (VMs) and Virtualized Network Functions (VNFs), and provide seamless VM and VNF mobility.  In this paper, we present Hierarchical SDN (HSDN), an architectural solution that achieves hyper scale using surprisingly small forwarding tables in the network nodes. HSDN introduces a new paradigm for the forwarding and control planes, in that all paths in the network are pre-established in the forwarding tables and the labels identify entire paths rather than simply destinations. These properties of HSDN dramatically simplify establishing tunnels, and thus enable optimal handling of both ECMP and any-to-any end-to-end TE, which in turn yields extremely high network utilization with small buffers in the switches. The pre-established tunnels make HSDN the ideal underlay infrastructure to enable seamless and lossless VM and VNF overlay mobility, and achieve excellent elasticity.  HSDN is suitable for a full SDN implementation, using a scalable SDN controller to configure all forwarding tables in the network nodes and in the endpoints, as well as a hybrid approach, using conventional routing protocols in conjunction with a SDN controller.
Keywords: cloud, data center architecture, scalability, software-defined networking, traffic engineering, virtualization (ID#: 15-6981)
URL: http://doi.acm.org/10.1145/2774993.2775009

 

Arne Ludwig, Jan Marcinkowski, Stefan Schmid; “Scheduling Loop-free Network Updates: It’s Good to Relax!,” PODC ’15, Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, July 2015, Pages 13–22. doi:10.1145/2767386.2767412
Abstract: We consider the problem of updating arbitrary routes in a software-defined network in a (transiently) loop-free manner. We are interested in fast network updates, i.e., in schedules which minimize the number of interactions (i.e., rounds) between the controller and the network nodes. We first prove that this problem is difficult in general: The problem of deciding whether a k-round schedule exists is NP-complete already for k = 3, and there are problem instances requiring Ω(n) rounds, where n is the network size. Given these negative results, we introduce an attractive, relaxed notion of loop-freedom. We prove that O(log n)-round relaxed loop-free schedules always exist, and can also be computed efficiently.
Keywords: NP-hardness, graph algorithms, scheduling, software-defined networking (ID#: 15-6982)
URL: http://doi.acm.org/10.1145/2767386.2767412

 

Pat Pannuto, Yoonmyung Lee, Ye-Sheng Kuo, ZhiYoong Foo, Benjamin Kempke, Gyouho Kim, Ronald G. Dreslinski, David Blaauw, Prabal Dutta; “MBus: An Ultra-Low Power Interconnect Bus for Next Generation Nanopower Systems,” ISCA ’15, Proceedings of the 42nd Annual International Symposium on Computer Architecture, June 2015, Pages 629–641. doi:10.1145/2749469.2750376
Abstract: As we show in this paper, I/O has become the limiting factor in scaling down size and power toward the goal of invisible computing. Achieving this goal will require composing optimized and specialized, yet reusable, components with an interconnect that permits tiny, ultra-low power systems. In contrast to today’s interconnects, which are limited by power-hungry pull-ups or high-overhead chip-select lines, our approach provides a superset of common bus features but at lower power, with fixed area and pin count, using fully synthesizable logic, and with surprisingly low protocol overhead.  We present MBus, a new 4-pin, 22.6 pJ/bit/chip chip-to-chip interconnect made of two “shoot-through” rings. MBus facilitates ultra-low power system operation by implementing automatic power-gating of each chip in the system, easing the integration of active, inactive, and activating circuits on a single die. In addition, we introduce a new bus primitive: power oblivious communication, which guarantees message reception regardless of the recipient's power state when a message is sent. This disentangles power management from communication, greatly simplifying the creation of viable, modular, and heterogeneous systems that operate on the order of nanowatts.  To evaluate the viability, power, performance, overhead, and scalability of our design, we build both hardware and software implementations of MBus and show its seamless operation across two FPGAs and twelve custom chips from three different semiconductor processes. A three-chip, 2.2 mm³ MBus system draws 8 nW of total system standby power and uses only 22.6 pJ/bit/chip for communication. This is the lowest power for any system bus with MBus's feature set.
Keywords: (not provided) (ID#: 15-6983)
URL: http://doi.acm.org/10.1145/2749469.2750376

 

Kevin Boos, Ardalan Amiri Sani, Lin Zhong; “Eliminating State Entanglement with Checkpoint-based Virtualization of Mobile OS Services,” APSys ’15, Proceedings of the 6th Asia-Pacific Workshop on Systems, July 2015, Article No. 20. doi:10.1145/2797022.2797041
Abstract: Mobile operating systems have adopted a service model in which applications access system functionality by interacting with various OS Services in separate processes. These interactions cause application-specific states to be spread across many service processes, a problem we identify as state entanglement. State entanglement presents significant challenges to a wide variety of computing goals: fault isolation, fault tolerance, application migration, live update, and application speculation. We propose CORSA, a novel virtualization solution that uses a lightweight checkpoint/restore mechanism to virtualize OS Services on a per-application basis. This cleanly encapsulates a single application's service-side states into a private virtual service instance, eliminating state entanglement and enabling the above goals. We present empirical evidence that our ongoing implementation of CORSA on Android is feasible with low overhead, even in the worst case of high frequency service interactions.
Keywords: (not provided) (ID#: 15-6984)
URL: http://doi.acm.org/10.1145/2797022.2797041

 

Exequiel Rivas, Mauro Jaskelioff, Tom Schrijvers; “From Monoids to Near-semirings: The Essence of MonadPlus and Alternative,” PPDP ’15, Proceedings of the 17th International Symposium on Principles and Practice of Declarative Programming, July 2015, Pages 196–207. doi:10.1145/2790449.2790514
Abstract: It is well-known that monads are monoids in the category of endofunctors, and in fact so are applicative functors. Unfortunately, the benefits of this unified view are lost when the additional nondeterminism structure of MonadPlus or Alternative is required.  This article recovers the essence of these two type classes by extending monoids to near-semirings with both additive and multiplicative structure. This unified algebraic view enables us to generically define the free construction as well as a novel double Cayley representation that optimises both left-nested sums and left-nested products.
Keywords: Cayley representation, alternative, applicative functor, free construction, monad, monadplus, monoid, near-semiring (ID#: 15-6985)
URL: http://doi.acm.org/10.1145/2790449.2790514

 

Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, Ion Stoica; “Feral Concurrency Control: An Empirical Investigation of Modern Application Integrity,” SIGMOD ’15, Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, May 2015, Pages 1327–1342. doi:10.1145/2723372.2737784
Abstract: The rise of data-intensive “Web 2.0” Internet services has led to a range of popular new programming frameworks that collectively embody the latest incarnation of the vision of Object-Relational Mapping (ORM) systems, albeit at unprecedented scale. In this work, we empirically investigate modern ORM-backed applications’ use and disuse of database concurrency control mechanisms. Specifically, we focus our study on the common use of feral, or application-level, mechanisms for maintaining database integrity, which, across a range of ORM systems, often take the form of declarative correctness criteria, or invariants. We quantitatively analyze the use of these mechanisms in a range of open source applications written using the Ruby on Rails ORM and find that feral invariants are the most popular means of ensuring integrity (and, by usage, are over 37 times more popular than transactions). We evaluate which of these feral invariants actually ensure integrity (by usage, up to 86.9%) and which—due to concurrency errors and lack of database support—may lead to data corruption (the remainder), which we experimentally quantify. In light of these findings, we present recommendations for database system designers for better supporting these modern ORM programming patterns, thus eliminating their adverse effects on application integrity.
Keywords: application integrity, concurrency control, impedance mismatch, invariants, orms, ruby on rails (ID#: 15-6986)
URL: http://doi.acm.org/10.1145/2723372.2737784
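
The core hazard of feral invariants, a check-then-act race that a database constraint would close, fits in a few lines (an illustrative sketch using SQLite; not the paper’s Rails corpus):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT UNIQUE)")

def feral_insert(email):
    """Application-level ('feral') uniqueness check: between the SELECT
    and the INSERT, a concurrent request can insert the same email."""
    (count,) = conn.execute("SELECT COUNT(*) FROM users WHERE email = ?",
                            (email,)).fetchone()
    if count == 0:                                          # check ...
        conn.execute("INSERT INTO users(email) VALUES (?)", (email,))  # ... act

feral_insert("a@example.com")

# The database-backed invariant closes the race window entirely.
try:
    conn.execute("INSERT INTO users(email) VALUES ('a@example.com')")
except sqlite3.IntegrityError as exc:
    print("constraint enforced:", exc)
```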

 

Carsten S. Østerlund, Pernille Bjørn, Paul Dourish, Richard Harper, Daniela K. Rosner; “Sociomateriality and Design,” CSCW ’15 Companion, Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing, February 2015, Pages 126–130. doi:10.1145/2685553.2699336
Abstract: Design research and the literature on sociomateriality emerge out of different academic traditions but share a common interest in the material. A sociomaterial perspective allows us to account for the complex ways people mingle and mangle information systems of all sorts into their social endeavors to accomplish organizational tasks. But, how do we account for these sociomaterial phenomena in all their complexity when faced with the task of designing information systems? The panel brings together prominent researchers bridging the gap between design research and the current debate on sociomateriality. Each presenter addresses the challenges associated with informing grounded design work with insights from a highly abstract intellectual debate.
Keywords: design research, ethnography, sociomateriality (ID#: 15-6987)
URL: http://doi.acm.org/10.1145/2685553.2699336

 

Daejun Park, Andrei Stefănescu, Grigore Roşu; “KJS: A Complete Formal Semantics of JavaScript,” PLDI 2015, Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2015, Pages 346–356. doi:10.1145/2737924.2737991
Abstract: This paper presents KJS, the most complete and thoroughly tested formal semantics of JavaScript to date. Being executable, KJS has been tested against the ECMAScript 5.1 conformance test suite, and passes all 2,782 core language tests. Among the existing implementations of JavaScript, only Chrome V8 passes all the tests, and no other semantics passes more than 90%. In addition to a reference implementation for JavaScript, KJS also yields a simple coverage metric for a test suite: the set of semantic rules it exercises. Our semantics revealed that the ECMAScript 5.1 conformance test suite fails to cover several semantic rules. Guided by the semantics, we wrote tests to exercise those rules. The new tests revealed bugs both in production JavaScript engines (Chrome V8, Safari WebKit, Firefox SpiderMonkey) and in other semantics. KJS is symbolically executable, thus it can be used for formal analysis and verification of JavaScript programs. We verified non-trivial programs and found a known security vulnerability.
Keywords: JavaScript, K framework, mechanized semantics (ID#: 15-6988)
URL: http://doi.acm.org/10.1145/2737924.2737991

 

Vladimir Andrei Olteanu, Felipe Huici, Costin Raiciu; “Lost in Network Address Translation: Lessons from Scaling the World’s Simplest Middlebox,” HotMiddlebox ’15, Proceedings of the 2015 ACM SIGCOMM Workshop on Hot Topics in Middleboxes and Network Function Virtualization, August 2015, Pages 19–24. doi:10.1145/2785989.2785994
Abstract: To understand whether the promise of Network Function Virtualization can be fulfilled in practice, we set out to create a software version of the simplest middlebox that keeps per-flow state: the NAT. While there is a lot of literature in the wide area of SDN in general and in scaling middleboxes, we find that creating a NAT good enough to compete with hardware appliances requires a lot more care than we had thought when we started our work. In particular, limitations of OpenFlow switches force us to rethink load balancing in a way that does not involve the centralized controller at all. The result is a solution that can sustain, on six low-end commodity boxes, a throughput of 40Gbps with 64B packets, on par with industrial offerings but at a third of the cost.  To reach this performance, we designed and implemented our NAT from scratch to be migration friendly and optimized for common cases (inbound traffic, many mappings). Our experience shows that OpenFlow-based load balancing is very limited in the context of NATs (and by relation NFV), and that scalability can only be ensured by keeping the controller out of the data plane.
Keywords: (not provided) (ID#: 15-6989)
URL: http://doi.acm.org/10.1145/2785989.2785994
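
The per-flow state that makes even “the world’s simplest middlebox” hard to scale is essentially a bidirectional mapping. A sketch of that state (naive port allocation, no timeouts or migration support, unlike the paper’s design):

```python
import itertools

class Nat:
    """Map (private_ip, private_port) to an allocated external port."""
    def __init__(self, external_ip):
        self.external_ip = external_ip
        self.ports = itertools.count(1024)     # naive allocator
        self.out_map, self.in_map = {}, {}

    def outbound(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.out_map:            # allocate on first packet
            port = next(self.ports)
            self.out_map[key] = port
            self.in_map[port] = key
        return self.external_ip, self.out_map[key]

    def inbound(self, ext_port):
        return self.in_map.get(ext_port)       # None means drop the packet

nat = Nat("198.51.100.1")
print(nat.outbound("10.0.0.5", 40001))   # ('198.51.100.1', 1024)
print(nat.inbound(1024))                 # ('10.0.0.5', 40001)
```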

 

Sadegh Farhang, Yezekael Hayel, Quanyan Zhu; “Physical Layer Location Privacy Issue in Wireless Small Cell Networks,” WiSec ’15, Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 32. doi:10.1145/2766498.2774990
Abstract: High data rates are essential for next-generation wireless networks to support a growing number of computing devices and networking services. Small cell base station (SCBS) (e.g., picocells, microcells, femtocells) technology is a cost-effective solution to address this issue. However, one challenging issue with the increasingly dense network is the need for a distributed and scalable access point association protocol. In addition, the reduced cell size makes it easy for an adversary to map out the geographical locations of the mobile users, and hence breaching their location privacy. To address these issues, we establish a game-theoretic framework to develop a privacy-preserving stable matching algorithm that captures the large scale and heterogeneity nature of 5G networks. We show that without the privacy-preserving mechanism, an attacker can infer the location of the users by observing wireless connections and the knowledge of physical-layer system parameters. The protocol presented in this work provides a decentralized differentially private association algorithm which guarantees privacy to a large number of users in the network. We evaluate our algorithm using case studies, and demonstrate the tradeoff between privacy and system-wide performance for different privacy requirements and a varying number of mobile users in the network. Our simulation results corroborate the result that the total number of mobile users should be lower than the overall network capacity to achieve desirable levels of privacy and QoS.
Keywords: (not provided) (ID#: 15-6990)
URL: http://doi.acm.org/10.1145/2766498.2774990
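
A standard building block for this kind of privacy/QoS tradeoff is the exponential mechanism from differential privacy: pick a base station with probability weighted by utility, so the choice leaks only a bounded amount about the user’s true location. A sketch (the textbook mechanism, assuming unit sensitivity; the paper’s matching algorithm is more elaborate):

```python
import math
import random

def private_association(utilities, epsilon):
    """Exponential mechanism: sample a base station with probability
    proportional to exp(epsilon * utility / 2)."""
    weights = {bs: math.exp(epsilon * u / 2.0) for bs, u in utilities.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for bs, w in weights.items():
        r -= w
        if r <= 0:
            return bs
    return bs                      # numerical guard: return the last key

# Higher utility = better signal; smaller epsilon = more privacy, less QoS.
utilities = {"SCBS-A": 0.9, "SCBS-B": 0.7, "SCBS-C": 0.2}
print(private_association(utilities, epsilon=1.0))
```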

 

Julius Schulz-Zander, Carlos Mayer, Bogdan Ciobotaru, Stefan Schmid, Anja Feldmann; “OpenSDWN: Programmatic Control over Home and Enterprise WiFi,” SOSR ’15, Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, June 2015, Article No. 16. doi:10.1145/2774993.2775002
Abstract: The quickly growing demand for wireless networks and the numerous application-specific requirements stand in stark contrast to today’s inflexible management and operation of WiFi networks. In this paper, we present and evaluate OpenSDWN, a novel WiFi architecture based on an SDN/NFV approach. OpenSDWN exploits datapath programmability to enable service differentiation and fine-grained transmission control, facilitating the prioritization of critical applications. OpenSDWN implements per-client virtual access points and per-client virtual middleboxes to render network functions more flexible and to support mobility and seamless migration. OpenSDWN can also be used to outsource control over the home network to a participatory interface or to an Internet Service Provider.
Keywords: WLAN, enterprise, network function virtualization, software-defined networking, software-defined wireless networking (ID#: 15-6991)
URL: http://doi.acm.org/10.1145/2774993.2775002

 

Daniel A. Epstein, An Ping, James Fogarty, Sean A. Munson; “A Lived Informatics Model of Personal Informatics,” UbiComp ’15, Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 2015, Pages 731–742. doi:10.1145/2750858.2804250
Abstract: Current models of how people use personal informatics systems are largely based in behavior change goals. They do not adequately characterize the integration of self-tracking into everyday life by people with varying goals. We build upon prior work by embracing the perspective of lived informatics to propose a new model of personal informatics. We examine how lived informatics manifests in the habits of self-trackers across a variety of domains, first by surveying 105, 99, and 83 past and present trackers of physical activity, finances, and location and then by interviewing 22 trackers regarding their lived informatics experiences. We develop a model characterizing tracker processes of deciding to track and selecting a tool, elaborate on tool usage during collection, integration, and reflection as components of tracking and acting, and discuss the lapsing and potential resuming of tracking. We use our model to surface underexplored challenges in lived informatics, thus identifying future directions for personal informatics design and research.
Keywords: finances, lapsing, lived informatics, location, personal informatics, physical activity, self-tracking (ID#: 15-6992)
URL: http://doi.acm.org/10.1145/2750858.2804250

 

Joongi Kim, Keon Jang, Keunhong Lee, Sangwook Ma, Junhyun Shim, Sue Moon; “NBA (Network Balancing Act): A High-Performance Packet Processing Framework for Heterogeneous Processors,” EuroSys ’15, Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 22. doi:10.1145/2741948.2741969
Abstract: We present the NBA framework, which extends the architecture of the Click modular router to exploit modern hardware, adapts to different hardware configurations, and reaches close to the hardware’s maximum performance without manual optimization. NBA takes advantage of existing performance-excavating solutions such as batch processing, NUMA-aware memory management, and receive-side scaling with multi-queue network cards. Its abstraction resembles Click but also hides the details of architecture-specific optimization, batch processing that handles the path diversity of individual packets, CPU/GPU load balancing, and complex hardware resource mappings due to multi-core CPUs and multi-queue network cards. We have implemented four sample applications: an IPv4 and an IPv6 router, an IPsec encryption gateway, and an intrusion detection system (IDS) with Aho-Corasick and regular expression matching. The IPv4/IPv6 router performance reaches the line rate on a commodity 80 Gbps machine, and the performance of the IPsec gateway and the IDS reaches above 30 Gbps. We also show that our adaptive CPU/GPU load balancer reaches near-optimal throughput in various combinations of sample applications and traffic conditions.
Keywords: (not provided) (ID#: 15-6993)
URL: http://doi.acm.org/10.1145/2741948.2741969

 

Florian Schmidt, Oliver Hohlfeld, René Glebke, Klaus Wehrle; “Santa: Faster Packet Delivery for Commonly Wished Replies,” SIGCOMM ’15, Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, August 2015, Pages 597–598. doi:10.1145/2829988.2790014
Abstract: Increasing network speeds challenge the packet processing performance of networked systems. This can mainly be attributed to processing overhead caused by the split between the kernel-space network stack and user-space applications. To mitigate this overhead, we propose Santa, an application-agnostic kernel-level cache of frequent requests. By allowing user-space applications to offload frequent requests to the kernel space, Santa offers drastic performance improvements and unlocks the speed of kernel-space networking for legacy server software without requiring extensive changes.
Keywords: (not provided) (ID#: 15-6994)
URL: http://doi.acm.org/10.1145/2829988.2790014

 

Nan Cen, Zhangyu Guan, Tommaso Melodia; “Multi-view Wireless Video Streaming Based on Compressed Sensing: Architecture and Network Optimization,” MobiHoc ’15, Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, June 2015, Pages 137–146. doi:10.1145/2746285.2746309
Abstract: Multi-view wireless video streaming has the potential to enable a new generation of efficient and low-power pervasive surveillance systems that can capture scenes of interest from multiple perspectives, at higher resolution, and with lower energy consumption. However, state-of-the-art multi-view coding architectures require relatively complex predictive encoders, thus resulting in high processing complexity and power requirements. To address these challenges, we consider a wireless video surveillance scenario and propose a new encoding and decoding architecture for multi-view video systems based on Compressed Sensing (CS) principles, composed of cooperative sparsity-aware block-level rate-adaptive encoders, feedback channels and independent decoders. The proposed architecture leverages the properties of CS to overcome many limitations of traditional encoding techniques, specifically massive storage requirements and high computational complexity. It also uses estimates of image sparsity to perform efficient rate adaptation and effectively exploit inter-view correlation at the encoder side.  Based on the proposed encoding/decoding architecture, we further develop a CS-based end-to-end rate distortion model by considering the effect of packet losses on the perceived video quality. We then introduce a modeling framework to design network optimization problems in a multi-hop wireless sensor network. Extensive performance evaluation results show that the proposed coding framework and power-minimizing delivery scheme are able to transmit multi-view streams with guaranteed video quality at low power consumption.
Keywords: compressed sensing, multi-view video streaming, network optimization (ID#: 15-6995)
URL: http://doi.acm.org/10.1145/2746285.2746309

 

Petr Hosek, Cristian Cadar; “VARAN the Unbelievable: An Efficient N-version Execution Framework,” ASPLOS ’15, Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, March 2015, Pages 339–353. doi:10.1145/2775054.2694390
Abstract: With the widespread availability of multi-core processors, running multiple diversified variants or several different versions of an application in parallel is becoming a viable approach for increasing the reliability and security of software systems. The key component of such N-version execution (NVX) systems is a runtime monitor that enables the execution of multiple versions in parallel. Unfortunately, existing monitors impose either a large performance overhead or rely on intrusive kernel-level changes. Moreover, none of the existing solutions scales well with the number of versions, since the runtime monitor acts as a performance bottleneck.  In this paper, we introduce Varan, an NVX framework that combines selective binary rewriting with a novel event-streaming architecture to significantly reduce performance overhead and scale well with the number of versions, without relying on intrusive kernel modifications.  Our evaluation shows that Varan can run NVX systems based on popular C10k network servers with only a modest performance overhead, and can be effectively used to increase software reliability using techniques such as transparent failover, live sanitization and multi-revision execution.
Keywords: N-version execution, event streaming, live sanitization, multi-revision execution, record-replay, selective binary rewriting, transparent failover (ID#: 15-6996)
URL: http://doi.acm.org/10.1145/2775054.2694390

 

Naga Katta, Haoyu Zhang, Michael Freedman, Jennifer Rexford; “Ravana: Controller Fault-Tolerance in Software-Defined Networking,” SOSR ’15, Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, June 2015, Article No. 4. doi:10.1145/2774993.2774996
Abstract: Software-defined networking (SDN) offers greater flexibility than traditional distributed architectures, at the risk of the controller being a single point-of-failure. Unfortunately, existing fault-tolerance techniques, such as replicated state machines, are insufficient to ensure correct network behavior under controller failures. The challenge is that, in addition to the application state of the controllers, the switches maintain hard state that must be handled consistently. Thus, it is necessary to incorporate switch state into the system model to correctly offer a “logically centralized” controller. We introduce Ravana, a fault-tolerant SDN controller platform that processes the control messages transactionally and exactly once (at both the controllers and the switches). Ravana maintains these guarantees in the face of both controller and switch crashes. The key insight in Ravana is that replicated state machines can be extended with lightweight switch-side mechanisms to guarantee correctness, without involving the switches in an elaborate consensus protocol. Our prototype implementation of Ravana enables unmodified controller applications to execute in a fault-tolerant fashion. Experiments show that Ravana achieves high throughput with reasonable overhead, compared to a single controller, with a failover time under 100 ms.
Keywords: OpenFlow, fault-tolerance, replicated state machines, software-defined networking (ID#: 15-6997)
URL: http://doi.acm.org/10.1145/2774993.2774996

 

Peng Sun, Laurent Vanbever, Jennifer Rexford; “Scalable Programmable Inbound Traffic Engineering,” SOSR ’15, Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, June 2015, Article No. 12.  doi:10.1145/2774993.2775063
Abstract: With the rise of video streaming and cloud services, enterprise and access networks receive much more traffic than they send, and must rely on the Internet to offer good end-to-end performance. These edge networks often connect to multiple ISPs for better performance and reliability, but have only limited ways to influence which of their ISPs carries the traffic for each service. In this paper, we present Sprite, a software-defined solution for flexible inbound traffic engineering (TE). Sprite offers direct, fine-grained control over inbound traffic, by announcing different public IP prefixes to each ISP, and performing source network address translation (SNAT) on outbound request traffic. Our design achieves scalability in both the data plane (by performing SNAT on edge switches close to the clients) and the control plane (by having local agents install the SNAT rules). The controller translates high-level TE objectives, based on client and server names, as well as performance metrics, to a dynamic network policy based on real-time traffic and performance measurements. We evaluate Sprite with live data from “in the wild” experiments on an EC2-based testbed, and demonstrate how Sprite dynamically adapts the network policy to achieve high-level TE objectives, such as balancing YouTube traffic among ISPs to improve video quality.
Keywords: scalability, software-defined networking, traffic engineering (ID#: 15-6998)
URL: http://doi.acm.org/10.1145/2774993.2775063

 

Eddie Q. Yan, Jeff Huang, Gifford K. Cheung; “Masters of Control: Behavioral Patterns of Simultaneous Unit Group Manipulation in StarCraft 2,” CHI ’15, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 3711–3720. doi:10.1145/2702123.2702429
Abstract: Most user interfaces require the user to focus on one element at a time, but StarCraft 2 is a game where players often control more than a hundred units simultaneously. The game interface provides an optional mechanism called “control groups” that allows players to select multiple units and assign them to a group in order to quickly recall previous selections of units. From an analysis of over 3,000 replays, we show that the usage of control groups is a key differentiator of individual players as well as players of different skill levels—novice users rarely use control groups while experts nearly always do. But players also behave differently in how they use their control groups, especially in time-pressured situations. While certain control group behaviors are common across all skill levels, expert players appear to be better at remaining composed and sustaining control group use in battle. We also qualitatively analyze discussions on web forums from players about how they use control groups to provide context about how such a simple interface mechanic has produced numerous ways of optimizing unit control.
Keywords: control groups, player behavior, skill, video games (ID#: 15-6999)
URL: http://doi.acm.org/10.1145/2702123.2702429


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Compressive Sampling 2015

 

 
SoS Logo

Compressive Sampling

2015


Compressive sampling (or compressive sensing) is an important theory in signal processing. It allows efficient acquisition and reconstruction of a signal and may also be the basis for user identification. For the Science of Security, the topic has implications for resilience, cyber-physical systems, privacy, and composability. The works cited here were published or presented in 2015.
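
To make the idea concrete, here is a minimal Python sketch (this editor's own, using only numpy, with illustrative sizes) of the canonical setup: a k-sparse signal is recovered from far fewer random measurements than its length via orthogonal matching pursuit (OMP).

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8                    # signal length, measurements, sparsity

    x = np.zeros(n)                          # ground-truth k-sparse signal
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
    y = A @ x                                # m compressive measurements

    def omp(A, y, k):
        """Greedy OMP: pick the column most correlated with the residual,
        then re-fit by least squares on the chosen support."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(A, y, k)
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))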



Liwen Xu, Xiaohong Hao, Nicholas D. Lane, Xin Liu, Thomas Moscibroda; “Cost-Aware Compressive Sensing for Networked Sensing Systems,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 130–141. doi:10.1145/2737095.2737105
Abstract: Compressive Sensing is a technique that can help reduce the sampling rate of sensing tasks. In mobile crowdsensing applications or wireless sensor networks, the resource burden of collecting samples is often a major concern. Therefore, compressive sensing is a promising approach in such scenarios. An implicit assumption underlying compressive sensing — both in theory and its applications — is that every sample has the same cost: its goal is to simply reduce the number of samples while achieving a good recovery accuracy. In many networked sensing systems, however, the cost of obtaining a specific sample may depend highly on the location, time, condition of the device, and many other factors of the sample.  In this paper, we study compressive sensing in situations where different samples have different costs, and we seek to find a good trade-off between minimizing the total sample cost and the resulting recovery accuracy. We design Cost-Aware Compressive Sensing (CACS), which incorporates the cost-diversity of samples into the compressive sensing framework, and we apply CACS in networked sensing systems. Technically, we use regularized column sum (RCS) as a predictive metric for recovery accuracy, and use this metric to design an optimization algorithm for finding a least cost randomized sampling scheme with provable recovery bounds. We also show how CACS can be applied in a distributed context. Using traffic monitoring and air pollution as concrete application examples, we evaluate CACS based on large-scale real-life traces. Our results show that CACS achieves significant cost savings, outperforming natural baselines (greedy and random sampling) by up to 4x.
Keywords: compressive sensing, crowdsensing, resource-efficiency (ID#: 15-7000)
URL: http://doi.acm.org/10.1145/2737095.2737105

 

Liwen Xu, Xiaohong Hao, Nicholas D. Lane, Xin Liu, Thomas Moscibroda; “More with Less: Lowering User Burden in Mobile Crowdsourcing Through Compressive Sensing,” UbiComp ’15, Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 2015, Pages 659–670. doi:10.1145/2750858.2807523
Abstract: Mobile crowdsourcing is a powerful tool for collecting data of various types. The primary bottleneck in such systems is the high burden placed on the user who must manually collect sensor data or respond in-situ to simple queries (e.g., experience sampling studies). In this work, we present Compressive CrowdSensing (CCS) -- a framework that enables compressive sensing techniques to be applied to mobile crowdsourcing scenarios. CCS enables each user to provide significantly reduced amounts of manually collected data, while still maintaining acceptable levels of overall accuracy for the target crowd-based system. Naïve applications of compressive sensing do not work well for common types of crowdsourcing data (e.g., user survey responses) because the necessary correlations that are exploited by a sparsifying base are hidden and non-trivial to identify. CCS comprises a series of novel techniques that enable such challenges to be overcome. We evaluate CCS with four representative large-scale datasets and find that it is able to outperform standard uses of compressive sensing, as well as conventional approaches to lowering the quantity of user data needed by crowd systems.
Keywords: compressive sensing, mobile crowdsensing (ID#: 15-7001)
URL: http://doi.acm.org/10.1145/2750858.2807523

 

Xingsong Hou, Chunli Wang, Xueming Qian; “Compressive Imaging Based on Coefficients Cutting in DLWT Domain and Approximate Message Passing,” ICIMCS ’15, Proceedings of the 7th International Conference on Internet Multimedia Computing and Service, August 2015, Article No. 71. doi:10.1145/2808492.2808563
Abstract: In compressive imaging (CI), accurate coefficient recovery is possible when the transform coefficients of an image are sufficiently sparse. However, conventional transforms, such as the discrete cosine transform (DCT) and discrete wavelet transform (DWT), cannot acquire a sufficiently sparse representation. A large number of small coefficients indeed interferes with the recovery of the large ones. This paper aims to improve recovery performance by accurately reconstructing as many of the large coefficients as possible. Thus, a compressive imaging scheme based on coefficients cutting in the directional lifting wavelet transform (DLWT) domain and a low-complexity iterative Bayesian algorithm is proposed. The proposed scheme is an improved version of our previous work. In this paper, the relation between the best-fitted cutoff ratio and the sampling rate is further analyzed. Due to the efficient Bayesian recovery algorithm, the proposed method offers better recovery performance with much lower complexity than our previous work. Experimental results show that our method outperforms many state-of-the-art compressive imaging recovery methods.
Keywords: DLWT, approximate message passing, coefficients cutting, compressive imaging, tail folding (ID#: 15-7002)
URL: http://doi.acm.org/10.1145/2808492.2808563

 

Yun Tan, Xingsong Hou, Xueming Qian; “A Fine-Grain Nonlocal Weighted Average Method for Image CS Reconstruction,” ICIMCS ’15, Proceedings of the 7th International Conference on Internet Multimedia Computing and Service, August 2015, Article No. 70. doi:10.1145/2808492.2808562
Abstract: Compressive sensing can acquire a signal at a sampling rate far below the Nyquist rate if the signal is sparse in some domain. However, reconstructing a signal from its observations is challenging because it is an ill-posed problem in practice. As classical CS reconstruction methods, the total variation (TV) and iteratively reweighted TV (ReTV) methods exploit only local image information, which results in some loss of image structure and causes blocking effects. In this paper, we observe that natural images contain abundant nonlocal repetitive structures, so we propose a novel fine-grain nonlocal weighted average method for natural-image CS reconstruction that makes full use of these nonlocal repetitive structures to recover the image from its observations. Besides, an efficient iterative bound optimization algorithm, which is stably convergent in our experiments, is applied to the above CS reconstruction. Experimental results on different natural images demonstrate that our proposed algorithm outperforms existing classical natural-image CS reconstruction algorithms in peak signal-to-noise ratio (PSNR) and subjective evaluation.
Keywords: CS reconstruction, compressive sensing, iterative algorithm, nonlocal repetitive (ID#: 15-7003)
URL: http://doi.acm.org/10.1145/2808492.2808562

 

Michael B. Cohen, Richard Peng; “Lp Row Sampling by Lewis Weights,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, June 2015, Pages 183–192. doi:10.1145/2746539.2746567
Abstract: We give a simple algorithm to efficiently sample the rows of a matrix while preserving the p-norms of its product with vectors. Given an n × d matrix A, we find, with high probability and in input-sparsity time, a matrix A' consisting of about d log d rescaled rows of A such that ||Ax||_1 is close to ||A'x||_1 for all vectors x. We also show similar results for all Lp that give nearly optimal sample bounds in input-sparsity time. Our results are based on sampling by “Lewis weights”, which can be viewed as statistical leverage scores of a reweighted matrix. We also give an elementary proof of the guarantees of this sampling process for L1.
Keywords: lp regression, lewis weights, matrix concentration bounds, row sampling (ID#: 15-7004)
URL: http://doi.acm.org/10.1145/2746539.2746567
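
For the p = 2 case, Lewis weights coincide with the classical statistical leverage scores, and the sampling scheme reduces to the short numpy sketch below (this editor's illustration of that special case, not the paper's L1 algorithm; sizes and the oversampling factor are illustrative).

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((10000, 20))

    Q, _ = np.linalg.qr(A)                  # orthonormal basis for the column space
    lev = np.sum(Q**2, axis=1)              # leverage score of each row; sums to d
    p = np.minimum(1.0, lev * 50)           # oversample by a constant factor

    keep = rng.random(A.shape[0]) < p
    A_small = A[keep] / np.sqrt(p[keep, None])   # rescale kept rows to stay unbiased

    x = rng.standard_normal(20)
    print(np.linalg.norm(A @ x), np.linalg.norm(A_small @ x))   # approximately equal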

 

Ying Yan, Jiaxing Zhang, Bojun Huang, Xuzhan Sun, Jiaqi Mu, Zheng Zhang, Thomas Moscibroda; “Distributed Outlier Detection using Compressive Sensing,”  SIGMOD ’15, Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, May 2015, Pages 3–16. doi:10.1145/2723372.2747641
Abstract: Computing outliers and related statistical aggregation functions from large-scale big data sources is a critical operation in many cloud computing scenarios, e.g. service quality assurance, fraud detection, or novelty discovery. Such problems commonly have to be solved in a distributed environment where each node only has a local slice of the entirety of the data. To process a query on the global data, each node must transmit its local slice of data or an aggregated subset thereof to a global aggregator node, which can then compute the desired statistical aggregation function. In this context, reducing the total communication cost is often critical to the overall efficiency.  In this paper, we show both theoretically and empirically that these communication costs can be significantly reduced for common distributed computing problems if we take advantage of the fact that production-level big data usually exhibits a form of sparse structure. Specifically, we devise a new aggregation paradigm for outlier detection and related queries. The paradigm leverages compressive sensing for data sketching in combination with outlier detection techniques. We further propose an algorithm that works even for non-sparse data that concentrates around an unknown value. In both cases, we show that the communication cost is reduced to the logarithm of the global data size. We incorporate our approach into Hadoop and evaluate it on real web-scale production data (distributed click-data logs). Our approach reduces data shuffling IO by up to 99%, and end-to-end job duration by up to 40% on many actual production queries.
Keywords: big sparse data, compressive sensing, distributed aggregation, outlier detection (ID#: 15-7005)
URL: http://doi.acm.org/10.1145/2723372.2747641
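
The reason compressive sensing composes well with distributed aggregation is the linearity of the sketch: sketches of local data slices simply add up to the sketch of the global data, so each node ships only a small vector. A toy numpy illustration (this editor's, with invented sizes; the paper's actual recovery pipeline is more involved):

    import numpy as np

    rng = np.random.default_rng(2)
    n, m, nodes = 100000, 400, 8

    S = rng.standard_normal((m, n)) / np.sqrt(m)   # shared sensing matrix (same seed everywhere)
    local = [np.zeros(n) for _ in range(nodes)]
    local[3][12345] = 500.0                  # one node holds a large outlier entry

    global_sketch = sum(S @ x for x in local)      # each node sends only m numbers
    # the aggregator can now run any sparse-recovery routine on global_sketch
    print(global_sketch.shape)               # (400,) instead of 8 x 100000 values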

 

Marco Trevisi, Ricardo Carmona-Galán, Ángel Rodríguez-Vázquez; “Hardware-Oriented Feature Extraction Based on Compressive Sensing,” ICDSC ’15, Proceedings of the 9th International Conference on Distributed Smart Cameras, September 2015, Pages 211–212.  doi:10.1145/2789116.2802657
Abstract: Feature extraction is used to reduce the amount of resources required to describe a large set of data. A given feature can be represented by a matrix having the same size as the original image but having relevant values only in some specific points. We can consider this set as being sparse. Under this premise, many algorithms have been generated to extract features from compressive samples. None of them, though, is easily described in hardware. We try to bridge the gap between compressive sensing and hardware design by presenting a sparsifying dictionary that allows compressive sensing reconstruction algorithms to recover features. The idea is to use this work as a starting point to the design of a smart imager capable of compressive feature extraction. To prove this concept we have devised a simulation by using the Harris corner detection and applied a standard reconstruction method, the NESTA algorithm, to retrieve corners instead of a full image.
Keywords: compressive feature extraction, compressive sensing (ID#: 15-7006)
URL: http://doi.acm.org/10.1145/2789116.2802657

 

Damian Pfammatter, Domenico Giustiniano, Vincent Lenders; “A Software-Defined Sensor Architecture for Large-Scale Wideband Spectrum Monitoring,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 71–82. doi:10.1145/2737095.2737119
Abstract: Today’s spectrum measurements are mainly performed by governmental agencies which drive around using expensive specialized hardware. The idea of crowdsourcing spectrum monitoring has recently gained attention as an alternative way to capture the usage of wide portions of the wireless spectrum at larger geographical and time scales. To support this vision, we develop a flexible software-defined sensor architecture that enables distributed data collection in real-time over the Internet. Our sensor design builds upon low-cost commercial off-the-shelf (COTS) hardware components with a total cost per sensor device below $100. The low-cost nature of our sensor platform makes the sensing approach particularly suitable for large-scale deployments but imposes technical challenges regarding performance and quality. To circumvent the limits of our solution, we have implemented and evaluated different sensing strategies and noise reduction techniques. Our results suggest that our sensor architecture may be useful in application areas such as dynamic spectrum access in cognitive radios, detecting regions with elevated electro-smog, or simply to gain an understanding of the spectrum usage for advanced signal intelligence such as anomaly detection or policy enforcement.
Keywords: crowdsourcing, distributed, spectrum monitoring, wideband (ID#: 15-7007)
URL: http://doi.acm.org/10.1145/2737095.2737119

 

Anusha Withana, Roshan Peiris, Nipuna Samarasekara, Suranga Nanayakkara; “zSense: Enabling Shallow Depth Gesture Recognition for Greater Input Expressivity on Smart Wearables,” CHI ’15, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 3661–3670. doi:10.1145/2702123.2702371
Abstract: In this paper we present zSense, which provides greater input expressivity for spatially limited devices such as smart wearables through a shallow depth gesture recognition system using non-focused infrared sensors. To achieve this, we introduce a novel Non-linear Spatial Sampling (NSS) technique that significantly cuts down the number of required infrared sensors and emitters. These can be arranged in many different configurations; for example, number of sensor emitter units can be as minimal as one sensor and two emitters. We implemented different configurations of zSense on smart wearables such as smartwatches, smartglasses and smart rings. These configurations naturally fit into the flat or curved surfaces of such devices, providing a wide scope of zSense enabled application scenarios. Our evaluations reported over 94.8% gesture recognition accuracy across all configurations.
Keywords: compressive sensing, interacting with small devices, shallow depth gesture recognition, smart wearables (ID#: 15-7008)
URL: http://doi.acm.org/10.1145/2702123.2702371

 

Aosen Wang, Zhanpeng Jin, Chen Song, Wenyao Xu; “Adaptive Compressed Sensing Architecture in Wireless Brain-Computer Interface,” DAC ’15, Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 173. doi:10.1145/2744769.2744792
Abstract: Wireless sensor nodes advance the brain-computer interface (BCI) from laboratory setup to practical applications. Compressed sensing (CS) theory provides a sub-Nyquist sampling paradigm to improve the energy efficiency of electroencephalography (EEG) signal acquisition. However, EEG is a structure-variational signal with time-varying sparsity, which decreases the efficiency of compressed sensing. In this paper, we present a new adaptive CS architecture to tackle the challenge of EEG signal acquisition. Specifically, we design a dynamic knob framework to respond to EEG signal dynamics, and then formulate its design optimization into a dynamic programming problem. We verify our proposed adaptive CS architecture on a publicly available data set. Experimental results show that our adaptive CS can improve signal reconstruction quality by more than 70% under different energy budgets while only consuming 187.88 nJ/event. This indicates that the adaptive CS architecture can effectively adapt to the EEG signal dynamics in the BCI.
Keywords: (not provided) (ID#: 15-7009)
URL: http://doi.acm.org/10.1145/2744769.2744792
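
The paper's dynamic-knob design is hardware-specific, but the underlying adaptation loop can be caricatured in a few lines (this editor's sketch; the threshold, the constant c, and the m ≈ c·k·log(n/k) rule of thumb are standard compressed-sensing heuristics, not the paper's exact policy).

    import numpy as np

    def next_num_measurements(last_window, n, c=4.0, thresh=0.05, m_min=16):
        """Estimate sparsity k from the last reconstructed window, then
        choose the number of measurements for the next window."""
        k = int(np.sum(np.abs(last_window) > thresh * np.abs(last_window).max()))
        k = max(k, 1)
        m = int(c * k * np.log(max(n / k, 2.0)))
        return int(np.clip(m, m_min, n))

    window = np.zeros(512); window[[3, 40, 41, 200]] = [5.0, -2.0, 1.5, 3.0]
    print(next_num_measurements(window, n=512))   # few measurements: signal is sparse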

 

Mohammad-Mahdi Moazzami, Dennis E. Phillips, Rui Tan, Guoliang Xing; “ORBIT: A Smartphone-Based Platform for Data-Intensive Embedded Sensing Applications,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 83–94. doi:10.1145/2737095.2737098
Abstract: Owing to the rich processing, multi-modal sensing, and versatile networking capabilities, smartphones are increasingly used to build data-intensive embedded sensing applications. However, various challenges must be systematically addressed before smartphones can be used as a generic embedded sensing platform, including high power consumption, lack of real-time functionality and user-friendly embedded programming support. This paper presents ORBIT, a smartphone-based platform for data-intensive embedded sensing applications. ORBIT features a tiered architecture, in which a smartphone can interface to an energy-efficient peripheral board and/or a cloud service. ORBIT as a platform addresses the shortcomings of current smartphones while utilizing their strengths. ORBIT provides a profile-based task partitioning allowing it to intelligently dispatch the processing tasks among the tiers to minimize the system power consumption. ORBIT also provides a data processing library that includes two mechanisms namely adaptive delay/quality trade-off and data partitioning via multi-threading to optimize resource usage. Moreover, ORBIT supplies an annotation based programming API for developers that significantly simplifies the application development and provides programming flexibility. Extensive microbenchmark evaluation and two case studies including seismic sensing and multi-camera 3D reconstruction, validate the generic design of ORBIT.
Keywords: data processing, data-intensive applications, embedded sensing, smartphone (ID#: 15-7010)
URL: http://doi.acm.org/10.1145/2737095.2737098

 

Pradeep Sen, Matthias Zwicker, Fabrice Rousselle, Sung-Eui Yoon, Nima Khademi Kalantari; “Denoising Your Monte Carlo Renders: Recent Advances in Image-Space Adaptive Sampling and Reconstruction,” SIGGRAPH ’15, ACM SIGGRAPH 2015 Courses, July 2015, Article No. 11. doi:10.1145/2776880.2792740
Abstract: Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. The current shift in the computer graphics industry towards Monte Carlo rendering has sparked renewed interest in effective, practical noise reduction techniques that are applicable to a wide range of rendering effects, and easily integrated into existing production pipelines.  In this course, we survey recent advances in image-space adaptive sampling and reconstruction (filtering) algorithms for noise reduction, which have proven effective at reducing the computational cost of Monte Carlo techniques in practice. These techniques reduce variance by either controlling the sampling density over the image plane, and/or aggregating samples in a reconstruction step, possibly over large image regions in a way that preserves scene detail. To do this, they apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. In some cases, they use the statistical analysis to set the parameters for filtering. In others, they estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. In this course, we aim to provide an overview for practitioners to assess these approaches, and for researchers to identify open research challenges and opportunities for future work.  In an introduction, we will first situate image-space adaptive sampling and reconstruction in the larger context of variance reduction for Monte Carlo rendering, and discuss its conceptual advantages and potential drawbacks. In the next part, we will provide details on five specific state-of-the-art algorithms. We will provide visual and quantitative comparisons, and discuss advantages and disadvantages in terms of image quality, computational requirements, and ease of implementation and integration with existing renderers. We will conclude the course by pointing out how some of these techniques are proving useful in real-world applications. Finally, we will discuss directions for potential further improvements.  This course brings together speakers that have made numerous contributions to image space adaptive rendering, which they presented at recent ACM SIGGRAPH, ACM SIGGRAPH Asia, and other conferences. The speakers bridge the gap between academia and industry, and they will be able to provide insights relevant to researchers, developers, and practitioners alike.
Keywords: (not provided) (ID#: 15-7011)
URL: http://doi.acm.org/10.1145/2776880.2792740
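
A bare-bones version of the adaptive-sampling half of this pipeline is sketched below (this editor's illustration; the renderer is faked by sample(), and variance-proportional allocation is just one of the strategies the course surveys).

    import numpy as np

    rng = np.random.default_rng(3)
    H, W, init_spp, budget = 64, 64, 4, 64 * 64 * 12

    def sample(i, j, count):
        """Stand-in for tracing `count` rays through pixel (i, j)."""
        noise = 1.0 + 5.0 * (j / W)          # right side of the image is noisier
        return 0.5 + noise * rng.standard_normal(count)

    est = np.empty((H, W)); var = np.empty((H, W))
    for i in range(H):                       # initial uniform pass
        for j in range(W):
            s = sample(i, j, init_spp)
            est[i, j], var[i, j] = s.mean(), s.var()

    extra = np.floor(budget * var / var.sum()).astype(int)  # allocate by variance
    for i in range(H):                       # adaptive pass: spend budget where noisy
        for j in range(W):
            if extra[i, j]:
                s = sample(i, j, extra[i, j])
                n0, n1 = init_spp, extra[i, j]
                est[i, j] = (n0 * est[i, j] + n1 * s.mean()) / (n0 + n1)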

 

Elena Ikonomovska, Sina Jafarpour, Ali Dasdan; “Real-Time Bid Prediction using Thompson Sampling-Based Expert Selection,” KDD ’15, Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2015, Pages 1869–1878. doi:10.1145/2783258.2788586
Abstract: We study online meta-learners for real-time bid prediction that predict by selecting a single best predictor among several subordinate prediction algorithms, here called “experts”. These predictors belong to the family of context-dependent past performance estimators that make a prediction only when the instance to be predicted falls within their areas of expertise. Within the advertising ecosystem, it is very common for the contextual information to be incomplete, hence, it is natural for some of the experts to abstain from making predictions on some of the instances. Experts’ areas of expertise can overlap, which makes their predictions less suitable for merging; as such, they lend themselves better to the problem of best expert selection. In addition, their performance varies over time, which gives the expert selection problem a non-stochastic, adversarial flavor. In this paper we propose to use probability sampling (via Thompson Sampling) as a meta-learning algorithm that samples from the pool of experts for the purpose of bid prediction. We show performance results from the comparison of our approach to multiple state-of-the-art algorithms using exploration scavenging on a log file of over 300 million ad impressions, as well as comparison to a baseline rule-based model using production traffic from a leading DSP platform.
Keywords: bayesian online learning, multi-armed bandits, online advertising, online algorithms, randomized probability matching (ID#: 15-7012)
URL: http://doi.acm.org/10.1145/2783258.2788586
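
A minimal Beta-Bernoulli version of Thompson Sampling over an abstaining pool of experts looks like the following (this editor's sketch; expert internals, the abstention rate, and the accuracies are invented, and the paper's setting is richer than a stationary bandit).

    import numpy as np

    rng = np.random.default_rng(4)
    n_experts = 5
    alpha = np.ones(n_experts)               # per-expert successes + 1
    beta = np.ones(n_experts)                # per-expert failures + 1

    def select_expert(available):
        """Sample a plausible accuracy per available expert, pick the best."""
        draws = {e: rng.beta(alpha[e], beta[e]) for e in available}
        return max(draws, key=draws.get)

    def update(expert, correct):
        if correct:
            alpha[expert] += 1
        else:
            beta[expert] += 1

    # toy loop: expert 2 is secretly the best and gets chosen ever more often
    for t in range(2000):
        available = [e for e in range(n_experts) if rng.random() > 0.2]  # abstentions
        if not available:
            continue
        e = select_expert(available)
        update(e, rng.random() < (0.5 + 0.08 * (e == 2)))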

 

Leye Wang, Daqing Zhang, Animesh Pathak, Chao Chen, Haoyi Xiong, Dingqi Yang, Yasha Wang; “CCS-TA: Quality-Guaranteed Online Task Allocation in Compressive Crowdsensing,” UbiComp ’15, Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 2015, Pages 683–694. doi:10.1145/2750858.2807513
Abstract: Data quality and budget are two primary concerns in urban-scale mobile crowdsensing applications. In this paper, we leverage the spatial and temporal correlation among the data sensed in different sub-areas to significantly reduce the required number of sensing tasks allocated (corresponding to budget), yet ensuring the data quality. Specifically, we propose a novel framework called CCS-TA, combining the state-of-the-art compressive sensing, Bayesian inference, and active learning techniques, to dynamically select a minimum number of sub-areas for sensing task allocation in each sensing cycle, while deducing the missing data of unallocated sub-areas under a probabilistic data accuracy guarantee. Evaluations on real-life temperature and air quality monitoring datasets show the effectiveness of CCS-TA. In the case of temperature monitoring, CCS-TA allocates 18.0-26.5% fewer tasks than baseline approaches, allocating tasks to only 15.5% of the sub-areas on average while keeping overall sensing error below 0.25°C in 95% of the cycles.
Keywords: crowdsensing, data quality, task allocation (ID#: 15-7013)
URL: http://doi.acm.org/10.1145/2750858.2807513

 

Xiufeng Xie, Eugene Chai, Xinyu Zhang, Karthikeyan Sundaresan, Amir Khojastepour, Sampath Rangarajan; “Hekaton: Efficient and Practical Large-Scale MIMO,” MobiCom ’15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 304–316. doi:10.1145/2789168.2790116
Abstract: Large-scale multiuser MIMO (MU-MIMO) systems have the potential for multi-fold scaling of network capacity. The research community has recognized this theoretical potential and developed architectures [1,2] with large numbers of RF chains. Unfortunately, building hardware with a large number of RF chains is challenging in practice. CSI data transport and the computational overhead of MU-MIMO beamforming can also become prohibitive at large network scale. Furthermore, it is difficult to physically append extra RF chains to existing communication equipment to support such large-scale MU-MIMO architectures.  In this paper, we present Hekaton, a novel large-scale MU-MIMO framework that combines legacy MU-MIMO beamforming with phased-array antennas. The core of Hekaton is a two-level beamforming architecture. First, the phased-array antennas steer spatial beams toward each downlink user to reduce channel correlation and suppress cross-talk interference in the RF domain (for beamforming gain); then we adopt legacy digital beamforming to eliminate the interference between downlink data streams (for spatial multiplexing gain). In this way, Hekaton realizes a good fraction of the potential large-scale MU-MIMO gains even under the limited number of RF chains on existing communication equipment.  We evaluate the performance of Hekaton through an over-the-air testbed built on the WARPv3 platform and trace-driven emulation. In the evaluations, Hekaton can improve single-cell throughput by up to 2.5X over conventional MU-MIMO with a single antenna per RF chain, while using the same transmit power.
Keywords: large-scale mu-mimo, phased-array antenna, scalability, two-level beamforming (ID#: 15-7014)
URL: http://doi.acm.org/10.1145/2789168.2790116

 

Linghe Kong, Xue Liu: “mZig: Enabling Multi-Packet Reception in ZigBee,” MobiCom ’15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 552–565. doi:10.1145/2789168.2790104
Abstract: This paper presents mZig, a novel physical layer design that enables a receiver to simultaneously decode multiple packets from different transmitters in ZigBee. As a low-power and low-cost wireless protocol, the promising ZigBee has been widely used in sensor networks, cyber-physical systems, and smart buildings. Since ZigBee based networks usually adopt tree or cluster topology, the convergecast scenarios are common in which multiple transmitters need to send packets to one receiver. For example, in a smart home, all appliances report data to one control plane via ZigBee. However, concurrent transmissions in convergecast lead to the severe collision problem. The conventional ZigBee avoids collisions using backoff time, which introduces additional time overhead. Advanced methods resolve collisions instead of avoidance, in which the state-of-the-art ZigZag resolves one m-packet collision requiring m retransmissions. We propose mZig to resolve one m-packet collision by this collision itself, so the theoretical throughput is improved m-fold. Leveraging the unique features in ZigBee’s physical layer including its chip rate, half-sine pulse shaping and O-QPSK modulation, mZig subtly decomposes multiple packets from one collision in baseband signal processing. The practical factors of noise, multipath, and frequency offset are taken into account in mZig design. We implement mZig on USRPs and establish a seven-node testbed. Experiment results demonstrate that mZig can receive up to four concurrent packets in our testbed. The throughput of mZig is 4.5x of the conventional ZigBee and 3.2x of ZigZag in the convergecast with four or more transmitters.
Keywords: collision, convergecast, multi-packet reception, zigbee (ID#: 15-7015)
URL: http://doi.acm.org/10.1145/2789168.2790104

 

Pan Hu, Pengyu Zhang, Deepak Ganesan; “Laissez-Faire: Fully Asymmetric Backscatter Communication,” SIGCOMM ’15, Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, August 2015, Pages 255–267. doi:10.1145/2785956.2787477
Abstract: Backscatter provides the dual benefits of energy harvesting and low-power communication, making it attractive to a broad class of wireless sensors. But the design of a protocol that enables extremely power-efficient radios for harvesting-based sensors as well as high-rate data transfer for data-rich sensors presents a conundrum. In this paper, we present a new fully asymmetric backscatter communication protocol where nodes blindly transmit data as and when they sense. This model enables fully flexible node designs, from extraordinarily power-efficient backscatter radios that consume barely a few micro-watts to high-throughput radios that can stream at hundreds of Kbps while consuming a paltry tens of micro-watts. The challenge, however, lies in decoding concurrent streams at the reader, which we achieve using a novel combination of time-domain separation of interleaved signal edges and phase-domain separation of colliding transmissions. We provide an implementation of our protocol, LF-Backscatter, and show that it can achieve an order of magnitude or more improvement in throughput, latency and power over state-of-the-art alternatives.
Keywords: architecture, backscatter, wireless (ID#: 15-7016)
URL: http://doi.acm.org/10.1145/2785956.2787477

 

Raef Bassily, Adam Smith; “Local, Private, Efficient Protocols for Succinct Histograms,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, June 2015, Pages 127–135. doi:10.1145/2746539.2746632
Abstract: We give efficient protocols and matching accuracy lower bounds for frequency estimation in the local model for differential privacy. In this model, individual users randomize their data themselves, sending differentially private reports to an untrusted server that aggregates them. We study protocols that produce a succinct histogram representation of the data. A succinct histogram is a list of the most frequent items in the data (often called “heavy hitters”) along with estimates of their frequencies; the frequency of all other items is implicitly estimated as 0.  If there are n users whose items come from a universe of size d, our protocols run in time polynomial in n and log(d). With high probability, they estimate the frequency of every item up to error O(√(log(d)/(ε²n))). Moreover, we show that this much error is necessary, regardless of computational efficiency, and even for the simple setting where only one item appears with significant frequency in the data set.  Previous protocols (Mishra and Sandler, 2006; Hsu, Khanna and Roth, 2012) for this task either ran in time Ω(d) or had much worse error (about (log(d)/(ε²n))^(1/6)), and the only known lower bound on error was Ω(1/√n).  We also adapt a result of McGregor et al. (2010) to the local setting. In a model with public coins, we show that each user need only send 1 bit to the server. For all known local protocols (including ours), the transformation preserves computational efficiency.
Keywords: algorithms, complexity, differential privacy, heavy hitters, local protocols, succinct histograms (ID#: 15-7017)
URL: http://doi.acm.org/10.1145/2746539.2746632
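
For intuition, the simplest local-model frequency estimator over a small universe is generalized randomized response; it has the unbiasedness that the protocols above build on, while the paper's contribution is achieving comparable accuracy in time polylogarithmic in a huge d. The sketch below is this editor's, with illustrative parameters.

    import numpy as np

    rng = np.random.default_rng(5)
    d, n, eps = 16, 50000, 1.0
    p = np.exp(eps) / (np.exp(eps) + d - 1)  # prob. of reporting the true item

    true_items = rng.choice(d, n, p=np.r_[0.4, np.full(d - 1, 0.6 / (d - 1))])

    def perturb(v):
        """Each user randomizes locally: keep v w.p. p, else a uniform other item."""
        if rng.random() < p:
            return v
        return rng.choice([u for u in range(d) if u != v])

    reports = np.array([perturb(v) for v in true_items])
    counts = np.bincount(reports, minlength=d)

    q = (1 - p) / (d - 1)
    est = (counts / n - q) / (p - q)         # unbias the observed frequencies
    print(est[0])                            # should be close to 0.4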

 

Guy Bresler; “Efficiently Learning Ising Models on Arbitrary Graphs,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, June 2015, Pages 771–782. doi:10.1145/2746539.2746631
Abstract: We consider the problem of reconstructing the graph underlying an Ising model from i.i.d. samples. Over the last fifteen years this problem has been of significant interest in the statistics, machine learning, and statistical physics communities, and much of the effort has been directed towards finding algorithms with low computational cost for various restricted classes of models. Nevertheless, for learning Ising models on general graphs with p nodes of degree at most d, it is not known whether or not it is possible to improve upon the p^d computation needed to exhaustively search over all possible neighborhoods for each node.  In this paper we show that a simple greedy procedure allows one to learn the structure of an Ising model on an arbitrary bounded-degree graph in time on the order of p^2. We make no assumptions on the parameters except what is necessary for identifiability of the model, and in particular the results hold at low temperatures as well as for highly non-uniform models. The proof rests on a new structural property of Ising models: we show that for any node there exists at least one neighbor with which it has high mutual information.
Keywords: ising model, markov random field, structure learning (ID#: 15-7018)
URL: http://doi.acm.org/10.1145/2746539.2746631
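
The structural idea alone, ranking potential neighbors of a node by empirical pairwise mutual information, can be sketched as follows (this editor's illustration; Bresler's actual algorithm adds pruning and conditional-influence tests, so this is not a faithful implementation).

    import numpy as np

    def mutual_information(x, y):
        """Empirical MI between two ±1-valued sample vectors."""
        mi = 0.0
        for a in (-1, 1):
            for b in (-1, 1):
                pxy = np.mean((x == a) & (y == b))
                px, py = np.mean(x == a), np.mean(y == b)
                if pxy > 0:
                    mi += pxy * np.log(pxy / (px * py))
        return mi

    def candidate_neighbors(samples, i, d):
        """samples: (num_samples, p) array of ±1 spins; returns d candidates for node i."""
        p = samples.shape[1]
        scores = [(mutual_information(samples[:, i], samples[:, j]), j)
                  for j in range(p) if j != i]
        return [j for _, j in sorted(scores, reverse=True)[:d]]

    spins = np.where(np.random.default_rng(7).random((5000, 6)) < 0.5, -1, 1)
    print(candidate_neighbors(spins, i=0, d=2))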

 

David Medernach, Jeannie Fitzgerald, R. Muhammad Atif Azad, Conor Ryan; “Wave: Incremental Erosion of Residual Error,” GECCO Companion ’15, Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, July 2015, Pages 1285–1292. doi:10.1145/2739482.2768503
Abstract: Typically, Genetic Programming (GP) attempts to solve a problem by evolving solutions over a large, and usually pre-determined number of generations. However, overwhelming evidence shows that not only does the rate of performance improvement drop considerably after a few early generations, but that further improvement also comes at a considerable cost (bloat). Furthermore, each simulation (a GP run), is typically independent yet homogeneous: it does not re-use solutions from a previous run and retains the same experimental settings.  Some recent research on symbolic regression divides work across GP runs where the subsequent runs optimise the residuals from a previous run and thus produce a cumulative solution; however, all such subsequent runs (or iterations) still remain homogeneous thus using a pre-set, large number of generations (50 or more). This work introduces Wave, a divide and conquer approach to GP whereby a sequence of short but sharp, and dependent yet potentially heterogeneous GP runs provides a collective solution; the sequence is akin to a wave such that each member of the sequence (that is, a short GP run) is a period of the wave. Heterogeneity across periods results from varying settings of system parameters, such as population size or number of generations, and also by alternating use of the popular GP technique known as linear scaling.  The results show that Wave trains faster and better than both standard GP and multiple linear regression, can prolong discovery through constant restarts (which as a side effect also reduces bloat), can innovatively leverage a learning aid, that is, linear scaling at various stages instead of using it constantly regardless of whether it helps and performs reasonably even with a tiny population size (25) which bodes well for real time or data intensive training.
Keywords: fitness landscapes, genetic algorithms, genetic programming, machine learning, performance measures, semantic GP (ID#: 15-7019)
URL: http://doi.acm.org/10.1145/2739482.2768503

 

Paul Tune, Matthew Roughan; “Spatiotemporal Traffic Matrix Synthesis,” SIGCOMM ’15, Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, August 2015, Pages 579–592. doi:10.1145/2829988.2787471
Abstract: Traffic matrices describe the volume of traffic between a set of sources and destinations within a network. These matrices are used in a variety of tasks in network planning and traffic engineering, such as the design of network topologies. Traffic matrices naturally possess complex spatiotemporal characteristics, but their proprietary nature means that little data about them is available publicly, and this situation is unlikely to change.  Our goal is to develop techniques to synthesize traffic matrices for researchers who wish to test new network applications or protocols. The paucity of available data, and the desire to build a general framework for synthesis that could work in various settings requires a new look at this problem. We show how the principle of maximum entropy can be used to generate a wide variety of traffic matrices constrained by the needs of a particular task, and the available information, but otherwise avoiding hidden assumptions about the data. We demonstrate how the framework encompasses existing models and measurements, and we apply it in a simple case study to illustrate the value.
Keywords: maximum entropy, network design, spatiotemporal modeling, traffic engineering, traffic matrix synthesis (ID#: 15-7020)
URL: http://doi.acm.org/10.1145/2829988.2787471
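
In the simplest instance of this framework, where the only constraints are the per-source and per-destination traffic totals, the maximum-entropy solution is the well-known gravity model, as in this editor's toy sketch (totals are invented).

    import numpy as np

    out_traffic = np.array([40.0, 25.0, 35.0])   # per-source totals (illustrative)
    in_traffic = np.array([30.0, 50.0, 20.0])    # per-destination totals

    total = out_traffic.sum()                    # must equal in_traffic.sum()
    T = np.outer(out_traffic, in_traffic) / total

    print(T.sum(axis=1))                          # recovers out_traffic
    print(T.sum(axis=0))                          # recovers in_traffic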

 

Jean Bourgain, Sjoerd Dirksen, Jelani Nelson; “Toward a Unified Theory of Sparse Dimensionality Reduction in Euclidean Space,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, June 2015, Pages 499–508. doi:10.1145/2746539.2746541
Abstract: Let Φ ∈ R^(m×n) be a sparse Johnson-Lindenstrauss transform [52] with column sparsity s. For a subset T of the unit sphere and ε ∈ (0, 1/2), we study settings for m, s required to ensure E_Φ sup_(x∈T) | ||Φx||_2^2 − 1 | < ε, i.e. so that Φ preserves the norm of every x ∈ T simultaneously and multiplicatively up to 1+ε. We introduce a new complexity parameter, which depends on the geometry of T, and show that it suffices to choose s and m such that this parameter is small. Our result is a sparse analog of Gordon’s theorem, which was concerned with a dense Φ having i.i.d. Gaussian entries. We qualitatively unify several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries. Our work also implies new results in using the sparse Johnson-Lindenstrauss transform in randomized linear algebra, compressed sensing, manifold learning, and constrained least squares problems such as the Lasso.
Keywords: compressed sensing, dimensionality reduction, manifold learning, randomized linear algebra, sparsity (ID#: 15-7021)
URL: http://doi.acm.org/10.1145/2746539.2746541
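
The transform under study is easy to instantiate: each column of the m × n matrix has exactly s nonzero entries, at uniformly random positions, each set to ±1/√s. This editor's numpy sketch builds one and checks norm preservation on a single unit vector (sizes are illustrative).

    import numpy as np

    rng = np.random.default_rng(6)

    def sparse_jl(m, n, s):
        """Sparse JL matrix: s nonzeros of ±1/sqrt(s) per column."""
        phi = np.zeros((m, n))
        for col in range(n):
            rows = rng.choice(m, size=s, replace=False)
            phi[rows, col] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
        return phi

    phi = sparse_jl(m=128, n=4096, s=8)
    x = rng.standard_normal(4096)
    x /= np.linalg.norm(x)                     # a point on the unit sphere
    print(np.linalg.norm(phi @ x)**2)          # concentrates near 1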

 

Xingliang Yuan, Cong Wang, Kui Ren; “Enabling IP Protection for Outsourced Integrated Circuit Design,” ASIACCS ’15, Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 237–247. doi:10.1145/2714576.2714601
Abstract: As today’s integrated circuits (ICs) easily involve millions and even billions of gates, known as very large-scale integration (VLSI), one natural trend is to move such a prohibitive in-house design procedure to the low-cost public cloud. However, such a migration also raises a pressing demand for practical and privacy-preserving techniques to safeguard the sensitive IC design data, i.e., the intellectual property (IP). In this paper, we initiate the first study along this direction and present a practical system for privacy-preserving IC timing analysis, an essential and expensive procedure that repeatedly evaluates timing delays on a gate-level circuit. For privacy protection, our system leverages a key observation that many IP blocks are universally reused and shared across different IC designs, and thus only a small portion of critical IP blocks needs to be protected. By carefully extracting such critical design data from the whole circuit, our system outsources only the non-critical data to the public cloud. As such “data splitting” does not readily facilitate correct timing analysis, we then develop specialized algorithms to enable the public cloud to take only the non-critical data and return intermediate results. These results can later be integrated with the critical design data by the local server for fast timing analysis. We also propose a heuristic algorithm that considerably reduces the bandwidth cost of the system. Through rigorous security analysis, we show our system is resilient to IC reverse engineering and protects both the critical IP gate-level design and its functionality. We evaluate our system over large IC benchmarks with up to a million gates to show its efficiency and effectiveness.
Keywords: integrated circuits, ip protection, secure outsourcing, timing analysis (ID#: 15-7022)
URL: http://doi.acm.org/10.1145/2714576.2714601 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Computational Intelligence 2015

 

 
SoS Logo

Computational Intelligence

2015


Computational intelligence includes such constructs as artificial neural networks, evolutionary computation, and fuzzy logic. It embraces biologically inspired algorithms such as swarm intelligence and artificial immune systems and includes broader fields such as image processing, data mining, and natural language processing. Its relevance to the Science of Security is related to composability and compositionality, as well as cryptography. The works cited here were published in 2015.



Rahmani, A.; Amine, A.; Hamou, M.R., “De-identification of Textual Data Using Immune System for Privacy Preserving in Big Data,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, vol., no., pp. 112–116, 13–14 Feb. 2015. doi:10.1109/CICT.2015.146
Abstract: With the growing observed success of big data use, many challenges have appeared. Timeliness, scalability, and privacy are the main problems that researchers attempt to figure out. Privacy preserving is now a highly active domain of research, and many works and concepts have seen the light within this theme. One of these concepts is de-identification techniques. De-identification is a specific area that consists of finding and removing sensitive information, either by replacing it, encrypting it, or adding noise to it, using several techniques such as cryptography and data mining. In this report, we present a new model of de-identification of textual data using a specific immune system algorithm known as CLONALG.
Keywords: Big Data; data privacy; text analysis; CLONALG; big data; cryptography; data mining; privacy preserving; specific immune system algorithm; textual data de-identification; Big data; Data models; Data privacy; Immune system; Informatics; Privacy; Security; de-identification; immune systems; privacy preserving (ID#: 15-7127)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078678&isnumber=7078645

 

Gambhir, M.; Doja, M.N.; Moinuddin, “Novel Trust Computation Architecture for Users Accountability in Online Social Networks,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, vol., no., pp. 725–731, 13–14 Feb. 2015. doi:10.1109/CICT.2015.104
Abstract: The Online Social Network (OSN) is a growing platform which enables people to get hold of news, communicate with family and old friends with whom they have lost contact, promote a business, invite friends to an event, and get people to collaborate to create something magical. With the increasing popularity of OSNs, researchers have been finding ways to stop negative activities over social media by imposing privacy settings in the leading OSNs. The privacy settings let the user control who can access what information in his/her profile. None of these has given the entity of trust enough thought. Very few trust management models have been implemented in OSNs for use by common users. This paper proposes a new 3-layer secured architecture with a novel mechanism for ensuring a safer online world. It provides a unique global id for each user, and evaluates and computes the Trust Factor for a user, thereby measuring the credibility of a user in the OSN space.
Keywords: authorisation; data privacy; social networking (online); trusted computing; OSN; access control; layer secured architecture; online social networks; privacy settings; social media; trust computation architecture; trust factor; trust management models; users accountability; Authentication; Business; Computer architecture; Databases; Servers; Social network services; Global id; Online Social Networks; OpenID; Trust Factor; Trust management (ID#: 15-7128)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078798&isnumber=7078645

 

Hadj Ahmed, B.; Amine, A.; Reda Mohamed, H., “New Private Information Retrieval Protocol Using Social Bees Lifestyle over Cloud Computing,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, vol., no., pp. 161–165, 13–14 Feb. 2015. doi:10.1109/CICT.2015.163
Abstract: Recently, a novel form of web services has seen the light under the name of cloud computing, which represents the dematerialisation of software, systems, and infrastructures. However, in a world where digital information is everywhere, finding the desired information has become a crucial problem. On the other hand, users of cloud services are starting to ask about their privacy protection, particularly as they lose control of their data during its processing, and some of them even count the service providers themselves as honest-but-curious attackers. For that reason, new approaches have been published in every axis of the privacy-preserving domain. One of these axes consists of special retrieval models that allow both finding and hiding sensitive desired information at the same time. The substance of our work is a new private information retrieval (PIR) protocol composed of four steps: authentication, to ensure the identification of authorised users; encryption of stored documents by the server, using a boosting algorithm based on the life of bees and multi-filter cryptosystems; an information retrieval step using a combination of distances computed by social bees, where a document must pass through three dams controlled by three types of worker bees, the queen bee represents the query, and the hive represents the class of relevant documents; and finally a visualization step that presents the results in a graphical format understandable by humans, as a 3D cube. Our objective is to improve the response to users’ demands.
Keywords: Web services; cloud computing; cryptography; data protection; data visualisation; information retrieval; 3D cube; PIR; authentication; authorised user identification; bee hive; bee queen; boosting algorithm; cloud computing; cloud services; digital information; graphical format; multifilter cryptosystems; privacy preserving domain; privacy protection; private information retrieval protocol; sensitive desired information hiding; service providers; social bee lifestyle; software, dematerialisation; stored documents encryption; user demands; visualization step; web services; worker bees; Boosting; Cloud computing; Encryption; Information retrieval; Protocols; Boosting Cryptosystem; Cloud Computing; Private Information Retrieval; Social bees; Visualisation (ID#: 15-7129)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078687&isnumber=7078645

 

Ulusoy, H.; Kantarcioglu, M.; Thuraisingham, B.; Khan, L., “Honeypot Based Unauthorized Data Access Detection in MapReduce Systems,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 126–131, 27–29 May 2015. doi:10.1109/ISI.2015.7165951
Abstract: The data processing capabilities of MapReduce systems, pioneered with the on-demand scalability of cloud computing, have enabled the Big Data revolution. However, data controllers/owners worry about the privacy and accountability impact of storing their data in cloud infrastructures, as existing cloud computing solutions provide very limited control over the underlying systems. The intuitive approach — encrypting data before uploading to the cloud — is not applicable to MapReduce computation, as data analytics tasks are defined ad hoc in the MapReduce environment using general programming languages (e.g., Java) and homomorphic encryption methods that can scale to big data do not exist. In this paper, we address the challenges of determining and detecting unauthorized access to data stored in MapReduce-based cloud environments. To this end, we introduce alarm-raising honeypots distributed over the data that are not accessed by authorized MapReduce jobs, but only by attackers and/or unauthorized users. Our analysis shows that unauthorized data accesses can be detected with reasonable performance in MapReduce-based cloud environments.
Keywords: Big Data; cloud computing; cryptography; data analysis; data privacy; parallel processing; Big Data revolution; MapReduce systems; data analytics tasks; data encryption; data processing capabilities; general programming languages; homomorphic encryption methods; honeypot; on-demand scalability; privacy; unauthorized data access detection; Big data; Cloud computing; Computational modeling; Cryptography; Data models; Distributed databases (ID#: 15-7130)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165951&isnumber=7165923
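
The detection idea in this abstract is simple enough to sketch. Below is a minimal, hypothetical Python illustration of decoy-record detection; the names (HONEYPOT_IDS, read_record) are invented, and the authors' actual system plants and checks honeypots inside the MapReduce record-reading path rather than in plain Python.

    # Sketch, not the authors' implementation: decoy records are seeded into
    # the dataset, authorized jobs are written to skip them, so any job that
    # reads one raises an alarm.
    HONEYPOT_IDS = {"hp-001", "hp-002"}   # identifiers of planted decoys
    alarms = []

    def read_record(job_id, record):
        # Record-level read hook; flags honeypot access.
        if record["id"] in HONEYPOT_IDS:
            alarms.append((job_id, record["id"]))
        return record

    # An attacker scanning the whole dataset trips the alarm.
    dataset = [{"id": "r-1", "v": 10}, {"id": "hp-001", "v": 0}]
    for rec in dataset:
        read_record("suspicious-job", rec)
    print(alarms)   # [('suspicious-job', 'hp-001')]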

 

Nemati, A.; Feizi, S.; Ahmadi, A.; Haghiri, S.; Ahmadi, M.; Alirezaee, S., “An Efficient Hardware Implementation of FeW Lightweight Block Cipher,” in Artificial Intelligence and Signal Processing (AISP), 2015 International Symposium on, vol., no.,
pp. 273–278, 3–5 March 2015. doi:10.1109/AISP.2015.7123493
Abstract: Radio-frequency identification (RFID) tags are becoming a part of our everyday life, with a wide range of applications such as product labeling, supply chain management, etc. These smart and tiny devices have extremely constrained resources in terms of area, computational abilities, memory, and power. At the same time, security and privacy issues remain an important problem; thus, with the large deployment of low resource devices, an increasing need to provide security and privacy among such devices has arisen. Resource-efficient cryptographic primitives become essential for realizing both security and efficiency in constrained environments and embedded systems like RFID tags and sensor nodes. Among those primitives, the lightweight block cipher plays a significant role as a building block for security systems. In 2014, Manoj Kumar et al. proposed a new lightweight block cipher named FeW, which is suitable for extremely constrained environments and embedded systems. In this paper, we simulate and synthesize the FeW block cipher. Implementation results of the FeW cryptography algorithm on an FPGA are presented. The design target is efficiency of area and cost.
Keywords: cryptography; field programmable gate arrays; radiofrequency identification; FPGA; FeW cryptography algorithm; FeW lightweight block cipher; RFID; hardware implementation; radio-frequency identification; resource-efficient cryptographic incipient; security system; sensor node; Algorithm design and analysis; Ciphers; Encryption; Hardware; Schedules; Block Cipher; FeW Algorithm; Feistel structure; Field Programmable Gate Array (FPGA); High Level Synthesis (ID#: 15-7131)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7123493&isnumber=7123478
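
Since FeW is a Feistel cipher, a generic Feistel skeleton conveys the structure the hardware implements. The round function below is a toy stand-in, not FeW's actual round function, which is specified in the cited paper.

    # Generic Feistel network sketch: split the block in half and alternate
    # XOR-with-F and swap; decryption runs the rounds in reverse order.
    def feistel_encrypt(left, right, round_keys, f):
        for k in round_keys:
            left, right = right, left ^ f(right, k)
        return left, right

    def feistel_decrypt(left, right, round_keys, f):
        for k in reversed(round_keys):
            left, right = right ^ f(left, k), left
        return left, right

    f = lambda half, k: ((half * 31) ^ k) & 0xFFFF    # toy round function
    keys = [0x1A2B, 0x3C4D, 0x5E6F]
    ct = feistel_encrypt(0x0123, 0x4567, keys, f)
    assert feistel_decrypt(*ct, keys, f) == (0x0123, 0x4567)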

 

He-Ming Ruan; Ming-Hwa Tsai; Yen-Nun Huang; Yen-Hua Liao; Chin-Laung Lei, “Discovery of De-identification Policies Considering Re-identification Risks and Information Loss,” in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp. 69–76, 24–26 May 2015. doi:10.1109/AsiaJCIS.2015.23
Abstract: In data analysis, it is always a tough task to strike the balance between the privacy and the applicability of the data. Due to the demand for individual privacy, data are more or less obscured before being released or outsourced to avoid possible privacy leakage. This process is called de-identification. In discussing a de-identification policy, the two most important aspects are the re-identification risk and the information loss. In this paper, we introduce a novel policy searching method to efficiently find proper de-identification policies according to an acceptable re-identification risk while retaining the information residing in the data. With the UCI Machine Learning Repository as our real-world dataset, the re-identification risk can therefore reflect the true risk of the de-identified data under the de-identification policies. Moreover, using the proposed algorithm, one can efficiently acquire policies with higher information entropy.
Keywords: data analysis; data privacy; entropy; learning (artificial intelligence); risk analysis; UCI machine learning repository; deidentification policies; deidentified data; information entropy; information loss; privacy leakage; reidentification risks; Computational modeling; Data analysis; Data privacy; Lattices; Privacy; Synthetic aperture sonar; Upper bound; De-identification; HIPPA; Safe Harbor (ID#: 15-7132)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153938&isnumber=7153836
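
The information-loss side of the trade-off can be made concrete with Shannon entropy, which the abstract uses as its retention measure. The sketch below, with made-up data, shows how generalizing an attribute lowers its entropy; the paper's policy-search algorithm is considerably more involved.

    # Entropy of an attribute before and after generalization, as a proxy
    # for information loss.
    from collections import Counter
    from math import log2

    def entropy(values):
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    ages = [23, 25, 31, 34, 47, 52, 23, 31]
    buckets = ["20-29", "20-29", "30-39", "30-39", "40-49", "50-59", "20-29", "30-39"]
    print(entropy(ages), entropy(buckets))   # generalization lowers entropy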

 

Yinzhi Cao; Junfeng Yang, “Towards Making Systems Forget with Machine Unlearning,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 463–480, 17–21 May 2015. doi:10.1109/SP.2015.35
Abstract: Today’s systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data’s lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations — asymptotically faster than retraining from scratch. Our approach is general, because the summation form is from the statistical query learning in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
Keywords: data privacy; learning (artificial intelligence); recommender systems; security of data; complex data propagation network; data lineage; feature modeling; feature selection; forgetting systems; machine learning algorithms; machine unlearning; privacy risks; recommendation engine; security perspective; statistical query learning; summation form; usability perspective; Computational modeling; Data models; Data privacy; Feature extraction; Learning systems; Machine learning algorithms; Training data; Adversarial Machine Learning; Forgetting System; Machine Unlearning (ID#: 15-7133)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163042&isnumber=7163005
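
The summation-form idea is easy to illustrate with a naive Bayes learner, whose sufficient statistics are per-class counts. Forgetting a sample then means subtracting its contribution from the counts, which is what makes unlearning asymptotically faster than retraining. This sketch illustrates the principle and is not the paper's code.

    from collections import defaultdict

    class UnlearnableNB:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))  # class -> feature -> count
            self.class_totals = defaultdict(int)

        def learn(self, features, label):
            for f in features:
                self.counts[label][f] += 1
            self.class_totals[label] += 1

        def unlearn(self, features, label):
            # O(|features|) update instead of a full retrain
            for f in features:
                self.counts[label][f] -= 1
            self.class_totals[label] -= 1

    nb = UnlearnableNB()
    nb.learn({"free", "winner"}, "spam")
    nb.learn({"meeting"}, "ham")
    nb.unlearn({"free", "winner"}, "spam")   # the sample is forgotten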

 

Mishra, V.; Choudhary, K.; Maheshwari, S., “Video Streaming Using Dual-Channel Dual-Path Routing to Prevent Packet Copy Attack,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, vol., no.,
pp. 645–650, 13–14 Feb. 2015. doi:10.1109/CICT.2015.142
Abstract: Video streaming between a sender and a receiver involves multiple unsecured hops where the video data can be illegally copied if the nodes run malicious forwarding logic. This paper introduces a novel method to stream video data through dual channels using dual data paths. The frames’ pixels are also scrambled. The video frames are divided into two frame streams. At the receiver side, the video is reconstructed and played for a limited time period. As soon as a small chunk of the merged video is played, it is deleted from the video buffer. We have attempted to formalize the approach, and an initial simulation has been carried out in MATLAB. Preliminary results are optimistic, and a refined approach may lead to the formal design of a network layer routing protocol with corrections at the transport layer.
Keywords: IPTV; computer network security; cryptography; image reconstruction; routing protocols; video coding; video streaming; Matlab; dual-channel dual-path routing; illegally copied video data; malicious forwarding logic; multiple unsecured hops; network layer routing protocol; optimistic refined approach; packet copy attack prevention; receiver side; scrambled frame pixels; sender side; transport layer; video buffer; video merging; video reconstruction; video streaming; Communications technology; Computational intelligence; Conferences; dual channel; multi hop; multi path; routing; scrambling; video encryption; video transmission (ID#: 15-7134)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078783&isnumber=7078645

 

Jinxin Ma; Tao Zhang; Puhan Zhang, “Enhancing Symbolic Execution Method with a Taint Layer,” in Advanced Computational Intelligence (ICACI), 2015 Seventh International Conference on, vol., no., pp. 27–31, 27–29 March 2015. doi:10.1109/ICACI.2015.7184737
Abstract: Symbolic execution is one of the most important computational intelligence methods in vulnerability detection, delivering high code coverage. The bottleneck of dynamic symbolic execution is its running speed, and few existing works focus on this problem. In this paper, we present a taint-based symbolic execution method to improve its efficiency. The properties of our method include: (1) it works at the binary level directly, translating the binary into a well-defined intermediate representation; (2) it employs a taint layer to perform data flow analysis and quickly locate the first instruction related to symbolic inputs; (3) three optimization strategies are utilized in symbolic execution to further enhance speed: whitelisting, state elimination, and path search optimization. We have implemented a prototype based on our method and evaluated it with several sample programs. The experimental results show that our method performs faster symbolic execution while retaining the ability to detect vulnerabilities.
Keywords: data flow analysis; optimisation; search problems; binary level; code coverage; computational intelligence methods; data flow analysis; dynamic symbolic execution; optimization strategies; path search optimization; state elimination; symbolic inputs; taint layer; taint-based symbolic execution method; vulnerability detection; Security (ID#: 15-7135)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184737&isnumber=7184712
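
The role of the taint layer can be shown with a toy pass over an invented three-address intermediate representation: taint flows from symbolic inputs through instructions, so only tainted instructions need symbolic interpretation. The instruction format here is hypothetical.

    def taint_filter(instructions, symbolic_inputs):
        tainted = set(symbolic_inputs)
        relevant = []
        for dst, op, srcs in instructions:
            if any(s in tainted for s in srcs):
                tainted.add(dst)
                relevant.append((dst, op, srcs))
            else:
                tainted.discard(dst)   # overwritten with concrete data
        return relevant

    prog = [
        ("a", "mov", ["input"]),   # "input" is symbolic
        ("b", "add", ["a", "1"]),
        ("c", "mov", ["42"]),      # concrete; the executor can skip it
    ]
    print(taint_filter(prog, {"input"}))   # keeps only the first two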

 

Patrascu, A.; Velciu, M.-A.; Patriciu, V.V., “Cloud Computing Digital Forensics Framework for Automated Anomalies Detection,” in Applied Computational Intelligence and Informatics (SACI), 2015 IEEE 10th Jubilee International Symposium on, vol., no., pp. 505–510, 21–23 May 2015. doi:10.1109/SACI.2015.7208257
Abstract: Cloud Computing is one of the most important paradigms used in today’s digital environment because it offers users benefits such as virtual machine renting, digital information backup, ease of access to stored data, and many others. Together with the increased usage of these technologies, at the datacenter level we need to know in detail the information flux between the computing nodes: more exactly, on which server the data is processed, and how it is manipulated and stored at the physical or virtual level. To have a full picture of what is going on, we need a centralized system that can collect data regarding the datacenters’ status, correlate it with known anomalies and other usage patterns, and act accordingly in case of a security breach. In this paper we present a new way to monitor running virtual machines at the datacenter level. We describe the architecture and how we use the collected information to train our automated anomaly detection machine learning modules. We also present some implementation details and results taken from the experimental setup.
Keywords: cloud computing; computer centres; digital forensics; learning (artificial intelligence); automated anomalies detection; automated anomalies machine learning modules; centralized system; cloud computing digital forensics framework; computing nodes; datacenter level; datacenters status; digital environment; information flux; security breach; virtual machines; Cloud computing; Computer architecture; Containers; Forensics; Servers; Virtual machining; Virtualization; anomaly detection framework; cloud computing; data forensics; distributed computing (ID#: 15-7136)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208257&isnumber=7208165

 

Saroj, S.K.; Chauhan, S.K.; Sharma, A.K.; Vats, S., “Threshold Cryptography Based Data Security in Cloud Computing,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, vol., no., pp. 202–207, 13–14 Feb. 2015. doi:10.1109/CICT.2015.149
Abstract: Cloud computing is very popular in organizations and institutions because it provides storage and computing services at very low cost. However, it also introduces new challenges for ensuring the confidentiality, integrity, and access control of the data. Some approaches have been given to ensure these security requirements, but they are lacking in some ways, such as violation of data confidentiality due to collusion attacks and heavy computation (due to a large number of keys). To address these issues we propose a scheme that uses threshold cryptography, in which the data owner divides users into groups and gives a single key to each user group for decryption of data, and each user in the group shares parts of the key. In this paper, we use a capability list to control access. This scheme not only provides strong data confidentiality but also reduces the number of keys.
Keywords: cloud computing; cryptography; data integrity; organisational aspects; computing services; data access control; data confidentiality; data decryption; data integrity; data owner; storage services; threshold cryptography based data security; Access control; Cloud computing; Permission; Public key; Vectors; Outsourced data; access control; authentication; capability list; malicious outsiders; threshold cryptography (ID#: 15-7137)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078695&isnumber=7078645
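
One standard way to realize “each user in the group shares parts of the key” is Shamir’s (t, n) threshold secret sharing, sketched below over a prime field; the paper's exact construction may differ, and the parameters here are demo-sized.

    import random

    P = 2**61 - 1   # prime modulus, large enough for a demo

    def split(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice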

 

Siyao Han; Yan Xu, “A Comparative Study on Machine Learning Techniques in Chinese Spam,” in Advanced Computational Intelligence (ICACI), 2015 Seventh International Conference on, vol., no., pp. 390–395, 27–29 March 2015. doi:10.1109/ICACI.2015.7184736
Abstract: Anti-spam is of great importance to IT security and has attracted considerable attention in recent years in China. Machine learning techniques have been widely used to solve this problem and have achieved promising results. This article compares several popular machine learning methods in Chinese spam classification and tries to find a suitable combination of techniques for Chinese anti-spam work.
Keywords: learning (artificial intelligence); pattern classification; unsolicited e-mail; Chinese anti-spam work; Chinese spam classification; machine learning methods; Cryptography; Databases; Filtering; Positron emission tomography; Support vector machines; Unsolicited electronic mail (ID#: 15-7138)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184736&isnumber=7184712
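
A comparison of this kind typically reduces to a shared feature pipeline with interchangeable classifiers. The harness below is only a guess at the general shape, using scikit-learn on a toy corpus; a real Chinese spam study would use a genuine dataset and proper word segmentation (character n-grams stand in here).

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    texts = ["免费 中奖 点击", "会议 纪要 附件", "免费 优惠 点击", "项目 进度 报告"] * 10
    labels = [1, 0, 1, 0] * 10   # 1 = spam, 0 = ham

    for clf in (MultinomialNB(), LinearSVC()):
        pipe = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 2)), clf)
        scores = cross_val_score(pipe, texts, labels, cv=5)
        print(type(clf).__name__, round(scores.mean(), 3))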

 

Schafer, C., “Detection of Compromised Email Accounts Used for Spamming in Correlation with Mail User Agent Access Activities Extracted from Metadata,” in Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on, vol., no., pp. 1–6, 26–28 May 2015. doi:10.1109/CISDA.2015.7208641
Abstract: Every day over 29 billion spam and phishing messages are sent. Commonly, spammers use compromised email accounts to send these emails; such messages accounted for 57.9 percent of the global email traffic in September 2014. Previous research has primarily focused on the fast detection of abused accounts to prevent the fraudulent use of servers. State-of-the-art spam detection methods generally need the content of the email to classify it as either spam or a regular message. This content is not available within the new type of encrypted phishing emails that have become prevalent since the middle of 2014. The objective of the presented research is to detect the anomaly using Mail User Agent Access Activities, which is based on the characteristic behaviour of how emails are sent, without knowledge of the email content. The proposed method detects the abused account in seconds and therefore reduces the sent spam per compromised account to less than one percent.
Keywords: authorisation; computer crime; cryptography; meta data; unsolicited e-mail; abused account detection; compromised e-mail account detection; encrypted phishing e-mails; fraudulent server use prevention; global e-mail traffic; mail user agent access activity extraction; meta data; phishing messages; spamming; Authentication; Cryptography; IP networks; Postal services; Servers; Unsolicited electronic mail; MUAAA; Mail User Agent Access Activities; compromised email account; encrypted phishing; hacked; phishing; spam (ID#: 15-7139)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208641&isnumber=7208613
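
The content-free flavor of the detection can be conveyed with a rate-baseline check: flag an account whose current sending rate deviates sharply from its history. The real MUAAA method uses richer access metadata; the threshold below is arbitrary.

    from statistics import mean, stdev

    def looks_compromised(history_per_hour, current_rate, k=4.0):
        mu, sigma = mean(history_per_hour), stdev(history_per_hour)
        return current_rate > mu + k * max(sigma, 1.0)

    history = [3, 5, 2, 4, 6, 3, 5]          # messages/hour, typical usage
    print(looks_compromised(history, 250))   # True: sudden spam burst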

 

Murtaza, S.S.; Khreich, W.; Hamou-Lhadj, A.; Gagnon, S., “A Trace Abstraction Approach for Host-Based Anomaly Detection,” in Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on, vol., no.,
pp. 1–8, 26–28 May 2015. doi:10.1109/CISDA.2015.7208644
Abstract: High false alarm rates and execution times are among the key issues in host-based anomaly detection systems. In this paper, we investigate the use of trace abstraction techniques for reducing the execution time of anomaly detectors while keeping the same accuracy. The key idea is to represent system call traces as traces of kernel module interactions and use the resulting abstract traces as input to known anomaly detection techniques, such as STIDE (the Sequence Time-Delay Embedding) and HMM (Hidden Markov Models). We performed experiments on three datasets, namely, the traditional UNM dataset as well as two modern datasets, Firefox and ADFA-LD. The results show that kernel module traces can lead to similar or fewer false alarms and considerably smaller execution times compared to raw system call traces for host-based anomaly detection systems.
Keywords: embedded systems; hidden Markov models; safety-critical software; ADFA-LD; Firefox; HMM; STIDE; UNM dataset; execution time; hidden Markov model; high false alarm rate; host-based anomaly detection; sequence time-delay embedding; trace abstraction approach; Accuracy; Detectors; Hidden Markov models; Kernel; Linux; Testing; Training; Host-based Anomaly Detection System; Software Dependability; Software Security; System Call Traces; Trace Analysis and Abstraction (ID#: 15-7140)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208644&isnumber=7208613
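
STIDE itself is compact enough to sketch: build a database of fixed-length windows from normal traces, then count unseen windows in a test trace. Per the paper's abstraction, the symbols below stand for kernel-module interactions; the traces are invented.

    def windows(trace, k=3):
        return {tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)}

    normal_db = windows(["net", "fs", "net", "sched", "fs", "net", "sched"])

    def mismatch_rate(trace, k=3):
        ws = [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]
        misses = sum(1 for w in ws if w not in normal_db)
        return misses / len(ws)

    print(mismatch_rate(["net", "fs", "net", "sched"]))       # 0.0: normal
    print(mismatch_rate(["crypto", "crypto", "net", "fs"]))   # 1.0: anomalous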

 

Chatterjee, S.; Chatterjee, P.S., “A Comparison Based Clustering Algorithm to Counter SSDF Attack in CWSN,” in Computational Intelligence and Networks (CINE), 2015 International Conference on, vol., no., pp. 194–195, 12–13 Jan. 2015. doi:10.1109/CINE.2015.46
Abstract: Cognitive Wireless Sensor Networks follow the IEEE 802.22 standard, which is based on the concept of cognitive radio. In this paper we study the Denial of Service (DOS) attack. The Spectrum Sensing Data Falsification (SSDF) attack is one such type of DOS attack. In this attack the attackers modify the sensing report in order to compel the Secondary User (SU) to make a wrong decision regarding the vacant spectrum bands in others’ networks. In this paper we propose a similarity-based clustering of sensing data to counter the above attack.
Keywords: cognitive radio; computer network security; radio spectrum management; wireless sensor networks; CWSN; DOS attack; IEEE 802.22 standard; SSDF attack; cognitive wireless sensor networks; comparison based clustering algorithm; denial of service attack; secondary user; similarity-based clustering; spectrum sensing data falsification attack; vacant spectrum band; Clustering algorithms; Cognitive radio; Complexity theory; Computer crime; Educational institutions; Sensors; Wireless sensor networks; Cognitive Wireless Sensor Network; Denial of Service attack; Spectrum Sensing Data Falsification attack (ID#: 15-7141)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053829&isnumber=7053782
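
A heavily simplified version of the counter-measure: cluster the received sensing reports around their consensus and drop outliers before the fusion decision. The paper's similarity-based clustering is more elaborate, and the threshold below is made up.

    from statistics import median

    def filter_reports(readings, tol=3.0):
        m = median(readings)
        return [r for r in readings if abs(r - m) <= tol]

    reports = [-92.1, -91.7, -93.0, -40.0, -92.5]   # -40.0 dBm: falsified "busy"
    honest = filter_reports(reports)
    decision = "occupied" if sum(honest) / len(honest) > -85 else "vacant"
    print(honest, decision)   # the falsified report no longer flips the decision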

 

Dutta, C.B.; Biswas, U., “Intrusion Detection System for Power-Aware OLSR,” in Computational Intelligence and Networks (CINE), 2015 International Conference on, vol., no., pp. 142–147, 12–13 Jan. 2015. doi:10.1109/CINE.2015.35
Abstract: Optimized Link State Routing (OLSR) is a standard proactive routing protocol for Wireless Sensor Networks (WSN). OLSR uses two kinds of control messages: Hello and Topology Control (TC). As these messages are un-authenticated, OLSR is prone to several attacks, namely black hole, wormhole, gray hole, etc. This paper focuses on the Sleep Deprivation Torture Attack on OLSR. The sleep deprivation attack is one of the most interesting attacks at layer 2, where the attacker tries to use a low energy node until all its energy is exhausted and the node goes into permanent sleep. This attack is also possible at the routing level. In OLSR, low energy nodes declare their status through the willingness property of the HELLO message. Using this information, an attacker node can deliberately choose a low energy node and forward all traffic through it, driving the low energy node into permanent sleep. In this paper we propose a specification-based Intrusion Detection System (IDS) for this type of attack. The performance of the proposed algorithm is studied with Network Simulator (NS2), and the effectiveness of the proposed scheme is demonstrated, along with a comparison with existing techniques.
Keywords: routing protocols; security of data; telecommunication network topology; telecommunication power management; wireless sensor networks; control messages; intrusion detection system; low energy node; network simulator; optimized link state routing; permanent sleep mode; power-aware OLSR; proactive routing protocol; sleep deprivation torture attack; topology control; wireless sensor network; Batteries; Energy efficiency; Monitoring; Routing; Routing protocols; Sensors; Wireless sensor networks; Intrusion Detection System (IDS); Optimized Link State routing (OLSR); Sleep Deprivation Torture Attack; Wireless Sensor Network (WSN)  (ID#: 15-7142)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053818&isnumber=7053782

 

Moyun Li; Cheng Yang; Jiayin Tian, “Video Selective Encryption Based on Hadoop Platform,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, vol., no., pp. 208–212, 13–14 Feb. 2015. doi:10.1109/CICT.2015.122
Abstract: Information security technology is one of the key supporting technologies for the new media of the radio and television industry. Video encryption is computationally and data intensive work. The traditional centralized data encryption system is not enough to cope with such huge amounts of data encryption, and its capacity is difficult to scale linearly with data growth. Known for its allocation of resources, cloud computing is a better choice for data processing. In this article the AES algorithm is used to encrypt the video slice layer. We build an encryption system on a Hadoop cluster based on the MapReduce framework to improve video encryption speed and optimize video encryption strategies.
Keywords: cloud computing; cryptography; data handling; parallel processing; pattern clustering; resource allocation; video signal processing; AES algorithm; Hadoop cluster; MapReduce framework; data encryption; data processing; information security technology; radio industry; resource allocation; television industry; video encryption speed; video selective encryption; video slice layer encryption; Cloud computing; Computers; Encryption; File systems; Programming; Streaming media; Hadoop; MapReduce; video encryption (ID#: 15-7143)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078696&isnumber=7078645
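
The core map step is slice-wise AES encryption, which parallelizes because each slice is processed independently. A sketch with the Python cryptography library follows; key and nonce handling are simplified, and a real deployment would run this inside Hadoop mappers rather than a local loop.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_slice(key, index, data):
        # CTR mode makes each slice independent, hence embarrassingly parallel.
        nonce = index.to_bytes(16, "big")     # unique per slice
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(data) + enc.finalize()

    key = os.urandom(32)
    slices = [os.urandom(1024) for _ in range(8)]   # stand-in slice payloads
    encrypted = [encrypt_slice(key, i, s) for i, s in enumerate(slices)]
    print(len(encrypted), "slices encrypted")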

 

Jian Zhang, “An Image Encryption Scheme Based on Cat Map and Hyperchaotic Lorenz System,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, vol., no., pp. 78–82, 13–14 Feb. 2015. doi:10.1109/CICT.2015.134
Abstract: In recent years, chaos-based image ciphers have been widely studied, and a growing number of schemes based on the permutation-diffusion architecture have been proposed. However, recent studies have indicated that those approaches based on low-dimensional chaotic maps/systems have the drawbacks of small key space and weak security. In this paper, a security-improved image cipher which utilizes the cat map and the hyperchaotic Lorenz system is reported. Compared with ordinary chaotic systems, hyperchaotic systems have more complex dynamical behaviors and more system variables, which demonstrates a greater potential for constructing a secure cryptosystem. In the diffusion stage, a plaintext-related key stream generation strategy is introduced, which further improves the security against known/chosen-plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important ones like key space analysis, key sensitivity analysis, and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme.
Keywords: cryptography; image processing; statistical analysis; cat map; chaos-based image cipher; complex dynamical behaviors; cryptosystem; hyperchaotic Lorenz system; image encryption scheme; key sensitivity analysis; key space analysis; key stream generation strategy; low-dimensional chaotic maps; permutation-diffusion architecture; security analysis; statistical analysis; Chaotic communication; Ciphers; Correlation; Encryption; image cipher; permutation-diffusion (ID#: 15-7144)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078671&isnumber=7078645
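
The permutation stage of such ciphers can be shown in a few lines: Arnold's cat map shuffles the pixel positions of an N x N image. The diffusion stage, which the paper drives with a hyperchaotic Lorenz keystream, is omitted here.

    import numpy as np

    def cat_map(img, rounds=3):
        # pixel (x, y) is replaced by the pixel at ((x+y) mod n, (x+2y) mod n)
        n = img.shape[0]
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = img
        for _ in range(rounds):
            out = out[(x + y) % n, (x + 2 * y) % n]
        return out

    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    print(cat_map(img))   # same histogram, scrambled positions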

 

Debnath, D.; Deb, S.; Kar, N., “An Advanced Image Encryption Standard Providing Dual Security: Encryption Using Hill Cipher & RGB Image Steganography,” in Computational Intelligence and Networks (CINE), 2015 International Conference on, vol., no., pp. 178–183, 12–13 Jan. 2015. doi:10.1109/CINE.2015.41
Abstract: In this paper, a new steganography method for the spatial domain has been proposed, which includes a new mapping technique for secret messages. The algorithm converts any kind of message to text using bit manipulation tables, applies Hill cipher techniques to it, and finally hides the message in the red, green, and blue planes of a selected image. The proposed algorithm thus first encrypts the message and then hides it in the cover image, which provides double security. The results of the proposed algorithm are analyzed and discussed using MSE, PSNR, SC, AD, MD, and NAE. The histograms of the cover and stego images are also shown.
Keywords: cryptography; image coding; image colour analysis; mean square error methods; steganography; AD; MD; MSE; NAE; PSNR; SC; average difference; bit manipulation tables; blue images; cover image histogram; double security; dual security; green images; hill cipher techniques; image encryption standard; image steganography; mapping technique; maximum difference; mean square error; message encryption; message hiding; normalized absolute error; peak signal-to-noise ratio; red images; secret messages; spatial domain; stegano image histogram; structural content; Algorithm design and analysis; Ciphers; Encryption; Histograms; Least Significant Bit; RGB images; Steganography; modified Hill cipher Spatial Domain (ID#: 15-7145)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053824&isnumber=7053782
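
The embedding step alone looks roughly like the following: spread the (already Hill-cipher-encrypted) message bits across the least significant bits of the RGB planes. Capacity checks and the paper's mapping tables are omitted, and the payload here is unencrypted for brevity.

    import numpy as np

    def embed(cover, payload):                # cover: H x W x 3 uint8 array
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = cover.reshape(-1).copy()
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite LSBs
        return flat.reshape(cover.shape)

    def extract(stego, nbytes):
        bits = stego.reshape(-1)[:nbytes * 8] & 1
        return np.packbits(bits).tobytes()

    cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    stego = embed(cover, b"secret")
    assert extract(stego, 6) == b"secret"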

 

Arote, P.; Arya, K.V., “Detection and Prevention Against ARP Poisoning Attack Using Modified ICMP and Voting,” in Computational Intelligence and Networks (CINE), 2015 International Conference on, vol., no., pp. 136–141, 12–13 Jan. 2015. doi:10.1109/CINE.2015.34
Abstract: Address Resolution Protocol (ARP) poisoning is the starting point for refined LAN attacks like denial-of-service (DOS) and Man-In-The-Middle (MITM). The weak point of ARP, namely that it is stateless, directly affects the security standards of networks, and especially Ethernet. In the proposed detection mechanism, traffic over the network is initially sniffed by a Central Server (CS). The CS then sends a trap ICMP ping packet, analyzes the response in terms of the ICMP reply, and successfully detects the attacker. In order to prevent ARP poisoning over a centralized system, a voting process is used to elect a legitimate CS. By validating and correcting the <IP, MAC> pair entries residing in hosts’ cache tables, the CS successfully prevents ARP poisoning while maintaining the performance of the system. Our technique, based on ICMP and voting, is backward compatible, low cost, easily deployable, and generates minimal traffic; it detects and prevents MITM-based ARP poisoning while overcoming the weaknesses of ARP.
Keywords: Internet; access protocols; computer network security; local area networks; ARP poisoning attack detection; ARP poisoning attack prevention; DOS attack; Ethernet; ICMP ping packet; Internet control message protocol; LAN attack; MAC address; MITM; address resolution protocol; backward compatibility; denial-of-service attack; man-in-the-middle; modified ICMP; IP networks; Local area networks; Logic gates; Protocols; Security; Servers; Unicast; ARP poisoning; Address Resolution Protocol; Attack; Man-In-The-Middle (ID#: 15-7146)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053817&isnumber=7053782
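
The correction step can be pictured as the central server reconciling each host's cache against its own verified view (built, in the paper, from trap ICMP probes). The tables below are invented.

    verified = {"10.0.0.1": "aa:aa:aa:aa:aa:aa", "10.0.0.2": "bb:bb:bb:bb:bb:bb"}

    def correct_cache(host_cache):
        fixed, suspects = {}, []
        for ip, mac in host_cache.items():
            good = verified.get(ip)
            if good and mac != good:
                suspects.append((ip, mac))    # likely poisoned entry
            fixed[ip] = good or mac
        return fixed, suspects

    cache = {"10.0.0.1": "cc:cc:cc:cc:cc:cc", "10.0.0.2": "bb:bb:bb:bb:bb:bb"}
    print(correct_cache(cache))   # first entry corrected and reported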

 

Panja, B.; Oros, J.; Britton, J.; Meharia, P.; Pati, S., “Intelligent Gateway for SCADA System Security: A Multi-Layer Attack Prevention Approach,” in Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2015 IEEE International Conference on, vol., no., pp. 1–6, 12–14 June 2015. doi:10.1109/CIVEMSA.2015.7158627
Abstract: This paper proposes an intelligent gateway system for SCADA networks to avoid DOS attacks. The proposed approach provides details about an SIG system architecture which establishes the connection between Master SIGs and Perimeter SIGs, the traffic flow, major alerts, and minor alerts. Simulation experiments are an indispensable phase in analyzing and assessing the security of SCADA (Supervisory Control and Data Acquisition) systems. Although numerous experiments have taken place, their limitations have still not been overcome. The SCADA Intelligence Gateway concept proposed in this paper is experimentally validated and shows its ability to secure the SCADA system from attacks. The intelligence learning mechanism established the ability to identify malicious traffic through bot production and simulation environments.
Keywords: SCADA systems; computer network security; learning (artificial intelligence); telecommunication traffic; DOS attack avoidance; SCADA intelligence gateway concept; SCADA system security; SIG system architecture; bot production; intelligence learning mechanism; intelligent gateway system; malicious traffic identification; master SIGs; multilayer attack prevention approach; perimeter SIGs; simulation environments; supervisory control and data acquisition systems; traffic flow; Artificial intelligence; Bandwidth; Monitoring; SCADA systems; Security; Temperature sensors; Bandwidth Allowance; DOS; Master SIG; Perimeter SIG; SCADA (ID#: 15-7147)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158627&isnumber=7158585
 


Note:

Articles listed on these pages have been found on publicly available Internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Confinement 2015

 

 
SoS Logo

Confinement

2015


In photonics, confinement is important to loss avoidance. In quantum theory, it relates to energy levels. Confinement is also important in the contexts of cyber-physical systems, privacy, resiliency, and composability. The articles cited here cover these concepts and were presented or published in 2015.



Ed Novak, Yutao Tang, Zijiang Hao, Qun Li, Yifan Zhang; “Physical Media Covert Channels on Smart Mobile Devices,” UbiComp ’15, Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 2015, Pages 367–378. doi:10.1145/2750858.2804253
Abstract: In recent years mobile smart devices such as tablets and smartphones have exploded in popularity. We are now in a world of ubiquitous smart devices that people rely on daily and carry everywhere. This is a fundamental shift for computing in two ways. Firstly, users increasingly place unprecedented amounts of sensitive information on these devices, which paints a precarious picture. Secondly, these devices commonly carry many physical world interfaces. In this paper, we propose information leakage malware, specifically designed for mobile devices, which uses covert channels over physical “real-world” media, such as sound or light. This malware is stealthy; able to circumvent current, and even state-of-the-art defenses to enable attacks including privilege escalation, and information leakage. We go on to present a defense mechanism, which balances security with usability to stop these attacks.
Keywords: covert channel, physical media, privacy, security, sensors, smart mobile device (ID#: 15-6954)
URL: http://doi.acm.org/10.1145/2750858.2804253
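
The modulation at the heart of such channels can be conveyed with a toy on/off scheme: a sender emits high or low “amplitude” bursts and a receiver thresholds the samples. Real channels over sound or light, as in the paper, need far noisier decoding; both ends are simulated in-process here.

    SAMPLES_PER_BIT = 4

    def modulate(bits, high=0.9, low=0.1):
        return [high if b else low for b in bits for _ in range(SAMPLES_PER_BIT)]

    def demodulate(samples, threshold=0.5):
        bits = []
        for i in range(0, len(samples), SAMPLES_PER_BIT):
            frame = samples[i:i + SAMPLES_PER_BIT]
            bits.append(int(sum(frame) / len(frame) > threshold))
        return bits

    msg = [1, 0, 1, 1, 0]
    assert demodulate(modulate(msg)) == msg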

 

Danfeng Zhang, Yao Wang, G. Edward Suh, Andrew C. Myers; “A Hardware Design Language for Timing-Sensitive Information-Flow Security,” ASPLOS ’15, Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, March 2015, Pages 503–516. doi:10.1145/2775054.2694372
Abstract: Information security can be compromised by leakage via low-level hardware features. One recently prominent example is cache probing attacks, which rely on timing channels created by caches. We introduce a hardware design language, SecVerilog, which makes it possible to statically analyze information flow at the hardware level. With SecVerilog, systems can be built with verifiable control of timing channels and other information channels. SecVerilog is Verilog, extended with expressive type annotations that enable precise reasoning about information flow. It also comes with rigorous formal assurance: we prove that SecVerilog enforces timing-sensitive noninterference and thus ensures secure information flow. By building a secure MIPS processor and its caches, we demonstrate that SecVerilog makes it possible to build complex hardware designs with verified security, yet with low overhead in time, space, and HW designer effort.
Keywords: dependent types, hardware description language, information flow control, timing channels (ID#: 15-6955)
URL: http://doi.acm.org/10.1145/2775054.2694372

 

C. Louison, F. Ferlay, D. Keller, D. Mestre; “Vibrotactile Feedback for Collision Awareness,” British HCI ’15, Proceedings of the 2015 British HCI Conference, July 2015, Pages 277–278. doi:10.1145/2783446.2783609
Abstract: Magnetic Confinement Fusion machines called tokamaks (e.g. the ITER and WEST projects), as well as many industrial projects, require a high integration level in a confined volume. The feasibility of installation and maintenance by an operator has to be considered in the early stages of the design. Virtual reality technologies have opened new perspectives and solutions to take assembly and maintenance constraints into account using virtual mock-ups. In our applications, the human factor plays an important role. Since the operator interacts in a very tight and confined environment, he has to pay attention to his whole body relative to the virtual environment, in the absence of haptic feedback. In this context, enriched sensorial information, called “collision awareness feedback”, must be defined to favour an appropriate spatial behavior of the operator with respect to the environment. In this paper, we present a preliminary study testing the effect of vibrotactile feedback in a simple tracking task, compared to pure visual feedback.
Keywords: assembly task, collision awareness, tactile sense, vibrotactile, virtual human, virtual reality (ID#: 15-6956)
URL: http://doi.acm.org/10.1145/2783446.2783609

 

Petr Hosek, Cristian Cadar; “VARAN the Unbelievable: An Efficient N-version Execution Framework,” ASPLOS ’15, Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, March 2015, Pages 339–353. doi:10.1145/2775054.2694390
Abstract: With the widespread availability of multi-core processors, running multiple diversified variants or several different versions of an application in parallel is becoming a viable approach for increasing the reliability and security of software systems. The key component of such N-version execution (NVX) systems is a runtime monitor that enables the execution of multiple versions in parallel. Unfortunately, existing monitors impose either a large performance overhead or rely on intrusive kernel-level changes. Moreover, none of the existing solutions scales well with the number of versions, since the runtime monitor acts as a performance bottleneck.  In this paper, we introduce Varan, an NVX framework that combines selective binary rewriting with a novel event-streaming architecture to significantly reduce performance overhead and scale well with the number of versions, without relying on intrusive kernel modifications.  Our evaluation shows that Varan can run NVX systems based on popular C10k network servers with only a modest performance overhead, and can be effectively used to increase software reliability using techniques such as transparent failover, live sanitization and multi-revision execution.
Keywords: N-version execution, event streaming, live sanitization, multi-revision execution, record-replay, selective binary rewriting, transparent failover (ID#: 15-6957)
URL: http://doi.acm.org/10.1145/2775054.2694390
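
At its simplest, N-version execution is “run the versions, compare the results”. The toy monitor below only conveys that comparison idea; Varan itself synchronizes real processes at the system-call level via selective binary rewriting.

    def nvx(versions, *args):
        results = [v(*args) for v in versions]
        if len(set(results)) != 1:
            raise RuntimeError(f"versions diverged: {results}")
        return results[0]

    v1 = lambda x: x * 2
    v2 = lambda x: x + x
    print(nvx([v1, v2], 21))   # 42; a divergence would raise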

 

Adam Procter, William L. Harrison, Ian Graves, Michela Becchi, Gerard Allwein; “Semantics Driven Hardware Design, Implementation, and Verification with ReWire,” LCTES’15, Proceedings of the 16th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems 2015, CD-ROM, June 2015, Article No. 13. doi:10.1145/2670529.2754970
Abstract: There is no such thing as high assurance without high assurance hardware. High assurance hardware is essential, because any and all high assurance systems ultimately depend on hardware that conforms to, and does not undermine, critical system properties and invariants. And yet, high assurance hardware development is stymied by the conceptual gap between formal methods and hardware description languages used by engineers. This paper presents ReWire, a functional programming language providing a suitable foundation for formal verification of hardware designs, and a compiler for that language that translates high-level, semantics-driven designs directly into working hardware. ReWire’s design and implementation are presented, along with a case study in the design of a secure multicore processor, demonstrating both ReWire’s expressiveness as a programming language and its power as a framework for formal, high-level reasoning about hardware systems.
Keywords: (not provided) (ID#: 15-6958)
URL: http://doi.acm.org/10.1145/2670529.2754970

 

Po-Hsun Wu, Mark Po-Hung Lin, Xin Li, Tsung-Yi Ho; “Common-Centroid FinFET Placement Considering the Impact of Gate Misalignment,” ISPD ’15, Proceedings of the 2015 Symposium on International Symposium on Physical Design, March 2015, Pages 25–31. doi:10.1145/2717764.2717769
Abstract: The FinFET technology has been regarded as a better alternative among different device technologies at 22nm node and beyond due to more effective channel control and lower power consumption. However, the gate misalignment problem resulting from process variation based on the FinFET technology becomes even severer compared with the conventional planar CMOS technology. Such misalignment may increase the threshold voltage and decrease the drain current of a single transistor. When applying the FinFET technology to analog circuit design, the variation of drain currents will destroy the current matching among transistors and degrade the circuit performance. In this paper, we present the first FinFET placement technique for analog circuits considering the impact of gate misalignment together with systematic and random mismatch. Experimental results show that the proposed algorithms can obtain an optimized common-centroid FinFET placement with much better current matching.
Keywords: analog placement, common centroid, finfet, gate misalignment (ID#: 15-6959)
URL: http://doi.acm.org/10.1145/2717764.2717769

 

Alejandro Russo; “Functional Pearl: Two Can Keep a Secret, if One of Them Uses Haskell,” ICFP 2015, Proceedings of the 20th ACM SIGPLAN International Conference on Functional Programming, August 2015, Pages 280–288. doi:10.1145/2784731.2784756
Abstract: For several decades, researchers from different communities have independently focused on protecting confidentiality of data. Two distinct technologies have emerged for such purposes: Mandatory Access Control (MAC) and Information-Flow Control (IFC)—the former belonging to operating systems (OS) research, while the latter to the programming languages community. These approaches restrict how data gets propagated within a system in order to avoid information leaks. In this scenario, Haskell plays a unique privileged role: it is able to protect confidentiality via libraries. This pearl presents a monadic API which statically protects confidentiality even in the presence of advanced features like exceptions, concurrency, and mutable data structures. Additionally, we present a mechanism to safely extend the library with new primitives, where library designers only need to indicate the read and write effects of new operations.
Keywords: information-flow control, library, mandatory access control, security (ID#: 15-6960)
URL: http://doi.acm.org/10.1145/2784731.2784756
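
The paper's API is a Haskell monad, but the labeled-value discipline behind MAC-style libraries can be mirrored in any language. The Python stand-in below only illustrates the no-read-up / no-write-down flavor with two levels; all names are invented.

    LEVELS = {"public": 0, "secret": 1}

    class Labeled:
        def __init__(self, label, value):
            self.label, self._value = label, value

    def unlabel(clearance, labeled):
        # reading data above the caller's clearance is forbidden
        if LEVELS[labeled.label] > LEVELS[clearance]:
            raise PermissionError("no read up")
        return labeled._value

    def publish(ctx_label, channel_label, value):
        # a secret context may not write to a public channel
        if LEVELS[ctx_label] > LEVELS[channel_label]:
            raise PermissionError("no write down")
        print(f"[{channel_label}] {value}")

    secret = Labeled("secret", "launch codes")
    publish("public", "public", "hello")   # allowed
    print(unlabel("secret", secret))       # allowed
    # unlabel("public", secret) or publish("secret", "public", ...) would raise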

 

Mehdi Bagherzadeh, Hridesh Rajan; “Panini: A Concurrent Programming Model for Solving Pervasive and Oblivious Interference,” MODULARITY 2015, Proceedings of the 14th International Conference on Modularity, March 2015, Pages 93–108. doi:10.1145/2724525.2724568
Abstract: Modular reasoning about concurrent programs is complicated by the possibility of interferences happening between any two instructions of a task (pervasive interference), and these interferences not giving out any information about the behaviors of potentially interfering concurrent tasks (oblivious interference). Reasoning about a concurrent program would be easier if a programmer modularly and statically (1) knows precisely the program points at which interferences may happen (sparse interference), and (2) has some insights into behaviors of potentially interfering tasks at these points (cognizant interference). In this work we present Panini, a core concurrent calculus which guarantees sparse interference, by controlling sharing among concurrent tasks, and cognizant interference, by controlling dynamic name bindings and accessibility of states of tasks. Panini promotes capsule-oriented programming whose concurrently running capsules own their states, communicate by asynchronous invocations of their procedures and dynamically transfer ownership. Panini limits sharing among two capsules to other capsules and futures, limits accessibility of a capsule states to only through its procedures and dispatches a procedure invocation on the static type of its receiver capsule. We formalize Panini, present its semantics and illustrate how its interference model, using behavioral contracts, enables Hoare-style modular reasoning about concurrent programs with interference.
Keywords: Pervasive interference, capsule-oriented programming, message passing, modular reasoning, oblivious interference (ID#: 15-6961)
URL: http://doi.acm.org/10.1145/2724525.2724568

 

Yuanzhong Xu, Emmett Witchel; “Maxoid: Transparently Confining Mobile Applications with Custom Views of State,” EuroSys ’15, Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 26. doi:10.1145/2741948.2741966
Abstract: We present Maxoid, a system that allows an Android app to process its sensitive data by securely invoking other, untrusted apps. Maxoid provides secrecy and integrity for both the invoking app and the invoked app. For each app, Maxoid presents custom views of private and public state (files and data in content providers) to transparently redirect unsafe data flows and minimize disruption. Maxoid supports unmodified apps with full security guarantees, and also introduces new APIs to improve usability. We show that Maxoid can improve security for popular Android apps with minimal performance overheads.
Keywords: (not provided) (ID#: 15-6962)
URL: http://doi.acm.org/10.1145/2741948.2741966

 

Stefan Haar, Salim Perchy, Camilo Rueda, Frank Valencia; “An Algebraic View of Space/Belief and Extrusion/Utterance for Concurrency/Epistemic Logic,” PPDP ’15, Proceedings of the 17th International Symposium on Principles and Practice of Declarative Programming, July 2015, Pages 161–172. doi:10.1145/2790449.2790520
Abstract: We enrich spatial constraint systems with operators to specify information and processes moving from a space to another. We shall refer to these news structures as spatial constraint systems with extrusion. We shall investigate the properties of this new family of constraint systems and illustrate their applications. From a computational point of view the new operators provide for process/information extrusion, a central concept in formalisms for mobile communication. From an epistemic point of view extrusion corresponds to a notion we shall call utterance; a piece of information that an agent communicates to others but that may be inconsistent with the agent’s beliefs. Utterances can then be used to express instances of epistemic notions, which are common place in social media, such as hoaxes or intentional lies. Spatial constraint systems with extrusion can be seen as complete Heyting algebras equipped with maps to account for spatial and epistemic specifications.
Keywords: extrusion, lies, mobility, social networks, space, utterance (ID#: 15-6963)
URL: http://doi.acm.org/10.1145/2790449.2790520

 

Qi Hu, Peng Liu, Michael C. Huang, Xiang-Hui Xie; “Exploiting Transmission Lines on Heterogeneous Networks-on-Chip to Improve the Adaptivity and Efficiency of Cache Coherence,” NOCS ’15, Proceedings of the 9th International Symposium on Networks-on-Chip, September 2015, Article No. 14. doi:10.1145/2786572.2786576
Abstract: Emerging heterogeneous interconnects have shown lower latency and higher throughput, which can improve the efficiency of communication and create new opportunities for memory system designs. In this paper, transmission lines are employed as a latency-optimized network and combined with a packet-switched network to create heterogeneous interconnects improving the efficiencies of on-chip communication and cache coherence. We take advantage of this heterogeneous interconnect design, and keep cache coherence adaptively based on data locality. Different type of messages are adaptively directed through selected medium of the heterogeneous interconnects to enhance cache coherence effectiveness. Compared with a state-of-the-art coherence mechanism, the proposed technique can reduce the coherence overhead by 24%, reduce the network energy consumption by 35%, and improve the system performance by 25% on a 64-core system.
Keywords: Cache coherence, heterogeneous networks-on-chip (ID#: 15-6964)
URL: http://doi.acm.org/10.1145/2786572.2786576

 

Yajin Zhou, Kunal Patel, Lei Wu, Zhi Wang, Xuxian Jiang; “Hybrid User-level Sandboxing of Third-party Android Apps,” ASIACCS ’15, Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 19–30. doi:10.1145/2714576.2714598
Abstract: Users of Android phones increasingly entrust personal information to third-party apps. However, recent studies reveal that many apps, even benign ones, could leak sensitive information without user awareness or consent. Previous solutions either require to modify the Android framework thus significantly impairing their practical deployment, or could be easily defeated by malicious apps using a native library.  In this paper, we propose AppCage, a system that thoroughly confines the run-time behavior of third-party Android apps without requiring framework modifications or root privilege. AppCage leverages two complimentary user-level sandboxes to interpose and regulate an app’s access to sensitive APIs. Specifically, dex sandbox hooks into the app’s Dalvik virtual machine instance and redirects each sensitive framework API to a proxy which strictly enforces the user-defined policies, and native sandbox leverages software fault isolation to prevent the app’s native libraries from directly accessing the protected APIs or subverting the dex sandbox. We have implemented a prototype of AppCage. Our evaluation shows that AppCage can successfully detect and block attempts to leak private information by third-party apps, and the performance overhead caused by AppCage is negligible for apps without native libraries and minor for apps with them.
Keywords: android, dalvik hooking, software fault isolation (ID#: 15-6965)
URL: http://doi.acm.org/10.1145/2714576.2714598
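
API interposition, the core of the dex sandbox, can be pictured as a policy-enforcing proxy in front of sensitive calls. AppCage itself hooks the Dalvik VM; the Python stand-in below only mirrors the idea, and every name in it is invented.

    POLICY = {"read_contacts": False, "get_location": True}   # user-defined

    class SensitiveAPI:
        def read_contacts(self):
            return ["alice", "bob"]
        def get_location(self):
            return (52.5, 13.4)

    class PolicyProxy:
        # Routes every call through a policy check before the real API.
        def __init__(self, target, policy):
            self._target, self._policy = target, policy
        def __getattr__(self, name):
            if not self._policy.get(name, False):
                raise PermissionError(f"{name} blocked by user policy")
            return getattr(self._target, name)

    api = PolicyProxy(SensitiveAPI(), POLICY)
    print(api.get_location())      # allowed
    # api.read_contacts()          # raises PermissionError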

 

Elena Cardillo, Maria Teresa Chiaravalloti, Erika Pasceri; “Assessing ICD-9-CM and ICPC-2 Use in Primary Care. An Italian Case Study,” DH ’15, Proceedings of the 5th International Conference on Digital Health 2015, May 2015, Pages 95–102. doi:10.1145/2750511.2750525
Abstract: Controlled vocabularies and standardized coding systems play a fundamental role in the healthcare domain. The International Classification of Diseases (ICD) is one of the most widely used classification systems for clinical problems and procedures. In Italy the 9th revision of the standard is used and recommended in primary care for encoding prescription documents. This paper describes a statistical and terminological study to assess ICD-9-CM use in primary care and its comparison to the International Classification of Primary Care (ICPC), specifically designed for primary care. The study has been conducted by analyzing the clinical records of about 199,000 patients provided by a set of 166 General Practitioners (GPs) in different Italian areas. The analysis has been based on several techniques for detecting coding practice and errors, like natural language processing and text-similarity comparison. Results showed that the selected GPs do not fully exploit the diseases and procedures descriptive capabilities of ICD-9-CM due to its complexity. Furthermore, compared to ICPC-2, it resulted less feasible in the primary care setting, particularly for the high granularity of the structure and for the lack of reasons for encounters.
Keywords: classification systems, e-health, primary care, terminology icd icpc (ID#: 15-6966)
URL: http://doi.acm.org/10.1145/2750511.2750525

 

Sophia Drossopoulou, James Noble, Mark S. Miller; “Swapsies on the Internet: First Steps towards Reasoning about Risk and Trust in an Open World,” PLAS’15, Proceedings of the 10th ACM Workshop on Programming Languages and Analysis for Security, July 2015, Pages 2–15. doi:10.1145/2786558.2786564
Abstract: Contemporary open systems use components developed by many different parties, linked together dynamically in unforeseen constellations. Code needs to live up to strict security specifications: it has to ensure the correct functioning of its objects when they collaborate with external objects which may be malicious.  In this paper we propose specifications that model risk and trust in such open systems. We specify Miller, Van Cutsem, and Tulloh’s escrow exchange example, and discuss the meaning of such a specification. We argue informally that the code satisfies its specification.
Keywords: (not provided) (ID#: 15-6967)
URL: http://doi.acm.org/10.1145/2786558.2786564

 

Florian Floyd Mueller, Joe Marshall, Rohit Ashok Khot, Stina Nylander, Jakob Tholander; “Understanding Sports-HCI by Going Jogging at CHI,” CHI EA ’15, Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, April 2015, Pages 869–872. doi:10.1145/2702613.2727688
Abstract: More and more technologies are emerging that aim to support sports activities, for example there are jogging apps, cycling computers and quadcopters for sportspeople to videorecord their actions. These new technologies appear to become more and more popular, yet interaction design knowledge how to support the associated exertion experiences is still limited. In order to bring practitioners and academics interested in sports-HCI together and examine the topic “in the wild”, we propose to go outside and jog around the CHI venue while using and discussing some of these new technologies. The goal is to investigate and shape the future of the field of sports-HCI.
Keywords: exercise, exertion, sport (ID#: 15-6968)
URL: http://doi.acm.org/10.1145/2702613.2727688

 

H.-S. Philip Wong, He Yi, Maryann Tung, Kye Okabe; “Physical Layout Design of Directed Self-Assembly Guiding Alphabet for IC Contact Hole/via Patterning,” ISPD ’15, Proceedings of the 2015 Symposium on International Symposium on Physical Design, March 2015, Pages 65–66.  doi:10.1145/2717764.2723574
Abstract: The continued scaling of feature size has brought increasingly significant challenges to conventional optical lithography.[1-3] The rising cost and limited resolution of current lithography technologies have opened up opportunities for alternative patterning approaches. Among the emerging patterning approaches, block copolymer self-assembly for device fabrication has been envisioned for over a decade. Block copolymer DSA is a result of spontaneous microphase separation of block copolymer films, forming periodic microdomains including cylinders, spheres, and lamellae, in the same way that snowflakes and clamshells are formed in nature - by self-assembly due to forces of nature (Fig. 1a). DSA can generate closely packed and well controlled sub-20 nm features with low cost and high throughput, therefore stands out among other emerging lithographic solutions, including extreme ultraviolet lithography (EUV), electron beam lithography (e-beam), and multiple patterning lithography (MPL).[2;6]  Previous research has shown a high degree of dimensional control of the self-assembled features over large areas with long range ordering and periodic structures.[5; 6] The exquisite dimensional control at nanometer-scale feature sizes is one of the most attractive properties of block copolymer self-assembly. At the same time, device and circuit fabrication for the semiconductor industry requires accurate placement of desired features at irregular positions on the chip. The need to coax the self-assembled features into circuit layout friendly location is a roadblock for introducing self-assembly into semiconductor manufacturing. Directed self-assembly (DSA) and the use of topography to direct the self-assembly (graphoepitaxy) have shown great potential in overcoming the current lithography limits.[4]  Recognizing that typical circuit layouts do not require long range order, we adopt a lithography sub-division approach akin to double-patterning and spacer patterning, using small guiding topographical templates. Guiding topographical templates with sizes of the order of the natural pitch of the block copolymer can effectively guide the self-assembly of block polymer (Fig. 1b-c). Therefore, circuit contact hole patterns can be placed at arbitrary location by first patterning a coarse guiding template using conventional lithography.[7; 8] This procedure enables generating a higher resolution feature at a location determined by the coarse lithographic pattern. The size and registration of the features are determined by parameters of the template as well as the block copolymer itself.  Using this technique, we have proposed a general template design strategy that relates the block copolymer material properties to the target technology node requirements, and demonstrated contact hole patterning at the technology node from 22 nm to 7 nm, for both memory circuits and random logic circuits.[11] The critical dimension of DSA patterns is highly uniform, with their position controlled precisely. As technology scales down, the contact/via density scales up, which simultaneously opens the possibility of using multiple-hole DSA patterns for contact hole patterning and brings in the challenge of printing guiding templates at a small pitch. Using DSA for patterning IC contacts requires further knowledge of the placement of contacts in an IC layout, as the placement of contacts in the IC layout determines the shape and size of the required templates.  
We hypothesize that there exists a limited set of guiding templates, analogous to the letters of an alphabet, that can cover all the possible contact hole patterns of a full chip contact layer.[12] This alphabet approach would significantly simplify DSA contact hole patterning when the total number of letters of the alphabet is small and would allow us to focus on fully characterizing only the design spaces for the letters of the alphabet. By positioning these letters in various locations we would be able to pattern the full chip contact layer in the same way that the 26 letters of the English alphabet are sufficient to compose an English newspaper. Some of the most basic letters, such as circular templates for 1-hole DSA patterns and elliptical templates for 2- and 3-hole DSA patterns, have been studied extensively.[12] Establishing a complete alphabet, though, requires the examination of the entire standard cell library, as well as the optimization of the layout to further reduce the number of letters in the alphabet.[5]  The broad community of DSA researchers has made tremendous progress in the past few years. However, to make DSA fully qualified for large-scale semiconductor manufacturing, technical issues such as defectivity reduction and overlay optimization must be solved. While many researchers are developing new block copolymer materials for better chemical properties, there remains more work to be accomplished at the circuit and system design level, including IC layout optimization to improve DSA process yield and DSA full-chip hotspot detection. Challenges such as optimizing and tuning the template design based on overlay, defectivity, and lithography requirements will need to be further investigated before practical implementation in industry.
Keywords: block copolymer, contact hole, directed self-assembly, layout design, lithography (ID#: 15-6969)
URL: http://doi.acm.org/10.1145/2717764.2723574

 

Yehuda Afek, Anat Bremler-Barr, Shir Landau Feibish, Liron Schiff; “Sampling and Large Flow Detection in SDN,” SIGCOMM ’15, Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, August 2015, Pages 345–346. doi:10.1145/2785956.2790009
Abstract: (not provided)
Keywords: heavy hitters, network monitoring, software defined networks (ID#: 15-6970)
URL: http://doi.acm.org/10.1145/2785956.2790009

 

Yasmine Badr, Andres Torres, Puneet Gupta; “Mask Assignment and Synthesis of DSA-MP Hybrid Lithography for sub-7nm Contacts/Vias,” DAC ’15, Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 70. doi:10.1145/2744769.2744868
Abstract: Integrating Directed Self Assembly (DSA) and Multiple Patterning (MP) is an attractive option for printing contact and via layers for sub-7nm process nodes. In the DSA-MP hybrid process, an optimized decomposition algorithm is required to perform the MP mask assignment while considering the DSA advantages and limitations. In this paper, we present an optimal Integer Linear Programming (ILP) formulation for the simultaneous DSA grouping and MP decomposition problem for contacts and vias. Then we propose a heuristic and develop an efficient algorithm for solving the same problem. In comparison to the optimal ILP results, the proposed algorithm is 197x faster and results in 16.3% more violations. The proposed algorithm produces 56% fewer violations than the sequential approaches which perform DSA grouping followed by MP decomposition and vice versa.
Keywords: DSA, MP, Moore’s law, decomposition, directed self assembly, multiple patterning, technology (ID#: 15-6971)
URL: http://doi.acm.org/10.1145/2744769.2744868
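
The grouping-then-decomposition baseline that the abstract compares against can be pictured with a small sketch. The following minimal Python fragment (all names invented; the paper's actual method is an ILP plus a custom heuristic, not this greedy pass) treats mask assignment as coloring on a contact conflict graph, after DSA grouping has merged compatible neighbouring contacts into single template nodes:

def greedy_mask_assignment(nodes, conflicts, num_masks=2):
    """nodes: iterable of template ids (after DSA grouping has merged
    compatible neighbouring contacts); conflicts: set of (a, b) pairs too
    close to print on the same mask. Returns (assignment, violations)."""
    adjacency = {n: set() for n in nodes}
    for a, b in conflicts:
        adjacency[a].add(b)
        adjacency[b].add(a)
    assignment, violations = {}, 0
    for n in nodes:  # fixed order; the paper's ILP optimizes globally instead
        used = {assignment[m] for m in adjacency[n] if m in assignment}
        free = [k for k in range(num_masks) if k not in used]
        assignment[n] = free[0] if free else 0
        violations += 0 if free else 1
    return assignment, violations

masks, bad = greedy_mask_assignment(["t1", "t2", "t3"],
                                    {("t1", "t2"), ("t2", "t3")})

The point of the simultaneous formulation in the paper is precisely that such sequential passes (group first, color second, or vice versa) leave violations that a joint optimization can avoid.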

 

Xin Chen, Sencun Zhu; “DroidJust: Automated Functionality-Aware Privacy Leakage Analysis for Android Applications,” WiSec ’15, Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 5. doi:10.1145/2766498.2766507
Abstract: Android applications (apps for short) can send out users’ sensitive information against users’ intention. Based on statistics from Genome and Mobile-Sandboxing, 55.8% and 59.7% of Android malware families, respectively, feature privacy leakage. Prior approaches to detecting privacy leakage on smartphones primarily focused on the discovery of sensitive information flows. However, Android apps also send out users’ sensitive information for legitimate functions. Due to the fuzzy nature of the privacy leakage detection problem, we formulate it as a justification problem, which aims to justify whether a sensitive information transmission in an app serves any purpose, either for intended functions of the app itself or for other related functions. This formulation makes the problem more distinct and objective, and therefore more feasible to solve than before. We propose DroidJust, an automated approach to justifying an app’s sensitive information transmission by bridging the gap between the sensitive information transmission and application functions. We also implement a prototype of DroidJust and evaluate it with over 6000 Google Play apps and over 300 known malware samples collected from VirusTotal. Our experiments show that our tool can effectively and efficiently analyze Android apps with respect to their sensitive information flows and functionalities, and can greatly assist in detecting privacy leakage.
Keywords: Android security, privacy leakage detection, static taint analysis (ID#: 15-6972)
URL: http://doi.acm.org/10.1145/2766498.2766507
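
To make the notion of a sensitive information flow concrete, here is a deliberately simplified, hypothetical sketch of static taint propagation. Real analyses such as the one described operate on Dalvik bytecode with far richer source/sink and flow models, so everything below (the statement format, source names, sink names) is illustrative only:

SOURCES = {"getDeviceId", "getLastKnownLocation"}   # hypothetical source APIs
SINKS = {"sendTextMessage", "httpPost"}             # hypothetical sink APIs

def taint_analysis(statements):
    """statements: (lhs, op, args) triples in a toy SSA-like form.
    Propagate a 'sensitive' label from sources through assignments and
    report any tainted value that reaches a sink."""
    tainted, leaks = set(), []
    for lhs, op, args in statements:
        if op in SOURCES:
            tainted.add(lhs)
        elif any(a in tainted for a in args):
            if op in SINKS:
                leaks.append((op, args))
            elif lhs:
                tainted.add(lhs)      # taint flows through the assignment
    return leaks

prog = [("id", "getDeviceId", ()), ("msg", "concat", ("id", "hello")),
        (None, "httpPost", ("msg",))]
print(taint_analysis(prog))           # [('httpPost', ('msg',))]

DroidJust's contribution is the step after this: deciding whether such a flow serves a legitimate app function rather than merely reporting it.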
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cyber-Physical Security Expert Systems 2015

 

 
SoS Logo

Cyber-Physical Security Expert Systems

2015


Expert systems based on fuzzy logic hold promise for solving many problems. The research presented here addresses the use of expert systems to solve security problems in cyber-physical systems, including the Internet of Things, wireless sensor networks subject to black hole attacks, industrial plants, the Smart Grid, and vehicular and transportation networks. For the Science of Security community, the hard problems of resiliency, metrics, composability, and privacy are addressed. These works were presented in 2015.



Dieter Gollmann, Pavel Gurikov, Alexander Isakov, Marina Krotofil, Jason Larsen, Alexander Winnicki; “Cyber-Physical Systems Security: Experimental Analysis of a Vinyl Acetate Monomer Plant,” CPSS ’15, Proceedings of the 1st ACM Workshop on Cyber-Physical System Security, April 2015, Pages 1–12. doi:10.1145/2732198.2732208
Abstract: We describe an approach for analysing and attacking the physical part (a process) of a cyber-physical system. The stages of this approach are demonstrated in a case study, a simulation of a vinyl acetate monomer plant. We want to demonstrate in particular where security has to rely on expert knowledge in the domain of the physical components and processes of a system and that there are major challenges for converting cyber attacks into successful cyber-physical attacks.
Keywords: (not provided) (ID#: 15-7023)
URL: http://doi.acm.org/10.1145/2732198.2732208

 

Bharathan Balaji, Mohammad Abdullah Al Faruque, Nikil Dutt, Rajesh Gupta, Yuvraj Agarwal; “Models, Abstractions, and Architectures: The Missing Links in Cyber-Physical Systems,” DAC’15, Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 82. doi:10.1145/2744769.2747936
Abstract: Bridging disparate realms of physical and cyber system components requires models and methods that enable rapid evaluation of design alternatives in cyber-physical systems (CPS). The diverse intellectual traditions of the physical and mathematical sciences make this task exceptionally hard. This paper seeks to explore potential solutions by examining specific examples of CPS applications in automobiles and smart buildings. Both smart buildings and automobiles are complex systems with embedded knowledge across several domains. We present our experiences with the development of CPS applications to illustrate the challenges that arise when expertise across domains is integrated into the system, and show that the creation of models, abstractions, and architectures that address these challenges is key to next generation CPS applications.
Keywords: abstractions, architectures, automobiles, cyber-physical systems, models, smart buildings (ID#: 15-7024)
URL: http://doi.acm.org/10.1145/2744769.2747936

 

Catia Trubiani, Anne Koziolek, Lucia Happe; “Exploiting Software Performance Engineering Techniques to Optimise the Quality of Smart Grid Environments,” ICPE’15, Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, January 2015, Pages 199–202. doi:10.1145/2668930.2695532
Abstract: This paper discusses the challenges and opportunities of Software Performance Engineering (SPE) research in smart-grid (SG) environments. We envision using SPE techniques to optimise the quality of information and communications technology (ICT) applications, and thus optimise the quality of the overall SG. The overall process of Monitoring, Analysing, Planning, and Executing (MAPE) is discussed to highlight the current open issues of the domain and the expected benefits.
Keywords: quality optimisation, smart grid environment, software performance engineering (ID#: 15-7025)
URL: http://doi.acm.org/10.1145/2668930.2695532

 

Marina Krotofil, Jason Larsen, Dieter Gollmann; “The Process Matters: Ensuring Data Veracity in Cyber-Physical Systems,” ASIACCS ’15, Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 133–144. doi:10.1145/2714576.2714599
Abstract: Cyber-physical systems are characterized by an IT infrastructure controlling effects in the physical world. Attacks are intentional actions trying to cause undesired physical effects. When process data originating in the physical world is manipulated before being handed to the IT infrastructure, the data security property called “veracity” or trustworthiness will be violated. There is no canonical IT security solution guaranteeing that the inputs from a sensor faithfully represent reality. However, the laws of physics may help the defender to detect impossible or implausible sensor readings.  This paper proposes a process-aware approach to detect when a sensor signal is being maliciously manipulated. We present a set of lightweight real-time algorithms for spoofing sensor signals directly at the microcontroller of the field device. The detection of spoofed measurements takes the form of plausibility and consistency checks with the help of the correlation entropy in a cluster of related sensors. We use the Tennessee Eastman challenge process to demonstrate the performance of our approach and to highlight aspects relevant to the detection effectiveness.
Keywords: cluster entropy, cyber-physical systems, plausibility checks, signal spoofing, veracity (ID#: 15-7026)
URL: http://doi.acm.org/10.1145/2714576.2714599
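
As a rough illustration of the plausibility-check idea (not the paper's exact correlation-entropy algorithm), a defender can compare a sensor against the current readings of its physically correlated cluster and against the historical spread of the residual under normal operation. All names and the threshold k below are assumptions:

from statistics import mean, pstdev

def plausible(reading, cluster_readings, history_residuals, k=3.0):
    """cluster_readings: current values from physically related sensors;
    history_residuals: past differences between this sensor and the
    cluster mean under normal operation. Returns False when the reading
    deviates far more than the historically observed spread."""
    expected = mean(cluster_readings)
    sigma = pstdev(history_residuals) or 1e-9   # guard against zero spread
    return abs(reading - expected) <= k * sigma

# A spoofed constant value eventually violates cluster consistency:
ok = plausible(42.0, [17.1, 16.8, 17.4], [0.2, -0.1, 0.3, 0.0])  # False

The laws of physics do the heavy lifting here: related sensors in the same process cannot drift apart arbitrarily, which is exactly what a naive signal-spoofing attack ignores.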

 

Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry; “Formal Methods for Semi-Autonomous Driving,” DAC ’15, Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 148. doi:10.1145/2744769.2747927
Abstract: We give an overview of the main challenges in the specification, design, and verification of human cyber-physical systems, with a special focus on semi-autonomous vehicles. We identify unique characteristics of formal modeling, specification, verification and synthesis in this domain. Some initial results and design principles are presented along with directions for future work.
Keywords: automotive systems, control, cyber-physical systems, formal verification, learning, semi-autonomous driving, synthesis (ID#: 15-7027)
URL: http://doi.acm.org/10.1145/2744769.2747927

 

Christoph Schmittner, Zhendong Ma, Erwin Schoitsch, Thomas Gruber; “A Case Study of FMVEA and CHASSIS as Safety and Security Co-Analysis Method for Automotive Cyber-Physical Systems,” CPSS’15, Proceedings of the 1st ACM Workshop on Cyber-Physical System Security, April 2015, Pages 69–80. doi:10.1145/2732198.2732204
Abstract: The increasing integration of computational components and physical systems creates cyber-physical systems, which provide new capabilities and possibilities for humans to control and interact with physical machines. However, the correlation of events in cyberspace and the physical world also poses new safety and security challenges. This calls for holistic approaches to safety and security analysis for the identification of safety failures and security threats and a better understanding of their interplay. This paper presents the application of two promising methods, i.e. Failure Mode, Vulnerabilities and Effects Analysis (FMVEA) and Combined Harm Assessment of Safety and Security for Information Systems (CHASSIS), to a case study of safety and security co-analysis of cyber-physical systems in the automotive domain. We present the comparison, discuss their applicability, and identify future research needs.
Keywords: automotive, cyber-physical system, safety and security co-analysis, systems engineering (ID#: 15-7028)
URL: http://doi.acm.org/10.1145/2732198.2732204

 

Rafael Capilla, Mike Hinchey, Francisco J. Díaz; “Collaborative Context Features for Critical Systems,” VaMoS ’15, Proceedings of the Ninth International Workshop on Variability Modelling of Software-intensive Systems, January 2015, Pages 43–51. doi:10.1145/2701319.2701322
Abstract: Feature models and their extensions have been proposed and used over the past 20 years for modeling the commonality and variability of software systems. However, the increasing runtime demands and post-deployment configuration procedures of self-adaptive, context-aware and pervasive systems have brought the need for modeling context features. In addition, many critical systems that demand stringent collaborative features at runtime also need to share information dynamically. In this research-in-progress paper, we sketch our vision of where feature modeling should go to support collaborative aspects of systems. Our proposal suggests identifying and annotating context feature models with collaborative information that becomes particularly useful for critical and swarm-based systems that require information exchange at runtime.
Keywords: Feature modeling, adaptation, context features, context-aware systems, runtime (ID#: 15-7029)
URL: http://doi.acm.org/10.1145/2701319.2701322

 

Robert K. Abercrombie, Frederick T. Sheldon, Bob G. Schlicher; “Risk and Vulnerability Assessment Using Cybernomic Computational Models: Tailored for Industrial Control Systems,” CISR ’15, Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 18. doi:10.1145/2746266.2746284
Abstract: In cybersecurity, there are many economic factors to weigh. This paper considers the defender-practitioner stakeholder points of view that involve cost combined with development and deployment considerations. Some examples include the cost of countermeasures, training and maintenance, as well as the lost opportunity cost and actual damages associated with a compromise. The return on investment (ROI) from countermeasures comes from saved impact costs (i.e., losses from violating availability, integrity, confidentiality or privacy requirements). A measured approach that informs cybersecurity practice is pursued toward maximizing ROI. To this end, for example, ranking threats based on their potential impact focuses security mitigation and control investments on the highest-value assets, which represent the greatest potential losses. The traditional approach uses risk exposure (calculated by multiplying risk probability by impact). To address this issue in terms of security economics, we introduce the notion of Cybernomics. Cybernomics considers the costs and benefits to the attacker and defender to estimate risk exposure. As a first step, we discuss the likelihood that a threat will emerge, whether it can be thwarted, and, if not, what the cost will be (both tangible and intangible losses). This impact assessment can provide key information for ranking cybersecurity threats and managing risk.
Keywords: Availability, Dependability, Integrity, Security Measures/Metrics, Security Requirements, Threats and Vulnerabilities (ID#: 15-7030)
URL: http://doi.acm.org/10.1145/2746266.2746284
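
The exposure and ROI arithmetic the abstract describes can be made concrete in a few lines; the numbers below are invented purely for illustration:

def risk_exposure(probability, impact):
    return probability * impact           # traditional exposure metric

def countermeasure_roi(saved_impact, residual_impact, cost):
    """ROI from a countermeasure: losses avoided, net of the residual
    loss that remains, relative to what the countermeasure costs."""
    benefit = saved_impact - residual_impact
    return (benefit - cost) / cost

exposure = risk_exposure(0.05, 2_000_000)       # 5% chance of a $2M loss
roi = countermeasure_roi(saved_impact=100_000,  # expected loss without control
                         residual_impact=20_000,
                         cost=30_000)            # roi ~ 1.67

Ranking threats by exposure, as the abstract suggests, simply means sorting on risk_exposure() so that spending concentrates on the largest expected losses first.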

 

Antonio Filieri, Henry Hoffmann, Martina Maggio; “Automated Multi-Objective Control for Self-Adaptive Software Design,” ESEC/FSE 2015, Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, August 2015, Pages 13–24. doi:10.1145/2786805.2786833
Abstract: While software is becoming more complex every day, the requirements on its behavior are not getting any easier to satisfy. An application should offer a certain quality of service, adapt to the current environmental conditions and withstand runtime variations that were simply unpredictable during the design phase. To tackle this complexity, control theory has been proposed as a technique for managing software’s dynamic behavior, obviating the need for human intervention. Control-theoretical solutions, however, are either tailored to a specific application or do not handle the complexity of multiple interacting components and multiple goals. In this paper, we develop an automated control synthesis methodology that takes, as input, the configurable software components (or knobs) and the goals to be achieved. Our approach automatically constructs a control system that manages the specified knobs and guarantees the goals are met. These claims are backed up by experimental studies on three different software applications, where we show how the proposed automated approach handles the complexity of multiple knobs and objectives.
Keywords: Adaptive software, control theory, dynamic systems, non-functional requirements, run-time verification (ID#: 15-7031)
URL: http://doi.acm.org/10.1145/2786805.2786833
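
A single-knob, single-goal instance of the control-theoretic idea can be sketched as follows; the paper synthesizes controllers for multiple interacting knobs and goals automatically, so this integral controller and its toy latency model (both assumptions, not the authors' setup) only convey the flavor:

def integral_controller(measure, setpoint, gain=0.1, steps=50):
    """Drive one software knob so the measured goal converges to its
    setpoint by accumulating the observed error."""
    knob = 0.0
    for _ in range(steps):
        error = measure(knob) - setpoint   # positive: latency still too high
        knob += gain * error               # raise the knob to reduce latency
    return knob

latency = lambda knob: 100.0 - 5.0 * knob  # toy plant: more resources, less latency
knob = integral_controller(latency, setpoint=60.0)   # converges near 8.0

The synthesis problem the paper solves is choosing such gains (and coordinating many loops at once) automatically, with formal guarantees, rather than by hand-tuning.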

 

Chieh-Jan Mike Liang, Börje F. Karlsson, Nicholas D. Lane, Feng Zhao, Junbei Zhang, Zheyi Pan, Zhao Li, Yong Yu; “SIFT: Building an Internet of Safe Things,” IPSN’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 298–309. doi:10.1145/2737095.2737115
Abstract: As the number of connected devices explodes, the use scenarios of these devices and data have multiplied. Many of these scenarios, e.g., home automation, require tools beyond data visualizations, to express user intents and to ensure interactions do not cause undesired effects in the physical world. We present SIFT, a safety-centric programming platform for connected devices in IoT environments. First, to simplify programming, users express high-level intents in declarative IoT apps. The system then decides which sensor data and operations should be combined to satisfy the user requirements. Second, to ensure safety and compliance, the system verifies whether conflicts or policy violations can occur within or between apps. Through an office deployment, user studies, and trace analysis using a large-scale dataset from a commercial IoT app authoring platform, we demonstrate the power of SIFT and highlight how it leads to more robust and reliable IoT apps.
Keywords: (not provided) (ID#: 15-7032)
URL: http://doi.acm.org/10.1145/2737095.2737115
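
One of the safety checks the abstract alludes to, detecting apps that can command the same actuator to contradictory states under overlapping conditions, can be sketched minimally. The rule format below is invented for illustration, not SIFT's actual app language:

def conflicts(rule_a, rule_b):
    """Rules: (condition_range, actuator, command), where condition_range
    is a (low, high) interval over one shared sensor value. Two rules
    conflict if their conditions can hold simultaneously while they issue
    different commands to the same actuator."""
    (lo_a, hi_a), act_a, cmd_a = rule_a
    (lo_b, hi_b), act_b, cmd_b = rule_b
    overlap = max(lo_a, lo_b) <= min(hi_a, hi_b)
    return overlap and act_a == act_b and cmd_a != cmd_b

heat_on  = ((0, 18), "heater", "on")    # comfort app: heat when cold
heat_off = ((15, 40), "heater", "off")  # energy app: heater off when mild
assert conflicts(heat_on, heat_off)     # both can fire between 15 and 18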

 

Aron Laszka, Yevgeniy Vorobeychik, Xenofon Koutsoukos; “Integrity Assurance in Resource-Bounded Systems through Stochastic Message Authentication,” HotSoS ’15, Proceedings of the 2015 Symposium and Bootcamp on the Science of Security, April 2015, Article No. 1. doi:10.1145/2746194.2746195
Abstract: Assuring communication integrity is a central problem in security. However, overhead costs associated with cryptographic primitives used towards this end introduce significant practical implementation challenges for resource-bounded systems, such as cyber-physical systems. For example, many control systems are built on legacy components which are computationally limited but have strict timing constraints. If integrity protection is a binary decision, it may simply be infeasible to introduce into such systems; without it, however, an adversary can forge malicious messages, which can cause significant physical or financial harm. We propose a formal game-theoretic framework for optimal stochastic message authentication, providing provable integrity guarantees for resource-bounded systems based on an existing MAC scheme. We use our framework to investigate attacker deterrence, as well as optimal design of stochastic message authentication schemes when deterrence is impossible. Finally, we provide experimental results on the computational performance of our framework in practice.
Keywords: economics of security, game theory, message authentication (ID#: 15-7033)
URL: http://doi.acm.org/10.1145/2746194.2746195
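
A minimal sketch of stochastic message authentication, assuming HMAC-SHA256 as the underlying MAC scheme (the paper builds on an existing MAC and derives the verification probability game-theoretically rather than fixing it as below):

import hashlib
import hmac
import random

KEY = b"shared-secret"   # illustrative key; real systems provision keys properly

def tag(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def receive(message: bytes, received_tag: bytes, p: float) -> bool:
    """Verify the tag only with probability p, so a resource-bounded
    receiver pays only a fraction p of the verification cost."""
    if random.random() < p:
        return hmac.compare_digest(tag(message), received_tag)
    return True   # accept unverified; a forgery is still caught w.p. p per message

msg = b"valve=OPEN"
accepted = receive(msg, tag(msg), p=0.25)

The deterrence argument in the paper is that even a modest p makes sustained forgery campaigns unprofitable, since each injected message risks detection.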

 

Doris Aschenbrenner, Michael Fritscher, Felix Sittner, Klaus Schilling; “Design Process for User Interaction with Robotic Manipulators in Industrial Internet Applications,” SIGDOC ’15, Proceedings of the 33rd Annual International Conference on the Design of Communication, July 2015, Article No. 18. doi:10.1145/2775441.2775474
Abstract: In this paper we want to share our experiences in developing a new telemaintenance system for industrial robots in an active production environment. This has been achieved within a three-year research project. In this article we describe the design methods we have used and our evaluation approaches.  The challenge of developing user interfaces for those prototypes lies in the special requirements of the industrial work domain. Highly sophisticated technical tasks need to be carried out under time pressure and in a noisy environment. The human-machine interaction of the remote tasks is especially difficult. There is no prior experience with those remote tasks, as they are only possible with the developed technology.  The scope of the paper lies in the design process, not in the evaluation results, which will be published separately.
Keywords: design process, experience report, human robot interaction, industrial internet, industrial robotics, maintenance, telemaintenance, telematics (ID#: 15-7034)
URL: http://doi.acm.org/10.1145/2775441.2775474

 

Shiguang Wang, Lu Su, Shen Li, Shaohan Hu, Tanvir Amin, Hongwei Wang, Shuochao Yao, Lance Kaplan, Tarek Abdelzaher; “Scalable Social Sensing of Interdependent Phenomena,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 202–213. doi:10.1145/2737095.2737114
Abstract: The proliferation of mobile sensing and communication devices in the possession of the average individual has generated much recent interest in social sensing applications. Significant advances have been made on the problem of uncovering ground truth from observations made by participants of unknown reliability. The problem, also called fact-finding, commonly arises in applications where unvetted individuals may opt in to report phenomena of interest. For example, the reliability of individuals might be unknown when they can join a participatory sensing campaign simply by downloading a smartphone app. This paper extends past social sensing literature by offering a scalable approach for exploiting dependencies between observed variables to increase fact-finding accuracy. Prior work assumed that reported facts are independent, or incurred exponential complexity when dependencies were present. In contrast, this paper presents the first scalable approach for accommodating dependency graphs between observed states. The approach is tested using real-life data collected in the aftermath of hurricane Sandy on the availability of gas, food, and medical supplies, as well as extensive simulations. Evaluation shows that combining expected correlation graphs (of outages) with reported observations of unknown reliability results in a much more reliable reconstruction of ground truth from the noisy social sensing data. We also show that correlation graphs can help test hypotheses regarding underlying causes, when different hypotheses are associated with different correlation patterns. For example, an observed outage profile can be attributed to a supplier outage or to excessive local demand. The two differ in expected correlations in observed outages, enabling joint identification of both the actual outages and their underlying causes.
Keywords: data reliability, expectation maximization, maximum likelihood estimators, social sensing (ID#: 15-7035)
URL: http://doi.acm.org/10.1145/2737095.2737114
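
For readers unfamiliar with fact-finding, the following simplified sketch shows the basic credibility-propagation loop between sources and claims. It omits the paper's maximum-likelihood formulation and its dependency graphs entirely, so treat it only as background intuition, with all data invented:

def fact_find(reports, iterations=20):
    """reports: dict source -> set of claims asserted by that source.
    Iteratively re-estimate claim belief from source trust and vice versa
    until the estimates stabilize."""
    claims = {c for cs in reports.values() for c in cs}
    trust = {s: 0.5 for s in reports}
    belief = {c: 0.5 for c in claims}
    for _ in range(iterations):
        for c in claims:   # a claim is believable if trusted sources assert it
            backers = [trust[s] for s, cs in reports.items() if c in cs]
            belief[c] = sum(backers) / len(reports)
        for s in reports:  # a source is trusted if its claims are believed
            belief_sum = sum(belief[c] for c in reports[s])
            trust[s] = belief_sum / max(len(reports[s]), 1)
    return belief, trust

reports = {"alice": {"gas station A out"},
           "bob": {"gas station A out", "pharmacy B out"},
           "carol": {"pharmacy B out"}}
belief, trust = fact_find(reports)

The paper's contribution is handling the case where claims are correlated (e.g., outages on the same supply chain), which this independent-claim loop cannot capture.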

 

Anthony J. Clark, Philip K. McKinley, Xiaobo Tan; “Enhancing a Model-Free Adaptive Controller through Evolutionary Computation,” GECCO ’15, Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, July 2015, Pages 137–144. doi:10.1145/2739480.2754762
Abstract: Many robotic systems experience fluctuating dynamics during their lifetime. Variations can be attributed in part to material degradation and decay of mechanical hardware. One approach to mitigating these problems is to utilize an adaptive controller. For example, in model-free adaptive control (MFAC) a controller learns how to drive a system by continually updating link weights of an artificial neural network (ANN). However, determining the optimal control parameters for MFAC, including the structure of the underlying ANN, is a challenging process. In this paper we investigate how to enhance the online adaptability of MFAC-based systems through computational evolution. We apply the proposed methods to a simulated robotic fish propelled by a flexible caudal fin. Results demonstrate that the robot is able to effectively respond to changing fin characteristics and varying control signals when using an evolved MFAC controller. Notably, the system is able to adapt to characteristics not encountered during evolution. The proposed technique is general and can be applied to improve the adaptability of other cyber-physical systems.
Keywords: adaptive control, differential evolution, flexible materials, model-free control, robotic fish (ID#: 15-7036)
URL: http://doi.acm.org/10.1145/2739480.2754762
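
The evolutionary layer can be illustrated with a standard differential evolution loop searching controller parameters against a fitness function. This simplified DE/rand/1 sketch (fitness, bounds, and constants all invented; not the authors' exact configuration) shows the mechanism:

import random

def differential_evolution(fitness, dim, pop_size=20, F=0.8, CR=0.9,
                           generations=100, bounds=(-1.0, 1.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d])               # mutation
                     if random.random() < CR else pop[i][d]  # crossover
                     for d in range(dim)]
            s = fitness(trial)
            if s < scores[i]:                               # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Toy fitness: in the paper this would be tracking error of the simulated fish.
params, err = differential_evolution(lambda x: sum(v * v for v in x), dim=3)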

 

Assaad Moawad, Thomas Hartmann, Francois Fouquet, Jacques Klein, Yves Le Traon; “Adaptive Blurring of Sensor Data to Balance Privacy and Utility for Ubiquitous Services,” SAC ’15, Proceedings of the 30th Annual ACM Symposium on Applied Computing, April 2015, Pages 2271–2278. doi:10.1145/2695664.2695855
Abstract: Given the trend towards mobile computing, the next generation of ubiquitous “smart” services will have to continuously analyze surrounding sensor data. More than ever, such services will rely on data potentially related to personal activities to perform their tasks, e.g. to predict urban traffic or local weather conditions. However, revealing personal data inevitably entails privacy risks, especially when data is shared with high precision and frequency. For example, by analyzing precise electric consumption data, it can be inferred whether a person is currently at home; however, this same data can empower new services such as a smart heating system. Access control (forbid or grant access) and anonymization techniques are not able to deal with such a trade-off because they either completely prohibit access to data or lose source traceability. Blurring techniques, by tuning data quality, offer a wide range of trade-offs between privacy and utility for services. However, the number of ubiquitous services and their data quality requirements lead to an explosion of possible configurations of blurring algorithms. To manage this complexity, in this paper we propose a platform that automatically adapts (at runtime) blurring components between data owners and data consumers (services). The platform searches for the optimal trade-off between service utility and privacy risks using multi-objective evolutionary algorithms to adapt the underlying communication platform. We evaluate our approach on a sensor network gateway and show its suitability in terms of (i) effectiveness in finding an appropriate solution, and (ii) efficiency and scalability.
Keywords: blurring, component-based architecture, optimization, privacy, sensors, software-platform, trade-off (ID#: 15-7037)
URL: http://doi.acm.org/10.1145/2695664.2695855
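
One concrete blurring component, quantization of sensor readings, illustrates the privacy/utility dial the platform tunes. The step sizes and sample data below are assumptions, not the paper's configuration:

def blur(readings, step):
    """Quantize readings to multiples of `step`; larger steps reveal less
    fine-grained activity (more privacy) at the cost of utility."""
    return [round(r / step) * step for r in readings]

consumption = [0.12, 0.13, 0.95, 1.02, 0.11]         # kWh samples
for_weather_service = blur(consumption, step=0.5)    # coarse: [0.0, 0.0, 1.0, 1.0, 0.0]
for_smart_heating = blur(consumption, step=0.05)     # finer, higher privacy risk

The multi-objective search in the paper is, in effect, choosing such a step per consumer so that each service still meets its quality requirement with the least personal detail exposed.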

 

Kutalmış Akpınar, Kien A. Hua, Kai Li; “ThingStore: A Platform for Internet-of-Things Application Development and Deployment,” DEBS ’15, Proceedings of the 9th ACM International Conference on Distributed Event-Based Systems, June 2015, Pages 162–173. doi:10.1145/2675743.2771833
Abstract: An advanced app-store concept, called ThingStore, is introduced in this paper. It provides a “market place” environment to facilitate collaboration on Internet-of-Things (IoT) application development, and a platform to host their deployment. ThingStore serves three categories of users: (1) Thing Provider — “Things” (such as online cameras and sensors) can be made more intelligent through event detection software routines called smart services. A thing provider may deploy “things” and advertise their smart services at the ThingStore market place. (2) Software Developer — Software developers can develop apps that query relevant smart services using EQL (Event Query Language), much like the way traditional database applications are conveniently developed atop a standard database management system today. (3) End User — An end user may subscribe to a particular app for event notification and management. In this IoT architecture, ThingStore is a computation hub that links together humans, “things,” and computer software in a cyber-physical lifecycle to enable the fusion of human and machine intelligence to accomplish a common goal. Not only humans, but also “things,” may adjust the physical world. New changes in the physical world may, in turn, incur new event detections and therefore initiate another cycle of this ecology-inspired computational lifecycle.
Keywords: ThingStore, complex event processing, data stream processing, event query language, internet of things, service-oriented architecture (ID#: 15-7038)
URL: http://doi.acm.org/10.1145/2675743.2771833

 

Manuel Oriol, Jan Carlson, Michael Wahler; “SANCS 2015: 1st International Workshop on Software Architectures for Next-Generation Cyber-Physical Systems,” ECSAW ’15, Proceedings of the 2015 European Conference on Software Architecture Workshops, September 2015, Article No. 14. doi:10.1145/2797433.2797447
Abstract: Cyber-physical systems have become complex and pervasive over time. They evolved from simple, single-task systems to systems with a large set of functionalities, connected to the Internet, distributed, multi-core, and with user-centric intuitive interfaces. Such an evolution advocates for better software architecture adapted to such systems. The SANCS 2015 workshop aims at gathering both practitioners and researchers on these topics to explore the next generation of cyber-physical systems.
Keywords: (not provided) (ID#: 15-7039)
URL: http://doi.acm.org/10.1145/2797433.2797447

 

Reza Matinnejad, Shiva Nejati, Lionel C. Briand, Thomas Bruckmann; “Effective Test Suites for Mixed Discrete-Continuous Stateflow Controllers,” ESEC/FSE 2015, Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, August 2015, Pages 84–95. doi:10.1145/2786805.2786818
Abstract: Modeling mixed discrete-continuous controllers using Stateflow is common practice and has a long tradition in the embedded software system industry. Testing Stateflow models is complicated by expensive and manual test oracles that are not amenable to full automation due to the complex continuous behaviors of such models. In this paper, we reduce the cost of manual test oracles by providing test case selection algorithms that help engineers develop small test suites with high fault revealing power for Stateflow models. We present six test selection algorithms for discrete-continuous Stateflows: An adaptive random test selection algorithm that diversifies test inputs, two white-box coverage-based algorithms, a black-box algorithm that diversifies test outputs, and two search-based black-box algorithms that aim to maximize the likelihood of presence of continuous output failure patterns. We evaluate and compare our test selection algorithms, and find that our three output-based algorithms consistently outperform the coverage- and input-based algorithms in revealing faults in discrete-continuous Stateflow models. Further, we show that our output-based algorithms are complementary as the two search-based algorithms perform best in revealing specific failures with small test suites, while the output diversity algorithm is able to identify different failure types better than other algorithms when test suites are above a certain size.
Keywords: Stateflow testing, failure-based testing, mixed discrete-continuous behaviors, output diversity, structural coverage (ID#: 15-7040)
URL: http://doi.acm.org/10.1145/2786805.2786818 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Deep Packet Inspection 2014

 

 
SoS Logo

Deep Packet Inspection

2014


Deep Packet Inspection offers providers a new range of use cases, some with the potential to eavesdrop on non-public communication. Current research is almost exclusively concerned with raising this capability on a technological level, but critics question it with regard to privacy, net neutrality, and other implications. These latter issues are not being raised within research communities so much as by politically interested groups. The research cited here was presented in 2014.



Najam, M.; Younis, U.; Rasool, R.U., “Multi-byte Pattern Matching Using Stride-K DFA for High Speed Deep Packet Inspection,” Computational Science and Engineering (CSE), 2014 IEEE 17th International Conference on, vol., no., pp. 547, 553, 19–21 Dec. 2014. doi:10.1109/CSE.2014.125
Abstract: Deep packet inspection (DPI) is one of the crucial tasks in modern intrusion detection and intrusion prevention systems. It allows the inspection of packet payload using patterns. Modern DPI based systems use regular expressions to define these patterns. Deterministic finite automata (DFA) are considered an ideal choice for performing regular expression matching due to their O(1) processing complexity. However, DFAs consume large amounts of memory to store their state transition tables, and this problem worsens when the stride level of the DFA is increased. Although increasing the stride level significantly speeds up the matching engine, it consumes large amounts of memory as a trade-off. In this paper, we present stride-k speculative parallel pattern matching (SPPM), a technique in which a packet is first split into two chunks and then multiple bytes per chunk are inspected at a time using a stride-k DFA. Furthermore, we present a stride-k DFA compression technique using an alphabet compression table (ACT) to reduce the memory requirements of the stride-k DFA. We have implemented the single-threaded algorithm for stride-2 SPPM. Results show that the use of stride-2 SPPM can increase the overall pattern matching speed by up to 30% compared to traditional DFA matching, with a significant reduction of over 70% in the number of iterations required for packet processing. Secondly, over 65% reduction in the number of transitions has been achieved using ACT for the stride-2 DFA implementation.
Keywords: computational complexity; deterministic automata; finite automata; pattern matching; security of data; ACT; alphabet compression table; deterministic finite automata; high speed deep packet inspection; intrusion detection system; intrusion prevention system; multibyte pattern matching; processing complexity; regular expression matching; stride-2 SPPM; stride-k DFA compression technique; stride-k speculative parallel pattern matching; Automata; Educational institutions; Indexes; Inspection; Memory management; Parallel processing; Pattern matching; DFA; alphabet compression; deep packet inspection; multi-byte matching; regular expressions (ID#: 15-6697)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023635&isnumber=7023510
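
The stride-2 idea can be sketched directly: precompute transitions over byte pairs so each step consumes two input bytes, at the price of a much larger transition table (the blow-up the paper's alphabet compression then attacks). A minimal Python model, with an invented two-state example DFA:

def make_stride2(delta1, states):
    """delta1: dict (state, byte) -> state for the ordinary stride-1 DFA.
    The stride-2 table has |states| * 256 * 256 entries, which illustrates
    the memory blow-up that alphabet compression (ACT) is meant to reduce."""
    return {(s, (x, y)): delta1[(delta1[(s, x)], y)]
            for s in states for x in range(256) for y in range(256)}

def run(delta2, delta1, start, data: bytes):
    s = start
    for i in range(0, len(data) - 1, 2):   # one lookup per two input bytes
        s = delta2[(s, (data[i], data[i + 1]))]
    if len(data) % 2:                       # odd-length tail: one stride-1 step
        s = delta1[(s, data[-1])]
    return s

# Tiny DFA: move to state 1 once byte 'a' has been seen (state 1 is sticky).
delta1 = {}
for b in range(256):
    delta1[(0, b)] = 1 if b == ord("a") else 0
    delta1[(1, b)] = 1
delta2 = make_stride2(delta1, states=[0, 1])
assert run(delta2, delta1, 0, b"xxaxx") == 1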

 

Jayashree, S.; Shivashankarappa, N., “Deep Packet Inspection Using Ternary Content Addressable Memory,” Circuits, Communication, Control and Computing (I4C), 2014 International Conference on, vol., no., pp. 441, 447, 21–22 Nov. 2014. doi:10.1109/CIMCA.2014.7057841
Abstract: With increasing Internet service complexity, providing secure, quality service has become a major concern. In earlier systems, data was believed to travel the Internet safely without being intercepted. These Internet vulnerabilities can no longer be ignored, as the weaknesses are used by many to carry out malicious activities. In order to tackle these problems, Internet service providers are trying to find better options. One such technique gaining popularity in the recent decade is Deep Packet Inspection (DPI), which can be provided using software or hardware methods; hardware is reported to provide a better solution than software. In this review article, we introduce one such hardware approach, Ternary Content Addressable Memory (TCAM), which can perform complete packet inspection (packet header and payload inspection). In the first section, we focus on the evolution of packet filtering systems. The later discussion is divided into two parts: (i) packet classification using TCAM, and (ii) payload inspection (pattern matching and regular expressions) using TCAM.
Keywords: content-addressable storage; inspection; pattern matching; TCAM; deep packet inspection; packet classification; packet filtering system; packet header; pattern matching; payload inspection; regular expression; ternary content addressable memory; Classification algorithms; Hardware; IP networks; Indexes; Inspection; Pattern matching; Payloads; DPI; Multi-Match Packet Classification; Pattern matching; Regular Expression; TCAM (ID#: 15-6698)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057841&isnumber=7057738
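
A software model of the TCAM matching primitive makes the "ternary" part concrete: each entry has a value and a mask, masked-out bits are don't-cares, and the first (highest-priority) matching entry wins; in hardware, all entries are compared in parallel in a single cycle. The rules below are invented examples:

def tcam_lookup(key: int, entries):
    """entries: ordered list of (value, mask, action); return the action
    of the first entry whose cared-about bits equal the key's bits."""
    for value, mask, action in entries:
        if key & mask == value & mask:
            return action
    return "default"

rules = [
    (0b1010_0000, 0b1111_0000, "drop"),     # high nibble 1010, low nibble don't-care
    (0b0000_0000, 0b0000_0000, "forward"),  # catch-all: every bit don't-care
]
assert tcam_lookup(0b1010_0110, rules) == "drop"
assert tcam_lookup(0b0110_0110, rules) == "forward"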

 

Shankar, S.S.; Lin PinXing; Herkersdorf, A., “Deep Packet Inspection in Residential Gateways and Routers: Issues and Challenges,” Integrated Circuits (ISIC), 2014 14th International Symposium on, vol., no., pp. 560, 563, 10-12 Dec. 2014. doi:10.1109/ISICIR.2014.7029481
Abstract: Several industry trends and new applications have brought the residential gateway router (RGR) to the center of the digital home, with direct connectivity to the service provider’s network. Increasing risks of network attacks have necessitated deep packet inspection in the network processor (NP) used by the RGR to match traffic at multi-gigabit throughput. Traditional deep packet inspection (DPI) implementations primarily focus on end hosts such as servers and personal or handheld computers. Existing DPI signature matching techniques cannot be directly implemented in the RGR due to various issues and challenges pertaining to the processing capacity of the NP and the associated memory constraints. Therefore, four key factors (regular expression support, gigabit throughput, scalability, and ease of signature updates) are proposed, through which the best signature matching system can be designed for efficient DPI implementation in the RGR.
Keywords: computer network security; digital signatures; internetworking; telecommunication network routing; telecommunication traffic; DPI implementation; DPI signature matching techniques; NP processing capacity; RGR; deep-packet inspection; digital home; ease-of-signature update factor; gigabit throughput factor; memory constraints; network attack risks; network processor; network traffic; regular expression support factor; residential gateway router; scalability factor; service provider network; Algorithm design and analysis; Automata; Inspection; Memory management; Pattern matching; Software; Throughput; Deep Packet Inspection; Network Security; Regular Expressions; Residential Gateway and Router (ID#: 15-6699)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029481&isnumber=7029433

 

Parvat, T.J.; Chandra, P., “Performance Improvement of Deep Packet Inspection for Intrusion Detection,” Wireless Computing and Networking (GCWCN), 2014 IEEE Global Conference on, vol., no., pp. 224, 228, 22–24 Dec. 2014. doi:10.1109/GCWCN.2014.7030883
Abstract: Development in anomaly and misuse detection has become crucial this decade as web services grow vast. Managing a secure network is a challenge today, and objectives vary according to the infrastructure management and security policy. There are various ways to perform stateful packet inspection and Deep Packet Inspection (DPI); DPI is used to identify payload traffic and to support network security, privacy, and QoS. The functions of DPI include protocol detection, anti-virus, anti-malware, and Intrusion Detection System (IDS) support. The detection engine may be supported by signatures or heuristics. Most of the algorithms perform both training and testing, which takes approximately double the time. The paper suggests a new model to improve the performance of an intrusion detection system by using in/out-based attributes of records. It takes comparatively less time and achieves better accuracy than the existing classifiers.
Keywords: computer network management; computer network performance evaluation; computer network security; computer viruses; data privacy; program testing; protocols; quality of service; DPI; IDS; QoS; Web services; anomaly detection; anti-malware; anti-virus; deep packet inspection; heuristics; in/out based attributes; infrastructure management; intrusion detection system; misuse detection; network privacy; network security; payload traffic; protocol detection; secure network management; security policy; Accuracy; Computational modeling; Hidden Markov models; Inspection; Intrusion detection; Training; Accuracy; Deep Packet Inspection; Intrusion Detection; Performance; Security (ID#: 15-6700)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030883&isnumber=7030833

 

Yunchun Li; Rong Fu, “An Parallelized Deep Packet Inspection Design in Software Defined Network,” Information Technology and Electronic Commerce (ICITEC), 2014 2nd International Conference on, vol., no., pp. 6, 10, 20-21 Dec. 2014. doi:10.1109/ICITEC.2014.7105560
Abstract: Deep packet inspection (DPI) is a key technology in software defined networking (SDN) which can centralize network policy control and accelerate packet transmission. In this paper, we propose a new SDN architecture with a DPI module. Based on the centralization idea of SDN, we deploy a parallel DPI module at the control layer. We present the DPI interface in the SDN controller and discuss an OpenFlow protocol extension. Parallelizing the DPI algorithm effectively reduces the time for detecting packets and sending flow tables. We also describe an Adaptive Highest Random Weight scheme with additional feedback corresponding to queue length and string-matching length at each processor. The original Highest Random Weight (HRW) hash ensures connection locality. Treating all tasks with the same weight merely balances the workload across the number of different tasks. By adding an adjustment multiplier, combined with the characteristics of the fixed hash function, the system can allocate resources dynamically and achieve connection-level parallelism while accounting for per-packet processing time.
Keywords: parallel processing; protocols; software defined networking; string matching; DPI module interface; HRW hash function; OpenFlow protocol extension; SDN architecture; adaptive highest random weight; centralize network policy control; connection-level parallelism; packet transmission acceleration; parallelized deep packet inspection design; queue length; resource allocation; software defined network; string length; Algorithm design and analysis; Computer architecture; Pattern matching; Servers; Software algorithms; Switches; Throughput; Adaptive Highest Random Weight; Deep Packet Inspection; SDN Controller; Software Defined Network (ID#: 15-6701)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7105560&isnumber=7105555
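
The HRW building block can be sketched as weighted rendezvous hashing. The simple multiplicative weighting below is an illustrative variant (implementations may use other weighting formulas), and the feedback loop that adapts the weights from queue lengths is left out:

import hashlib

def hrw_pick(flow_id: str, processors, weights):
    """Return the processor with the highest weighted hash score for this
    flow; the mapping is stable until the weights change, which preserves
    connection locality (packets of one flow keep hitting one processor)."""
    def score(p):
        digest = hashlib.sha256(f"{flow_id}:{p}".encode()).digest()
        r = int.from_bytes(digest[:8], "big") / 2.0**64   # uniform in [0, 1)
        return weights[p] * r
    return max(processors, key=score)

weights = {"cpu0": 1.0, "cpu1": 1.0, "cpu2": 0.5}   # cpu2 reports a long queue
target = hrw_pick("10.0.0.1:52344->93.184.216.34:443",
                  ["cpu0", "cpu1", "cpu2"], weights)

Lowering a processor's weight steers new flows away from it without remapping existing flows wholesale, which is the property the adaptive scheme exploits.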

 

Niang, B., “Bandwidth Management — A Deep Packet Inspection Mathematical Model,” Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2014 6th International Congress on, vol., no., pp. 169, 175, 6–8 Oct. 2014. doi:10.1109/ICUMT.2014.7002098
Abstract: New technologies and services provided by Internet Service Providers (M2M, clouds, online games, HD video) lead to constant growth of traffic per subscriber. This forces operators to increase network capacity even for processing traffic generated by existing subscribers. In an overloaded radio network (a common situation in large cities), subscribers often do not obtain the bandwidth guaranteed by their contracts. Deep Packet Inspection helps the operator distinguish the type of service in aggregate traffic and assign a bitrate to each service separately. Our research is based on proposed mathematical models for calculating different flow characteristics and the number of tasks handled by a very high-speed system dealing with Gigabit Ethernet. Deep Packet Inspection is a software-based engine; it analyzes real-time traffic, enforcing rules and instructions received from another network component named the Policy Controller and Rule Function. The software therefore deals with different kinds of traffic such as web-based content, social network traffic, peer-to-peer, streaming, IPTV, and voice.
Keywords: Internet; bandwidth allocation; local area networks; radio networks; telecommunication traffic; Giga Ethernet; bandwidth management; bitrate assignment; deep packet inspection mathematical model; flow characteristic; internet service provider; mathematical model; network capacity; policy controller; radio network; real-time traffic; rule function; software based engine; task handling; traffic growth; Control systems; Hardware; Mathematical model; Protocols; Servers; Software; Telecommunications; Bandwidth; DPI; Mathematical models; Mobile Networks; Traffic (ID#: 15-6702)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7002098&isnumber=7002065

 

Watson, B.W.; Blox, I., “Elastic Deep Packet Inspection,” Cyber Conflict (CyCon 2014), 2014 6th International Conference on, vol., no., pp. 241, 253, 3–6 June 2014. doi:10.1109/CYCON.2014.6916406
Abstract: Deep packet inspection (DPI) systems are required to perform at or near network line-rate speeds, matching thousands of rules against the network traffic. The engineering performance and price trade-offs are such that DPI is difficult to virtualize, either because of very high memory consumption or the use of custom hardware; similarly, a running DPI instance is difficult to ‘move’ cheaply to another part of the network. Algorithmic constraints make it costly to update the set of rules, even with minor edits. In this paper, we present Elastic DPI. Thanks to new algorithms and data-structures, all of these performance and flexibility constraints can be overcome — an important development in an increasingly virtualized network environment. The ability to incrementally update rule sets is also a potentially interesting use-case in next generation firewall appliances that rapidly update their rule sets.
Keywords: computer network security; data structures; inspection; telecommunication traffic; virtualisation; DPI systems; data structures; elastic DPI; elastic deep packet inspection; engineering performance; firewall appliances; flexibility constraints; network traffic; performance constraints; rule set updating; virtualized network environment; Engines; Hardware; Inspection; Memory management; Optimization; Sensors; Virtual machining; deep packet inspection (DPI); incremental defense; speed/memory performance (ID#: 15-6703)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916406&isnumber=6916383

 

Roth, C.; Schillinger, R., “Detectability of Deep Packet Inspection in Common Provider/Consumer Relations,” Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on, vol., no., pp. 283, 287, 1–5 Sept. 2014. doi:10.1109/DEXA.2014.64
Abstract: Payload examination using Deep Packet Inspection (DPI) offers (infrastructure) providers a whole new range of use cases, many of them with a potential to eavesdrop on non-public communication. Current research is almost exclusively concerned with raising these capabilities on a technological level. Critical voices about DPI’s impact on the Internet with regard to privacy, net neutrality, and its other implications are raised, however, often not within research communities but rather by politically interested groups. In fact, no definite method for detecting DPI is known. In this paper we present five different approaches targeting this problem. While starting points for DPI detection are given, including leakage of internal data or software errors, not all of the presented approaches can be simulated or verified, and none has so far been tested in real-world settings.
Keywords: Internet; DPI detection; Internet; deep packet inspection; internal data; payload examination; provider-consumer relations; software errors; IP networks; Inspection; Internet; Payloads; Protocols; Security; Software (ID#: 15-6704)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974863&isnumber=6974758

 

Deri, L.; Martinelli, M.; Bujlow, T.; Cardigliano, A., “nDPI: Open-Source High-Speed Deep Packet Inspection,” Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, vol., no., pp. 617, 622, 4–8 Aug. 2014. doi:10.1109/IWCMC.2014.6906427
Abstract: Network traffic analysis was traditionally limited to packet headers, because the transport protocol and application ports were usually sufficient to identify the application protocol. With the advent of port-independent, peer-to-peer, and encrypted protocols, the task of identifying application protocols became increasingly challenging, thus motivating the creation of tools and libraries for network protocol classification. This paper covers the design and implementation of nDPI, an open-source library for protocol classification using both packet header and payload. nDPI was extensively validated in various monitoring projects ranging from Linux kernel protocol classification to analysis of 10 Gbit traffic, reporting both high protocol detection accuracy and efficiency.
Keywords: Linux; cryptographic protocols; operating system kernels; peer-to-peer computing; telecommunication traffic; transport protocols; Linux kernel protocol classification; application protocol identification; encrypted protocols; monitoring projects; nDPI; network protocol classification; network traffic analysis; open-source high-speed deep packet inspection; open-source library; packet header; payload; peer-to-peer protocols; port-independent protocols; protocol detection accuracy; protocol detection efficiency; transport protocol; IP networks; Libraries; Monitoring; Open source software; Payloads; Ports (Computers); Protocols; Deep Packet Inspection; Passive traffic classification; network traffic monitoring (ID#: 15-6705)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906427&isnumber=6906315
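
The port-plus-payload classification the abstract motivates can be caricatured in a few lines. The signatures below are invented stand-ins, not nDPI's actual protocol dissectors:

# Toy illustration: prefer payload evidence, fall back to a port hint,
# mirroring the observation that ports alone no longer identify applications.
PORT_HINTS = {53: "dns", 443: "tls"}
PAYLOAD_SIGNATURES = [
    (b"GET ", "http"),
    (b"\x16\x03", "tls"),
    (b"BitTorrent protocol", "bittorrent"),
]

def classify(port: int, payload: bytes) -> str:
    for prefix, proto in PAYLOAD_SIGNATURES:   # payload evidence wins
        if payload.startswith(prefix):
            return proto
    return PORT_HINTS.get(port, "unknown")     # otherwise trust the port hint

assert classify(8080, b"GET /index.html HTTP/1.1") == "http"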

 

Luchaup, D.; De Carli, L.; Jha, S.; Bach, E., “Deep Packet Inspection with DFA-Trees and Parametrized Language Overapproximation,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 531, 539, April 27 2014–May 2 2014. doi:10.1109/INFOCOM.2014.6847977
Abstract: IPSs determine whether incoming traffic matches a database of vulnerability signatures defined as regular expressions. DFA representations are popular, but suffer from the state-explosion problem. We introduce a new matching structure: a tree of DFAs where the DFA associated with a node over-approximates those at its children, and the DFAs at the leaves represent the signature set. Matching works top-down, starting at the root of the tree and stopping at the first node whose DFA does not match. In the common case (benign traffic) matching does not reach the leaves. DFA-trees are built using Compact Overapproximate DFAs (CODFAs). A CODFA D’ for D over-approximates the language accepted by D, has a smaller number of states than D, and has a low false-match rate. Although built from approximate DFAs, DFA-trees perform exact matching faster than a commonly used method, have a low memory overhead and a guaranteed good worst case performance.
Keywords: computational complexity; deterministic automata; digital signatures; finite automata; formal languages; pattern matching; tree data structures; CODFAs; DFA-trees; IPSs; NP-hard problem; benign traffic matching; compact overapproximate DFAs; deep packet inspection; deterministic finite automata; intrusion prevention system; low false-match rate; low memory overhead; matching structure; parametrized language overapproximation; regular expressions; state-explosion problem; vulnerability signatures; Approximation error; Automata; Computers; Conferences; DH-HEMTs; Payloads; Training (ID#: 15-6706)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847977&isnumber=6847911
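
The top-down matching over a tree of automata can be sketched with Python's re module standing in for the DFAs (purely for illustration; the paper's CODFAs are specially constructed automata, not regexes). The key property is that a node's matcher accepts a superset of its children's languages, so benign traffic is rejected near the root and the exact leaf signatures run rarely:

import re

class Node:
    def __init__(self, pattern, children=(), signature=None):
        self.matcher = re.compile(pattern)
        self.children = children
        self.signature = signature            # set only at leaves

def match(node, payload):
    if not node.matcher.search(payload):      # over-approximation fails:
        return []                             # prune the whole subtree
    if node.signature:
        return [node.signature]
    return [s for child in node.children for s in match(child, payload)]

leaves = [Node(r"attack[0-9]+", signature="sig-1"),
          Node(r"exploit-[a-z]+", signature="sig-2")]
root = Node(r"attack|exploit", children=leaves)   # coarser superset language
print(match(root, "GET /exploit-shell"))          # ['sig-2']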

 

Melo, W.; Lopes, P.; Antonello, R.; Fernandes, S.; Sadok, D., “On the Performance of DPI Signature Matching with Dynamic Priority,” Computers and Communication (ISCC), 2014 IEEE Symposium on, vol., no., pp. 1, 6, 23–26 June 2014. doi:10.1109/ISCC.2014.6912553
Abstract: Traffic classification and identification play an important role in several network traffic management activities, where DPI (Deep Packet Inspection) is one of the most accurate and widely used techniques. However, inspection of packet payload is highly compute-intensive. Several research studies have evaluated different components of DPI systems for application detection in order to increase classification speed. Nonetheless, the arrangement of the signatures in the signature set is an open issue: depending on the order of signatures, the overall performance of the DPI system can degrade, leading to loss of packets and incorrect traffic identification. To the best of our knowledge, no previous research has analyzed the impact of the order of the application signatures and how it could be modified to improve the identification speed in a given DPI. In this work, we evaluate the impact of the ordering of signatures in a list and propose a method to dynamically adapt the signature list according to the traffic dynamics. We show the effectiveness of our approach with the most reactive proposed setup, saving more than 50% of processing time. We demonstrate the importance of the order of signatures and propose an effective method that can be used to save processing time. Finally, our method can be combined with other state-of-the-art techniques to achieve an optimal utilization of DPI features.
Keywords: computer network performance evaluation; computer network security; digital signatures; telecommunication traffic; DPI signature matching performance; DPI system components; application detection; deep-packet inspection; dynamic priority; identification speed improvement; incorrect-traffic identification problem; network traffic management; optimal DPI feature utilization; overall performance degradation; packet loss; packet payload inspection; processing time; signature arrangement; signature order; signature set; traffic classification speed; traffic dynamics; traffic identification speed; Automata; Engines; Graphics processing units; Inspection; Payloads; Radiation detectors; Telecommunication traffic; Deep Packet Inspection; Dynamic Priority; Performance Evaluation; Signatures List (ID#: 15-6707)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912553&isnumber=6912451
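
The dynamic-priority mechanism can be approximated by keeping per-signature hit counts and periodically re-sorting the signature list so the protocols dominating current traffic are tried first. This sketch (rule representation invented) shows the idea:

from collections import Counter

class DynamicSignatureList:
    def __init__(self, signatures):
        self.signatures = list(signatures)    # (name, predicate) pairs
        self.hits = Counter()

    def classify(self, payload):
        for name, predicate in self.signatures:   # first match wins
            if predicate(payload):
                self.hits[name] += 1
                return name
        return None

    def reorder(self):
        """Run periodically: popular signatures bubble to the front,
        cutting the average number of signatures tried per packet."""
        self.signatures.sort(key=lambda s: -self.hits[s[0]])

dsl = DynamicSignatureList([("http", lambda p: p.startswith(b"GET ")),
                            ("tls", lambda p: p.startswith(b"\x16\x03"))])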

 

Yunchun Li; Jingxuan Li, “Multiclassifier: A Combination of DPI and ML for Application-Layer Classification in SDN,” in Systems and Informatics (ICSAI), 2014 2nd International Conference on, vol., no., pp. 682–686, 15–17 Nov. 2014. doi:10.1109/ICSAI.2014.7009372
Abstract: In traditional campus networks, application-layer classification is often achieved by using specific devices that support it. Since different vendors have different realizations, even the same flow may yield different results on different devices, making it hard to set a globally consistent application-layer management policy for the whole network. The idea of separating the control plane from the data plane, introduced with Software Defined Networking, opens a way to solve this problem. In the SDN paradigm, the control plane has a global view of the whole network, so it can perform application-layer classification and set policies globally. In this paper, we identify problems with current application-layer classification in campus networks and analyze the advantages of doing application-layer classification with SDN. Based on SDN, we show a new approach to application-layer classification that combines different classifiers: Deep Packet Inspection and Machine Learning based packet classification. Our experiments show that with this approach we can achieve a high classification speed while maintaining an acceptable accuracy rate.
Keywords: learning (artificial intelligence); pattern classification; software defined networking; DPI; ML; MultiClassifier; SDN; application-layer classification; campus network; deep packet inspection; machine learning; packet classification; Accuracy; Classification algorithms; Computer architecture; Protocols; Reliability; Software defined networking; Throughput; Application-layer classification; Deep Packet Inspection; Machine Learning; Software Defined Network (ID#: 15-6708)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7009372&isnumber=7009247

 

Zoican, S.; Vochin, M., “On Implementing Packet Inspection Using CUDA Enabled Graphical Processing Units,” in Communications (COMM), 2014 10th International Conference on, vol., no., pp. 1–6, 29–31 May 2014. doi:10.1109/ICComm.2014.6866661
Abstract: This work studies how an efficient deep packet inspection (DPI) algorithm may be implemented on the CUDA (Compute Unified Device Architecture) enabled graphics processing unit (GPU) boards found in personal computers, and analyzes implementation efficiency. The following tasks are analyzed: the parallelization of the pattern matching algorithm and the optimization of C code written for the Nvidia compiler to obtain the best performance. The conclusion shows that CUDA technology represents a very attractive solution for implementing DPI algorithms without the typical memory and complexity constraints.
Keywords: computer networks; graphics processing units; parallel algorithms; parallel architectures; pattern matching; C code optimization; CUDA enabled graphical processing units; Nvidia compiler; computer unified device architecture; deep packet inspection algorithm; pattern matching algorithm parallelization; Algorithm design and analysis; Computer architecture; Graphics processing units; Inspection; Instruction sets; Kernel; Registers; CUDA technology; deep packet inspection; deterministic finite automaton; pattern search; significant character (ID#: 15-6709)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866661&isnumber=6866648

 

Yamaguchi, F.; Nishi, H., “High-Throughput and Low-Cost Hardware Accelerator for Privacy Preserving Publishing,” in Field-Programmable Custom Computing Machines (FCCM), 2014 IEEE 22nd Annual International Symposium on, vol., no., p. 242, 11–13 May 2014. doi:10.1109/FCCM.2014.77
Abstract: Deep Packet Inspection (DPI) has become crucial for providing rich Internet services, such as intrusion and phishing protection, but the use of DPI raises concerns about protecting the privacy of Internet users. In this paper, a RAM-based hardware anonymizer is proposed for implementation on a Virtex-5 FPGA device. The results showed that the proposed architecture reduced circuit usage by 40%.
Keywords: Internet; computer crime; data privacy; electronic publishing; field programmable gate arrays; random-access storage; Internet services; RAM-based hardware anonymizer; Virtex-5 FPGA device; circuit usage; hardware anonymizer; high-throughput low-cost hardware accelerator; intrusion protection; phishing protection; privacy preserving publishing; Data privacy; Field programmable gate arrays; Hardware; Internet; Privacy; Random access memory; Table lookup; Anonymization; Deep Packet Inspection; FPGA (ID#: 15-6710)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861639&isnumber=6861562

 

Salcedo Parra, O.J.; Basto Maldonado, E.J.; Reyes Daza, B.S., “Legal Assessment of DPI in Telecommunication Networks in Colombia,” in Information Society (i-Society), 2014 International Conference on, vol., no., pp. 228–233, 10–12 Nov. 2014. doi:10.1109/i-Society.2014.7009048
Abstract: Deep Packet Inspection technology has generated considerable recent debate and expectations about its operation. Operators and ISPs deploy network service platforms and equipment capable of analyzing all the traffic of their Internet subscribers, and this has led to disputes over who or what agency ensures that the intercepted and analyzed data are not made public, that their integrity is maintained, and that they are not marketed in any way, so that users' privacy is respected. This article assesses the legal proceedings and the legal framework whose scope is the protection of data and information when deep packet inspection (DPI) is used by operators and service providers in Colombia. It then analyzes the composition of the Internet and the authorities responsible for regulating the services offered on the network. Finally, it concludes with suggestions and recommendations to the actors directly affected by deep packet inspection, referencing real cases, laws, and models from other countries.
Keywords: Internet; data privacy; law; telecommunication services; telecommunication traffic; Colombia; DPI; ISP; Internet service provider; Internet subscribers; deep packet inspection technology; legal assessment; network service; telecommunication networks; Government; Inspection; Internet; Law; Privacy; Security; DPI; Firewall; Habeas Data; ISP; Petabyte; TIC (ID#: 15-6711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7009048&isnumber=7008990

 

Lara, A.; Ramamurthy, B., “OpenSec: A Framework for Implementing Security Policies Using OpenFlow,” in Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 781–786, 8–12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036903
Abstract: As the popularity of software defined networks (SDN) and OpenFlow increases, policy-driven network management has received more attention. Manual configuration of multiple devices is being replaced by an automated approach where a software-based, network-aware controller handles the configuration of all network devices. Software applications running on top of the network controller provide an abstraction of the topology and facilitate the task of operating the network. We propose OpenSec, an OpenFlow-based security framework that allows a network security operator to create and implement security policies written in human-readable language. Using OpenSec, the user can describe a flow in terms of OpenFlow matching fields, define which security services must be applied to that flow (deep packet inspection, intrusion detection, spam detection, etc.) and specify security levels that define how OpenSec reacts if malicious traffic is detected. We implement OpenSec in the GENI testbed to evaluate the flexibility, accuracy and scalability of the framework. The experimental setup includes deep packet inspection, intrusion detection and network quarantining to secure a web server from network scanners. We achieve a constant delay when reacting to security alerts and a detection rate of 98%.
Keywords: Internet; computer network management; computer network security; software defined networking; telecommunication network topology; telecommunication traffic; GENI testbed; OpenFlow matching fields; OpenFlow-based security framework; OpenSec; Web server; deep packet inspection; human-readable language; intrusion detection; malicious traffic; network quarantining; network scanners; network security; network-aware controller; policy-driven network management; security policies; software applications; software defined networks; software-based controller; Communication networks; Inspection; Ports (Computers); Process control; Security; Switches; Network Security; OpenFlow; Software Defined Networking (ID#: 15-6712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036903&isnumber=7036769

 

Rathod, P.M.; Marathe, N.; Vidhate, A.V., “A Survey on Finite Automata Based Pattern Matching Techniques for Network Intrusion Detection System (NIDS),” in Advances in Electronics, Computers and Communications (ICAECC), 2014 International Conference on, vol., no., pp. 1–5, 10–11 Oct. 2014. doi:10.1109/ICAECC.2014.7002456
Abstract: Many network security applications, such as Intrusion Detection Systems (IDS), firewalls, and data loss prevention systems (DLPS), are based on deep packet inspection, in which both the packet header and the packet payload are checked against predefined attack signatures to identify whether they contain malicious traffic. To perform this checking, different pattern matching methods are used by an NIDS. The most popular way to implement pattern matching is to use Finite Automata (FA). Generally, regular expressions are used to represent most of the attack signatures defined by an NIDS; they are implemented using finite automata that take the payload of a packet as the input string. However, the existing Finite Automata approaches to pattern matching, both deterministic finite automata (DFA) and non-deterministic finite automata (NFA), have their own advantages and drawbacks. DFA-based pattern matching methods are fast but require more memory, while NFA-based methods take comparatively less memory but match very slowly; many approaches have been proposed to overcome these drawbacks. This paper discusses a comparative study of some Finite Automata based techniques for pattern matching in network intrusion detection systems (NIDS).
Keywords: computer network security; finite automata; pattern matching; telecommunication traffic; DLPS; FA; NIDS; attack signatures; data loss prevention system; deep packet inspection; finite automata based pattern matching techniques; firewall; malicious traffic; network intrusion detection system; network security applications; packets header; packets payload; regular expression matching; Application specific integrated circuits; Automata; Field programmable gate arrays; Intrusion detection; Memory management; Merging; Pattern matching; Finite Automata; NIDS and DLPS; Regular Expression Matching (ID#: 15-6713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7002456&isnumber=7002373
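
To make the DFA speed/memory trade-off discussed in this survey concrete, here is a minimal sketch of a KMP-style single-pattern DFA in Python (illustrative only; NIDS engines compile whole regular-expression sets, not one keyword). Each input character costs exactly one table lookup, but the table costs O(states x alphabet) memory, which is why multi-pattern DFAs can explode in size.

    def build_dfa(pattern, alphabet):
        """Build the transition table of a DFA recognizing `pattern` (KMP-style)."""
        m = len(pattern)
        dfa = [{c: 0 for c in alphabet} for _ in range(m)]
        dfa[0][pattern[0]] = 1
        restart = 0  # state after the longest proper border of the prefix read so far
        for j in range(1, m):
            for c in alphabet:
                dfa[j][c] = dfa[restart][c]   # on mismatch, fall back
            dfa[j][pattern[j]] = j + 1        # on match, advance
            restart = dfa[restart][pattern[j]]
        return dfa

    def scan(dfa, text, m):
        """Return the index of the first match, or -1. One lookup per character."""
        state = 0
        for i, c in enumerate(text):
            state = dfa[state].get(c, 0)      # characters outside the alphabet reset
            if state == m:
                return i - m + 1
        return -1

    pattern = "attack"
    dfa = build_dfa(pattern, set(pattern))
    print(scan(dfa, "...payload with attack string...", len(pattern)))  # -> 16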

 

Yuzhi Wang; Ping Ji; Borui Ye; Pengjun Wang; Rong Luo; Huazhong Yang, “GoHop: Personal VPN to Defend from Censorship,” in Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp. 27–33, 16–19 Feb. 2014. doi:10.1109/ICACT.2014.6778916
Abstract: Internet censorship threatens people’s online privacy, and in recent years new technologies such as high-speed Deep Packet Inspection (DPI) and statistical traffic analysis methods have been applied in country-scale censorship and surveillance projects. Traditional encryption protocols cannot hide statistical flow properties, and new censoring systems can easily detect and block them “in the dark”. Recent work has shown that traffic morphing and protocol obfuscation are effective ways to defend against statistical traffic analysis. In this paper, we propose a novel traffic obfuscation protocol in which client and server communicate on random ports. We implemented our idea as an open-source VPN tool named GoHop and developed several obfuscation methods, including pre-shared key encryption, traffic shaping, and random port communication. Experiments have shown that GoHop can successfully bypass Internet censoring systems and can provide high-bandwidth network throughput.
Keywords: Internet; cryptographic protocols; data protection; public domain software; statistical analysis; telecommunication traffic; transport protocols; DPI; GoHop; TCP protocol; bypass Internet censoring systems; country scale censorship; encryption protocols; high-bandwidth network throughput; high-speed deep packet inspection; open-source VPN tool; people online privacy; personal VPN; pre-shared key encryption; privacy protection; random port communication; statistical flow property; statistical traffic analysis methods; surveillance projects; traffic morphing; traffic obfuscation protocol method; traffic shaping; Cryptography; Internet; Ports (Computers); Protocols; Servers; Throughput; Virtual private networks; VPN; censorship circumvention; privacy protection; protocol obfuscation; random port; traffic morphing (ID#: 15-6714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778916&isnumber=6778899
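
One of the obfuscation ideas the abstract mentions, communicating on random ports, can be sketched as follows. This is a hypothetical illustration, not GoHop's actual code: the HMAC-based derivation, the port range, and the time-slot scheme are all assumptions about how two endpoints sharing a key could agree on an unpredictable port sequence.

    import hashlib
    import hmac

    def port_for_slot(pre_shared_key: bytes, time_slot: int,
                      low: int = 10000, high: int = 60000) -> int:
        """Derive a pseudo-random port for a given time slot from a shared key."""
        digest = hmac.new(pre_shared_key, time_slot.to_bytes(8, "big"),
                          hashlib.sha256).digest()
        return low + int.from_bytes(digest[:4], "big") % (high - low)

    # Client and server compute the same port for the same slot, so the
    # rendezvous point moves without any port negotiation on the wire.
    print(port_for_slot(b"shared-secret", time_slot=42))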

 

An Yang; Liang Zhang, “MP-DPI: A Network Processing Platform Based on the Many-Core Processor,” in Communication Problem-Solving (ICCP), 2014 IEEE International Conference on, vol., no., pp. 435–438, 5–7 Dec. 2014. doi:10.1109/ICCPS.2014.7062315
Abstract: Deep packet inspection (DPI) is a fast-growing application technology in the field of network security that requires the network security platform to handle a large number of session connections at high speed and to track the status of these connections quickly. This paper proposes MP-DPI, a many-core based network processing platform that uses the ATCA standard modular design, makes use of an integrated many-core network processing acceleration engine, and integrates the popular open-source DPI system SNORT. The experimental results show that, at the same power consumption, the throughput of the MP-DPI platform is three times that of traditional x86 servers.
Keywords: computer network security; multiprocessing systems; ATCA standard modular design; MP-DPI network processing platform; SNORT system; deep packet inspection; many-core network process accelerate engine; many-core processor; multiprocessing system; network security; open source DPI system; session connection; Blades; Hardware; Power demand; Servers; Switches; Throughput; Uniform resource locators (ID#: 15-6715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7062315&isnumber=7062199

 

Takano, Y.; Ohta, S.; Takahashi, T.; Ando, R.; Inoue, T., “MindYourPrivacy: Design and Implementation of a Visualization System for Third-Party Web Tracking,” in Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, vol., no., pp. 48–56, 23–24 July 2014. doi:10.1109/PST.2014.6890923
Abstract: Third-party Web tracking is a serious privacy issue. Advertisement sites and social networking sites stealthily collect users' Web browsing history for purposes such as targeted advertising or predicting trends. Unfortunately, very few Internet users realize this, and their privacy has been infringed upon because they have no means of recognizing the situation. In this paper we present the design and implementation of a system called MindYourPrivacy that visualizes third-party Web tracking and clarifies the entities threatening users' privacy. The implementation adopts deep packet inspection, DNS-SOA-record-based categorization, and HTTP-referred graphical analysis to visualize collectors of Web browsing histories without device dependency. To demonstrate the effectiveness of our proof-of-concept implementation, we conducted an experiment at an IT technology camp, where 129 attendees discussed IT technologies for four days. The experiment's results revealed that visualizing Web tracking effectively influences users' perception of privacy. Analysis of the user data we collected at the camp also revealed that MCODE clustering and some features derived from graph theory are useful for detecting advertising sites that potentially collect user information by Web tracking for their own purposes.
Keywords: Internet; advertising; data privacy; data visualisation; graph theory; pattern clustering; service-oriented architecture; social networking (online); DNS-SOA-record-based categorization; HTTP-referred graphical analysis; IT technology camp; Internet users; MCODE clustering; MindYourPrivacy; Web browsing history; advertisement sites; device dependency; packet inspection; proof-of-concept implementation; social networking sites; third-party Web tracking; user data analysis; user privacy; visualization system; Browsers; Databases; HTML; History; Privacy; Target tracking; Data and Knowledge Visualization; Network Monitoring; Security; Web Mining (ID#: 15-6716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890923&isnumber=6890911

 

He, Gaofeng; Zhang, Tao; Ma, Yuanyuan; Xu, Bingfeng, “A Novel Method to Detect Encrypted Data Exfiltration,” in Advanced Cloud and Big Data (CBD), 2014 Second International Conference on, vol., no., pp. 240–246, 20–22 Nov. 2014. doi:10.1109/CBD.2014.40
Abstract: Cloud computing’s distributed architecture helps ensure service resilience and robustness. Meanwhile, the big data stored in the cloud are valuable and sensitive, and they are becoming attractive targets for attackers. In real life, attackers can carry out attacks such as Advanced Persistent Threats (APT) to invade cloud infrastructure and steal cloud users’ confidential data through encrypted transmission. Unfortunately, the most commonly used methods, e.g., Deep Packet Inspection (DPI), cannot detect encrypted data leakage efficiently. In this paper, we propose a novel method to detect encrypted data exfiltration for the cloud. Generally speaking, the proposed method is composed of two steps. First, cloud providers analyze all outgoing network traffic and identify encrypted traffic. Second, cloud providers determine whether the encrypted traffic was initiated by cloud users as expected. If not, the encrypted traffic is considered data exfiltration. Specifically, in the first step, DPI and entropy techniques are used together to identify encrypted traffic efficiently, and in the second step, we determine whether the encryption is expected by building cloud users’ network behavior profiles. We have carried out extensive experiments in a real-world network environment, and the experimental results validate the feasibility and effectiveness of our method.
Keywords: Encryption; Entropy; Estimation; Feature extraction; IP networks; Protocols; cloud; data exfiltration; network behavior profile; sample entropy; security (ID#: 15-6717)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176100&isnumber=7176054
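
The entropy test the abstract combines with DPI can be illustrated with the byte-frequency (Shannon) entropy below, a simple stand-in for the sample-entropy estimator the authors use. The 7.5-bit threshold is an assumption for illustration; the paper builds per-user network behavior profiles rather than using a fixed cutoff.

    import math
    from collections import Counter

    def byte_entropy(payload: bytes) -> float:
        """Shannon entropy of the byte distribution, in bits per byte (0..8)."""
        n = len(payload)
        if n == 0:
            return 0.0
        counts = Counter(payload)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
        # Encrypted or compressed payloads are near-uniform, so entropy nears 8 bits.
        return byte_entropy(payload) >= threshold

    print(byte_entropy(b"GET / HTTP/1.1\r\nHost: example.com\r\n"))  # well below 7.5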

 

Udechukwu, R.; Dutta, R., “Extending OpenFlow for Service Insertion and Payload Inspection,” in Network Protocols (ICNP), 2014 IEEE 22nd International Conference on, vol., no., pp. 589–595, 21–24 Oct. 2014. doi:10.1109/ICNP.2014.94
Abstract: Software Defined Networking (SDN) allows traffic characterization and resource allocation policies to change dynamically while avoiding the obsolescence of specialized forwarding equipment. OpenFlow, an SDN standard, is currently the only standard that explicitly focuses on multi-vendor openness. Unfortunately, it only provides for traffic engineering on an integrated basis for L2–L4. The obvious approaches to expanding OpenFlow’s reach to L7 would be to enhance the data-path flow table or to utilize the controller for deep packet inspection; both introduce significant scalability barriers. We propose and prototype an enhancement to OpenFlow based on the idea of an External Processing Box (EPB) optionally attached to forwarding engines; we use existing protocol extension constructs to control the EPB as an integrated part of the OpenFlow data path. This provides network operators with the ability to use L7-based policies to control service insertion and traffic steering without breaking the open paradigm. This novel yet eminently practical augmentation of OpenFlow provides added value critical for realistic networking practice. Retention of multi-vendor openness for such an approach has not been previously reported in the literature to the best of our knowledge. We report numerical results from our prototype, characterizing its performance and practicality by implementing a video reconditioning application on this platform.
Keywords: protocols; resource allocation; software defined networking; telecommunication traffic; L7-based policies; Open Flow data path; SDN standard; data path flow table; deep packet inspection; external processing box; forwarding engines; forwarding equipment; multivendor openness; network operators; open paradigm; payload inspection; protocol extension; resource allocation policies; scalability barriers; service insertion; software defined networking; traffic characterization; traffic engineering; traffic steering; video reconditioning application; Delays; Engines; Hardware; Process control; Prototypes; Streaming media; Video recording (ID#: 15-6718)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980433&isnumber=6980338 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Discrete and Continuous Optimization 2015

 

 
SoS Logo

Discrete and Continuous Optimization

2015


Discrete and continuous optimization are mathematical approaches to problem solving. The research works cited here focus primarily on continuous optimization. For the Science of Security, they relate to cyber-physical systems, resilience, and composability, and some of the most important work is being done in control systems. These works appeared in 2015.



Cao, Cen; Ni, Qingjian; Zhai, Yuqing, “A Novel Community Detection Method Based on Discrete Particle Swarm Optimization Algorithms in Complex Networks,” in Evolutionary Computation (CEC), 2015 IEEE Congress on, vol., no., pp. 171–178, 25–28 May 2015. doi:10.1109/CEC.2015.7256889
Abstract: The community structure is one of the most common and important attributes of complex networks. Community detection in complex networks has attracted much attention in recent years. As an effective evolutionary computation technique, the particle swarm optimization (PSO) algorithm has become a candidate for many optimization applications. However, the PSO algorithm was originally designed for continuous optimization. In this paper, an improved simple discrete particle swarm optimization (ISPSO) algorithm and a discrete particle swarm optimization with redefined operator (IDPSO-RO) algorithm are proposed in the discrete context of the community detection problem. Furthermore, a community correcting strategy is used to optimize the results. The performance of the two algorithms is tested on three real networks with known community structures. The experimental results show that the ISPSO and IDPSO-RO algorithms using the community correcting strategy can detect community structures more efficiently without prior knowledge of the size and number of communities.
Keywords: Algorithm design and analysis; Complex networks; Image edge detection; Optimization; Particle swarm optimization; Sociology; Statistics (ID#: 15-7105)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7256889&isnumber=7256859
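
For context, the canonical PSO update that the abstract notes was designed for continuous optimization is the pair of equations below (standard notation, not taken from the paper itself); discrete variants such as ISPSO and IDPSO-RO must redefine the velocity and position operators over discrete community assignments:

    v_i^{t+1} = \omega\, v_i^{t} + c_1 r_1 \bigl(p_i - x_i^{t}\bigr) + c_2 r_2 \bigl(g - x_i^{t}\bigr),
    \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}

where x_i and v_i are the position and velocity of particle i, p_i its personal best, g the global best, \omega the inertia weight, c_1, c_2 the acceleration coefficients, and r_1, r_2 uniform random numbers in [0, 1].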

 

Weiliang Zeng; Zheng, Y.R.; Schober, R., “Online Resource Allocation for Energy Harvesting Downlink MIMO Systems with Finite-Alphabet Inputs,” in Communications (ICC), 2015 IEEE International Conference on, vol., no., pp. 2142–2147, 8–12 June 2015. doi:10.1109/ICC.2015.7248642
Abstract: This paper proposes an online resource allocation algorithm for weighted sum rate maximization in energy harvesting downlink multiuser multiple-input multiple-output (MIMO) systems. Taking into account the discrete nature of the modulation and coding rates (MCRs) used in practice, we formulate a stochastic dynamic programming (SDP) problem to jointly design the MIMO precoders, select the MCRs, assign the subchannels, and optimize the energy consumption over multiple time slots with causal and statistical energy arrival information and statistical channel state information. Solving this high-dimensional SDP entails several difficulties: the SDP has a nonconcave objective function, the optimization variables are of mixed binary and continuous types, and the number of optimization variables is on the order of thousands. We propose a new method to solve this NP-hard SDP by decomposing the high-dimensional SDP into an equivalent three-layer optimization problem and show that efficient algorithms can be used to solve each layer separately. The decomposition reduces the computational burden and breaks the curse of dimensionality.
Keywords: MIMO communication; dynamic programming; energy consumption; energy harvesting; resource allocation; stochastic programming; MIMO precoders; energy consumption; energy harvesting downlink MIMO systems; finite-alphabet inputs; modulation and coding rates; multiple time slots; nonconcave objective function; online resource allocation; statistical channel state information; stochastic dynamic programming problem; three-layer optimization problem; weighted sum rate maximization; Downlink; Energy consumption; Joints; MIMO; Optimization; Transmitters; Wireless communication (ID#: 15-7106)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248642&isnumber=7248285

 

Fainekos, G., “Automotive Control Design Bug-Finding with the S-Taliro Tool,” in American Control Conference (ACC), 2015, vol., no., p. 4096, 1–3 July 2015. doi:10.1109/ACC.2015.7171969
Abstract: One of the important challenges in the Model Based Development (MBD) of automotive systems is the problem of verifying functional system properties. In its general form, the verification problem is undecidable due to the interplay between continuous and discrete system dynamics [1]. In this tutorial, we present the bounded-time temporal logic testing and verification problem for Cyber-Physical Systems (CPS) [2]. Temporal logics [3] can formally capture both state-space and real-time system requirements. For example, temporal logics can mathematically state requirements like “whenever the system switches to first gear, then it should not switch to second gear within 2.5 sec”. Our approach in tackling this challenging problem is to convert the verification problem into an optimization problem through a notion of robustness for temporal logics [4]. The robust interpretation of a temporal logic specification over a system trajectory quantifies “how much” the system trajectory satisfies or does not satisfy the specification. In general, the resulting optimization problem is non-convex and non-linear, the utility function is not known in closed-form and the search space is uncountable. Thus, stochastic search techniques are employed in order to solve the resulting optimization problem. We have implemented our testing and verification framework into a MATLAB (TM) toolbox called S-TaLiRo (System’s TemporAl LogIc Robustness) [5], [6]. In this tutorial, we will demonstrate how S-TaLiRo can provide answers to challenge problems from the automotive industry [7]-[10].
Keywords: automobile industry; automotive engineering; concave programming; control engineering computing; control system synthesis; embedded systems; formal specification; formal verification; nonlinear programming; search problems; state-space methods; stochastic programming; temporal logic; MBD; Matlab toolbox; S-TaLiRo tool; automotive control design bug finding; automotive industry; automotive systems; bounded-time temporal logic testing; continuous system dynamics; cyber-physical system; discrete system dynamics; functional system properties verification; model based development; nonconvex optimization problem; nonlinear optimization problem; real-time system requirements; state-space system requirements; stochastic search techniques; temporal logic specification; Automotive engineering; Cyber-physical systems; Mathematical model; Optimization; Real-time systems; Robustness; Tutorials (ID#: 15-7107)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7171969&isnumber=7170700
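
The robustness-based reformulation sketched in the abstract can be summarized as follows (standard temporal-logic robustness semantics; the exact metrics S-TaLiRo uses are in the cited references). The robustness degree \rho(\varphi, x) of a specification \varphi over a trajectory x satisfies

    \rho(\varphi, x) > 0 \;\Longrightarrow\; x \models \varphi, \qquad
    \rho(\varphi, x) < 0 \;\Longrightarrow\; x \not\models \varphi,

so falsification reduces to minimizing robustness over trajectories: any trajectory found by the stochastic search with \rho(\varphi, x) < 0 is a concrete counterexample, i.e., a bug.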

 

Ying Cui; Medard, M.; Yeh, E.; Leith, D.; Duffy, K., “Optimization-Based Linear Network Coding for General Connections of Continuous Flows,” in Communications (ICC), 2015 IEEE International Conference on, vol., no., pp. 4492–4498, 8–12 June 2015. doi:10.1109/ICC.2015.7249030
Abstract: For general connections, the problem of finding network codes and optimizing resources for those codes is intrinsically difficult and little is known about its complexity. Most of the existing solutions rely on very restricted classes of network codes in terms of the number of flows allowed to be coded together, and are not entirely distributed. In this paper, we consider a new method for constructing linear network codes for general connections of continuous flows to minimize the total cost of edge use based on mixing. We first formulate the minimum-cost network coding design problem. To solve the optimization problem, we propose two equivalent alternative formulations with discrete mixing and continuous mixing, respectively, and develop distributed algorithms to solve them. Our approach allows fairly general coding across flows and guarantees no greater cost than any solution without inter-flow network coding.
Keywords: distributed algorithms; linear codes; minimisation; network coding; continuous mixing; cost minimization; discrete mixing; distributed algorithm; equivalent alternative formulation; optimization-based linear network coding; Complexity theory; Distributed algorithms; Linear codes; Minimization; Network coding; Optimization (ID#: 15-7108)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249030&isnumber=7248285

 

Ali, U.; Yuan Yan; Mostofi, Y.; Wardi, Y., “An Optimal Control Approach for Communication and Motion Co-Optimization in Realistic Fading Environments,” in American Control Conference (ACC), 2015, vol., no., pp. 2930–2935, 1–3 July 2015. doi:10.1109/ACC.2015.7171180
Abstract: We consider an energy co-optimization problem of minimizing the total communication and motion energy of a robot tasked with transmitting a given number of information bits while moving along a fixed path. The data is transmitted to a remote station over a fading channel, which changes along the trajectory of the robot. While a previous approach to the problem used a speed-based motion-energy model, this paper uses acceleration both as an input to the system and as a basis for the motion energy which is more realistic. Furthermore, while previous approaches posed the problem in discrete time, we formulate it in continuous time. This enables us to pose the problem in an optimal control framework amenable to the use of maximum principle. We then compute the optimal control input via an effective algorithm recently developed by us that converges very fast. We use practical models for channel fading and energy consumption: the channel quality is predicted based on actual measurements, and the energy models are based on physical principles. Simulation is used to solve a specific problem and demonstrate the efficacy of our proposed approach.
Keywords: fading channels; maximum principle; motion control; robots; trajectory control; energy co-optimization; fading channel; motion co-optimization; motion energy minimization; optimal control approach; optimal control framework; realistic fading environment; robot trajectory; total communication minimization; Acceleration; Fading; Optimal control; Probabilistic logic; Robot sensing systems; Trajectory (ID#: 15-7109)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7171180&isnumber=7170700

 

Shareef, Ali; Shareef, Aliha; Yifeng Zhu, “Optrix: Energy Aware Cross Layer Routing Using Convex Optimization in Wireless Sensor Networks,” in Networking, Architecture and Storage (NAS), 2015 IEEE International Conference on, vol., no., pp. 141–150, 6–7 Aug. 2015. doi:10.1109/NAS.2015.7255235
Abstract: Energy minimization is of great importance in wireless sensor networks for extending battery lifetime. One of the key activities of nodes in a WSN is communication and the routing of their data to a centralized base station or sink. Routing using the shortest path to the sink is not the best solution, since it causes nodes along this path to fail prematurely. We propose a cross-layer energy-efficient routing protocol, Optrix, that utilizes a convex formulation to maximize the lifetime of the network as a whole. We further propose Optrix-BW, a novel convex formulation with a bandwidth constraint that allows channel conditions to be accounted for in routing. By considering this key channel parameter we demonstrate that Optrix-BW is capable of congestion control. Optrix is implemented in TinyOS, and we demonstrate that a relatively large topology of 40 nodes can converge to within 91% of the optimal routing solution. We describe the pitfalls and issues of utilizing a continuous-form technique such as convex optimization with discrete packet-based communication systems such as those found in WSNs, and propose a routing controller mechanism that allows for this transformation. We compare Optrix against the Collection Tree Protocol (CTP) and find that Optrix performs better than CTP in terms of convergence to an optimal routing solution, load balancing, and network lifetime maximization.
Keywords: Algorithm design and analysis; Energy efficiency; Maintenance engineering; Optimized production technology; Routing; Routing protocols; Wireless sensor networks; Convex Optimization; Energy Aware Routing; Network Routing; Network Topology; Routing Algorithm; Simulation; TOSSIM; TinyOS; Wireless Sensor Networks (ID#: 15-7110)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7255235&isnumber=7255186

 

Csiszar, A., “A Combinatorial Optimization Approach to the Denavit-Hartenberg Parameter Assignment,” in Advanced Intelligent Mechatronics (AIM), 2015 IEEE International Conference on, vol., no., pp. 1451–1456, 7–11 July 2015. doi:10.1109/AIM.2015.7222745
Abstract: Assigning a Denavit-Hartenberg parameter table to a serial robot or a kinematic chain is a cumbersome task which confuses many novice roboticists. In this paper a combinatorial optimization approach to Denavit-Hartenberg parameter assignment is proposed. The proposed combinatorial optimization approach eliminates the reliance on human experience when assigning Denavit-Hartenberg parameters. Using practical insights the parameter assignment problem is transferred from a continuous search space to a discrete search space, hence enabling the use of combinatorial optimization techniques. The search space is reduced below a limit which makes the application of exhaustive search and branch-and-bound optimization techniques practicable. An algorithm is described which is capable of generating all practical Denavit-Hartenberg parametrizations of a kinematic chain (including dummy frames) within an acceptable time frame. The obtained results are ranked based on a proposed criterion.
Keywords: combinatorial mathematics; optimisation; robot kinematics; tree searching; Denavit Hartenberg parameter table; Denavit Hartenberg parametrizations; Denavit-Hartenberg parameter assignment; branch and bound optimization techniques; combinatorial optimization; continuous search space; discrete search space; exhaustive search; kinematic chain; parameter assignment problem; roboticists; serial robot; DH-HEMTs; Joints; Kinematics; Optimization; Robot kinematics; Search problems (ID#: 15-7111)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7222745&isnumber=7222494

 

Hongwei, Wang, “Adaptive Control Based on Multiple Sub-Models and Auxiliary Variables for Non-Uniformly Sampled Data Systems,” in Control Conference (CCC), 2015 34th Chinese, vol., no., pp. 2994–2999, 28–30 July 2015. doi:10.1109/ChiCC.2015.7260100
Abstract: For the control of non-uniformly sampled systems (NUSS), a new adaptive control method based on sub-models and auxiliary variables is proposed. First, the lifted state space model for a class of multi-rate systems non-uniformly sampled from their continuous-time counterparts is derived, and the corresponding discrete transfer function model is obtained by using mathematical derivation to analyze the state space model of the NUSS. An identification algorithm based on auxiliary variables is employed to determine the discrete transfer function model from the non-uniformly sampled data. The model of the NUSS acquired from the identification algorithm is decomposed into sub-models in accordance with an optimization control principle. On this basis, the adaptive control method is obtained by designing an adaptive controller for each sub-model based on auxiliary variables. The parameter-estimation-based adaptive control algorithm can virtually achieve optimal control and ensures that the closed-loop system is stable and globally convergent. Finally, a simulation example is studied to demonstrate the effectiveness of the proposed method.
Keywords: Adaptation models; Adaptive control; Algorithm design and analysis; Data models; Linear programming; Mathematical model; Noise; adaptive control; auxiliary variables; identification; multi-rates; non-uniformly sampling (ID#: 15-7112)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260100&isnumber=7259602

 

Tawfeeq, A.-B.L.; Eremia, M., “A New Technique of Reactive Power Optimization Random Selection Paths RSP,” in Advanced Topics in Electrical Engineering (ATEE), 2015 9th International Symposium on, vol., no., pp. 848–853, 7–9 May 2015. doi:10.1109/ATEE.2015.7133944
Abstract: This paper presents a new technique for Reactive Power Optimization called Random Selection Paths (RSP), in which the search region is limited and the candidate values of the control variables are chosen randomly. The objective of the proposed algorithm is to minimize the system's active power loss. The control variables are generator bus voltages, transformer tap positions, and switchable shunt capacitor banks. The new technique can easily handle both continuous and discrete control variables and has been applied to the practical IEEE 6, IEEE 14, and IEEE 30 bus systems. The proposed algorithm shows better results compared to previous work.
Keywords: continuous systems; discrete systems; optimisation; power capacitors; power system control; reactive power; transformers; IEEE 14 bus systems; IEEE 30 bus systems; IEEE 6 bus systems; RSP; continuous control variables; discrete control variables; generator bus voltages; random selection paths; reactive power optimization; switchable shunt capacitor banks; system active power loss; transformer tap positions; Generators; Genetic algorithms; Linear programming; Load flow; Optimization; Reactive power; Voltage control; Active power loss reduction; New optimization technique; Random Selection Path; Reactive Power Optimization (ID#: 15-7113)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133944&isnumber=7133661
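
A minimal sketch of the random-candidate idea described above follows. This generic random search is an illustration only, not the paper's exact RSP procedure; the function names, bounds, and toy loss are hypothetical. Note how the discrete controls (tap positions, capacitor steps) are sampled from their admissible levels while continuous controls stay continuous.

    import random

    def random_candidate(cont_bounds, disc_levels):
        """Sample one candidate: continuous values in bounds, discrete from levels."""
        cont = [random.uniform(lo, hi) for lo, hi in cont_bounds]   # e.g., bus voltages
        disc = [random.choice(levels) for levels in disc_levels]    # e.g., tap positions
        return cont, disc

    def random_search(loss, cont_bounds, disc_levels, n_iter=1000):
        best, best_loss = None, float("inf")
        for _ in range(n_iter):
            cand = random_candidate(cont_bounds, disc_levels)
            value = loss(cand)
            if value < best_loss:
                best, best_loss = cand, value
        return best, best_loss

    # Toy loss: prefer a voltage setpoint near 1.02 p.u. and a small tap position.
    toy_loss = lambda c: (c[0][0] - 1.02) ** 2 + 0.01 * abs(c[1][0])
    print(random_search(toy_loss, [(0.95, 1.05)], [range(-8, 9)]))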

 

Miyakawa, M.; Takadama, K.; Sato, H., “Directed Mating Using Inverted PBI Function for Constrained Multi-Objective Optimization,” in Evolutionary Computation (CEC), 2015 IEEE Congress on, vol., no., pp. 2929–2936, 25–28 May 2015. doi:10.1109/CEC.2015.7257253
Abstract: In evolutionary constrained multi-objective optimization, directed mating, which utilizes useful infeasible solutions having better objective function values than feasible solutions, significantly contributes to improving search performance. This work tries to further improve the effectiveness of directed mating by focusing on the search directions in the objective space. Since conventional directed mating picks useful infeasible solutions based on Pareto dominance, all solutions are given the same search direction regardless of their locations in the objective space. To improve the diversity of the obtained solutions in evolutionary constrained multi-objective optimization, we propose a variant of directed mating using the inverted PBI (IPBI) scalarizing function. The proposed IPBI-based directed mating gives a unique search direction to each solution depending on its location in the objective space. Also, the proposed IPBI-based directed mating can control the strength of directionality of each solution’s search direction through the parameter θ. We use discrete m-objective k-knapsack problems and continuous mCDTLZ problems with 2-4 objectives, and compare the search performance of the TNSDM algorithm using conventional directed mating with the proposed TNSDM-IPBI using IPBI-based directed mating. The experimental results show that the proposed TNSDM-IPBI using an appropriate θ* achieves higher search performance than the conventional TNSDM in all test problems used in this work, by improving the diversity of solutions in the objective space.
Keywords: Pareto optimisation; evolutionary computation; knapsack problems; search problems; IPBI-based directed mating; Pareto dominance; TNSDM algorithm; continuous mCDTLZ problems; discrete m-objective k-knapsack problems; evolutionary constrained multiobjective optimization; inverted PBI scalarizing function; objective function values; search directions; search performance; Search problems (ID#: 15-7114)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257253&isnumber=7256859

 

Gu Xinxin; Wen Jiwei; Peng Li, “Model Predictive Control for Continuous-Time Markov Jump Linear Systems,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 2071–2074, 23–25 May 2015. doi:10.1109/CCDC.2015.7162262
Abstract: This paper studies the model predictive control (MPC) of continuous-time Markov Jump Linear Systems (MJLSs). Sufficient conditions for the optimization problem, which guarantee the mean square stability of the closed-loop MJLS, are given at every sample time. Since the MPC strategy is applied to continuous-time MJLSs, a discrete-time controller is employed to deal with a continuous-time plant, and the adopted cost function not only draws on knowledge of the system state but also accounts for the sampling period. In addition, the feasibility of the MPC scheme and the mean square stability of the MJLS are discussed in depth using the invariant ellipsoid. Finally, the main results are verified by a numerical example.
Keywords: Markov processes; closed loop systems; continuous time systems; discrete time systems; linear systems; optimisation; predictive control; stability; stochastic systems; MPC strategy; close-loop MJLS; continuous-time MJLS; continuous-time Markov jump linear systems; continuous-time plant; cost function; discrete-time controller; invariant ellipsoid; mean square stability; model predictive control; optimization problem; sampling period; sufficient conditions; system state; Ellipsoids; Linear systems; Markov processes; Optimization; Predictive control; Robustness; Stability analysis; Continuous-time Markov jump linear systems; Invariant ellipsoid; Model predictive control (ID#: 15-7115)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162262&isnumber=7161655

 

Mini, V.; Sunil Kumar, T.K., “Diversity Enhanced Particle Swarm Optimization Based Optimal Reactive Power Dispatch,” in Advancements in Power and Energy (TAP Energy), 2015 International Conference on, vol., no., pp. 170–175, 24–26 June 2015. doi:10.1109/TAPENERGY.2015.7229612
Abstract: The Reactive Power Dispatch (RPD) problem is a complex nonlinear problem involving integer, discrete, and continuous types of control variables. This paper proposes a novel algorithm for solving the RPD problem using a Diversity Enhanced Particle Swarm Optimization (DEPSO) technique. The proposed method offers an effective technique for solving Mixed Integer Discrete Continuous (MIDC) problems and is hence suitable for the RPD problem. The effectiveness of the proposed method is reflected in the rounding of control variables to the nearest integer or the nearest available discrete values. With the implementation of the solution obtained in real-time applications, the system becomes less prone to voltage instability. In this paper, DEPSO is applied to the standard IEEE 30-bus test system. The results obtained are compared with those of the basic PSO method.
Keywords: IEEE standards; integer programming; load dispatching; particle swarm optimisation; reactive power control; DEPSO technique; IEEE 30-bus test system; MIDC problem; RPD problem; diversity enhanced particle swarm optimization based optimal reactive power dispatch; mixed integer discrete continuous problem; Niobium; Planning; Diversity Enhanced PSO; Diversity factor; Mixed Integer Discrete Continuous problem; Reactive Power Dispatch (ID#: 15-7116)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7229612&isnumber=7229578

 

Sogo, T.; Utsuno, T., “Simple Algebraic Structure of Taylor Expansion of Sampled-Data Systems,” in Control Conference (ASCC), 2015 10th Asian, vol., no., pp. 1–6, May 31 2015–June 3 2015. doi:10.1109/ASCC.2015.7244806
Abstract: The relation between the continuous-time model and the corresponding discrete-time model of a sampled-data system is generally not believed to be simple. However, from the viewpoint of a Taylor expansion with respect to the sample time, the relation is approximated by unexpectedly simple polynomials. In this paper, we show that there is a simple regularity in the Taylor expansion for any sampled-data system. Next, it is demonstrated that this regularity simplifies the symbolic calculation of the Taylor expansion. Finally, we apply the result to the identification of a continuous-time model from discrete-time input-output data of sampled-data systems based on optimization techniques.
Keywords: continuous time systems; discrete time systems; optimisation; polynomials; sampled data systems; Taylor expansion; algebraic structure; continuous-time model; discrete-time input-output data; discrete-time model; optimization technique; polynomial; sampled-data system; symbolic calculation; Control systems; Mathematical model; Polynomials; Sampled data systems; Taylor series; Transfer functions (ID#: 15-7117)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7244806&isnumber=7244373
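
As background for the abstract above (the standard zero-order-hold discretization, not the paper's specific regularity result): for a continuous-time model \dot{x} = Ax + Bu sampled with period h, the discrete-time matrices admit Taylor expansions in h,

    A_d = e^{Ah} = I + Ah + \frac{(Ah)^2}{2!} + \cdots, \qquad
    B_d = \Bigl(\int_0^h e^{A\tau}\, d\tau\Bigr) B
        = Bh + \frac{A B h^2}{2!} + \frac{A^2 B h^3}{3!} + \cdots,

which is the kind of polynomial relation in the sample time that the paper exploits for symbolic calculation and continuous-time identification.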

 

Kuang, Li; Wang, Feng; Li, Yuanxiang; Mao, Haiqiang; Lin, Min; Yu, Fei, “A Discrete Particle Swarm Optimization Box-Covering Algorithm for Fractal Dimension on Complex Networks,” in Evolutionary Computation (CEC), 2015 IEEE Congress on, vol., no., pp. 1396–1403, 25–28 May 2015. doi:10.1109/CEC.2015.7257051
Abstract: Researchers have widely investigated the fractal property of complex networks, in which the fractal dimension is normally evaluated by the box-covering method. The crux of the box-covering method is to find the solution with the minimum number of boxes to tile the whole network. Here, we introduce a particle swarm optimization box-covering (PSOBC) algorithm based on a discrete framework. Compared with our former algorithm, the new algorithm maps the search space from a continuous to a discrete one and reduces the time complexity significantly. Moreover, because many real-world networks are weighted networks, we also extend our approach to weighted networks, which makes the algorithm more useful in practice. Experimental results on multiple benchmark networks, compared with state-of-the-art algorithms, show that the PSOBC algorithm is effective and promising on various network structures.
Keywords: Benchmark testing; Clustering algorithms; Complex networks; Fractals; Greedy algorithms; Optimization; Time complexity (ID#: 15-7118)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257051&isnumber=7256859
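
For readers new to the topic, the quantity being optimized connects to the fractal dimension through the standard box-counting relation (textbook material, not specific to this paper): if N_B(\ell_B) is the minimum number of boxes of size \ell_B needed to tile the network, then

    N_B(\ell_B) \sim \ell_B^{-d_B}
    \quad\Longrightarrow\quad
    d_B \approx -\,\frac{\Delta \ln N_B(\ell_B)}{\Delta \ln \ell_B},

so the fractal dimension d_B is estimated from the slope of \ln N_B versus \ln \ell_B, and the PSOBC algorithm attacks the combinatorially hard inner problem of minimizing N_B(\ell_B) for each box size.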

 

Dongjin Song; Wei Liu; Meyer, D.A.; Dacheng Tao; Rongrong Ji, “Rank Preserving Hashing for Rapid Image Search,” in Data Compression Conference (DCC), 2015, vol., no., pp. 353–362, 7–9 April 2015. doi:10.1109/DCC.2015.85
Abstract: In recent years, hashing techniques have become overwhelmingly popular for their high efficiency in handling large-scale computer vision applications. It has been shown that hashing techniques which leverage supervised information can significantly enhance performance and thus greatly benefit visual search tasks. Typically, a modern hashing method uses a set of hash functions to compress data samples into compact binary codes. However, few methods have developed hash functions to optimize the precision at the top of a ranking list based upon Hamming distances. In this paper, we propose a novel supervised hashing approach, namely Rank Preserving Hashing (RPH), to explicitly optimize the precision of Hamming distance ranking towards preserving the supervised rank information. The core idea is to train disciplined hash functions in which mistakes at the top of a Hamming-distance ranking list are penalized more than those at the bottom. To find such hash functions, we relax the original discrete optimization objective to a continuous surrogate, and then design an online learning algorithm to efficiently optimize the surrogate objective. Empirical studies based upon two benchmark image datasets demonstrate that the proposed hashing approach achieves superior image search accuracy over state-of-the-art approaches.
Keywords: Hamming codes; binary codes; computer vision; cryptography; data compression; learning (artificial intelligence); optimisation; Hamming distance ranking; RPH technique; binary codes; data compression; discrete optimization; hash function; large-scale computer vision applications; online learning algorithm; rank preserving hashing technique; rapid image search; state-of-the-art approaches; supervised hashing approach; supervised rank information preserving; Accuracy; Algorithm design and analysis; Benchmark testing; Binary codes; Encoding; Hamming distance; Optimization; Hashing; Image Retrieval; Image Search (ID#: 15-7119)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149292&isnumber=7149089
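
The retrieval step whose precision RPH optimizes can be shown in a few lines of Python (an illustrative sketch; the codes below are arbitrary, and production systems use vectorized popcounts over packed codes):

    def hamming(a: int, b: int) -> int:
        """Hamming distance between two binary codes stored as integers."""
        return bin(a ^ b).count("1")

    def rank_by_hamming(query_code: int, db_codes: list) -> list:
        # RPH trains the hash functions so that mistakes near the top of
        # exactly this ranking are penalized most.
        return sorted(range(len(db_codes)),
                      key=lambda i: hamming(query_code, db_codes[i]))

    db = [0b10110010, 0b10110011, 0b01001100]
    print(rank_by_hamming(0b10110110, db))  # indices ordered by code similarity -> [0, 1, 2]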

 

Fresnedo, O.; Gonzalez-Coma, J.P.; Castedo, L., “Design of Analog Joint Source-Channel Coding Systems for Broadcast Channels with MSE Balancing,” in Signal Processing Advances in Wireless Communications (SPAWC), 2015 IEEE 16th International Workshop on, vol., no., pp. 595–599, June 28 2015–July 1 2015. doi:10.1109/SPAWC.2015.7227107
Abstract: We consider the transmission of discrete-time continuous amplitude source information symbols over Multiple-Input Multiple-Output (MIMO) Broadcast Channels (BCs) using analog Joint Source Channel Coding (JSCC). We propose a distributed scheme that consists of single-user analog mappings concatenated with a BC access scheme specifically designed for the BC. Two different access methods are considered depending on the level of channel knowledge at transmission: Code Division Multiple Access (CDMA) and linear Minimum Mean Square Error (MMSE). The resulting analog JSCC systems are optimized to satisfy the requirements corresponding to the user MSEs and the power constraint. The idea of MSE balancing is also introduced to guarantee the feasibility of the optimization problems. Computer simulations show that CDMA provides good performance, although better results are obtained when the linear MMSE codes are employed instead of the CDMA ones.
Keywords: MIMO communication; broadcast channels; channel coding; code division multiple access; least mean squares methods; source coding; CDMA; MIMO channel; MMSE; MSE balancing; analog joint source-channel coding systems; broadcast channels; code division multiple access; discrete time continuous amplitude source information symbols; linear minimum mean square error; multiple input multiple output channel; single user analog mappings; Channel coding; Distortion; Joints; Multiaccess communication; Transmitters (ID#: 15-7120)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7227107&isnumber=7226983

 

Xuelai Sheng; Yu-Hong Wang, “The Research and Optimization of Hybrid System MLD Model Parameters Based on HYSDEL,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 2395–2400, 23–25 May 2015. doi:10.1109/CCDC.2015.7162322
Abstract: The MLD model can be created automatically by the HYSDEL toolbox, which provides great convenience for MLD research. However, the numbers of binary variables, auxiliary continuous variables, and inequality constraints in the resulting MLD model may not satisfy our requirements. In this paper, we propose a three-step optimization of the model parameters. It can effectively reduce the number of binary variables, auxiliary continuous variables, and inequality constraints in the MLD model established by HYSDEL. The proposed algorithm is illustrated on a saturation characteristic function, and promising results are achieved.
Keywords: continuous systems; discrete event systems; optimisation; specification languages; CVDS; DEDS; HYSDEL; MLD model parameter optimization; continuous variable dynamic system; discrete event dynamic system; hybrid system description language; Computational modeling; Linear matrix inequalities; Manuals; Mathematical model; Optimization; Predictive models; Standards; HYSDEL; MLD; Model Optimization; Saturation Characteristic (ID#: 15-7121)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162322&isnumber=7161655

 

Al-Khaleel, M.; Shu-Lin Wu, “Neumann-Neumann Waveform Relaxation Methods for Fractional RC Circuits,” in Information and Communication Systems (ICICS), 2015 6th International Conference on, vol., no., pp. 73–78, 7–9 April 2015. doi:10.1109/IACS.2015.7103205
Abstract: Waveform Relaxation (WR) methods are recognized as efficient solvers for large-scale circuits and have attracted considerable attention in recent years because they are ideally suited to the use of multiple parallel processors for problems with multiple time scales. However, applying classical WR techniques to strongly coupled systems leads to non-uniform convergence; therefore, more uniform WR methods have been developed. This paper generalizes the Neumann-Neumann waveform relaxation (NN-WR) method, invented recently for time-dependent PDEs, to time-fractional circuits, where it seems to be a promising method for circuit simulation. Choosing an RC circuit of infinite size as the model, we perform a convergence analysis for the NN-WR method; this corresponds to the analysis of the method for PDEs at the semi-discrete level. The NN-WR method contains a free parameter, namely β, which has a significant effect on the convergence rate. For PDEs, the analysis at the space-time continuous level shows β = 1/4, while the analysis in this paper shows that, at the semi-discrete level, i.e., for the circuit problem, a better choice exists which leads to much faster convergence in practical computing. A comparison with the so-called Robin WR is also included.
Keywords: RC circuits; large scale integration; partial differential equations; waveform analysis; Neumann-Neumann waveform relaxation; fractional RC circuits; large scale circuits; space-time continuous level; time-dependent PDEs; Capacitors; Communication systems; Convergence; Integrated circuit modeling; Mathematical model; Space heating; Fractional RC circuits; Parallel computing; Parameter optimization; Waveform relaxation (WR) (ID#: 15-7122)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7103205&isnumber=7103173

 

Bin, Liu; Feng, Liu; Shengwei, Mei, “AC-Constrained Optimal Power Flow Considering Wind Power Generation Uncertainty in Radial Power Networks,” in Control Conference (CCC), 2015 34th Chinese, vol., no., pp. 2797–2801, 28–30 July 2015. doi:10.1109/ChiCC.2015.7260066
Abstract: Wind power generation uncertainty poses a great challenge to power system operation and should be considered when making an operation strategy. In this paper, we focus on the optimal power flow (OPF) problem in radial power networks while considering wind power generation uncertainty. To cope with this uncertainty, the stochastic AC-constrained OPF (ACOPF) problem considering both continuous and discrete devices is formulated based on our previous work. The second-order cone relaxation is employed to relax the original mixed integer nonconvex nonlinear programming (MINNLP) problem to a mixed integer second-order cone programming (MISOCP) problem, which can be solved by commercial solvers with high computational efficiency. The modified IEEE 33-bus distribution system is studied in this paper, validating the effectiveness of the proposed model.
Keywords: Computational modeling; Load flow; Optimization; Reactive power; Uncertainty; Wind power generation; optimal power flow; radial power networks; stochastic optimization; uncertainty set; wind power (ID#: 15-7123)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260066&isnumber=7259602
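
The second-order cone relaxation at the heart of this formulation can be illustrated on a single radial line. The sketch below, assuming cvxpy is available, relaxes the branch flow (DistFlow) equation P² + Q² = l·v₀ to the convex inequality P² + Q² ≤ l·v₀; line parameters and loads are illustrative, and the paper's full MISOCP with discrete devices and uncertainty is much richer.

```python
# Minimal SOC relaxation of the branch flow model: one line feeding one load.
import cvxpy as cp

r, x = 0.01, 0.02            # line resistance / reactance (p.u., illustrative)
p_load, q_load = 0.8, 0.3    # load at the receiving bus (p.u.)

P = cp.Variable()            # active power flow into the line
Q = cp.Variable()            # reactive power flow into the line
l = cp.Variable(nonneg=True) # squared line current
v0 = cp.Variable()           # squared sending-end voltage
v1 = cp.Variable()           # squared receiving-end voltage

constraints = [
    v0 == 1.0,                                        # slack bus voltage
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    P - r * l == p_load,                              # receiving-end balance
    Q - x * l == q_load,
    cp.quad_over_lin(cp.hstack([P, Q]), v0) <= l,     # relaxed SOC constraint
    v1 >= 0.9**2, v1 <= 1.1**2,                       # voltage limits
]
prob = cp.Problem(cp.Minimize(P), constraints)        # minimize imported power
prob.solve()
print(f"P = {P.value:.4f}, line losses = {r * l.value:.5f}")
```

At the optimum the relaxed inequality typically binds, which is the exactness property the paper discusses for radial networks.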

 

Leandro, M.A.C.; Colombo Junior, J.R.; Kienitz, K.H., “Robust D-Stability via Discrete Controllers for Continuous Time Uncertain Systems Using LMIs with a Scalar Parameter,” in Control and Automation (MED), 2015 23rd Mediterranean Conference on, vol., no., pp. 644–649, 16–19 June 2015. doi:10.1109/MED.2015.7158819
Abstract: This paper addresses an alternative for the synthesis of a discrete-time stabilizing controller, taking into account requirements of D-stability via Linear Matrix Inequalities (LMIs) with a certain scalar parameter. Considering continuous time systems with polytopic uncertainty, this paper contributes an alternative way to incorporate D-stability requirements in H2 and H∞ discrete-time controller synthesis from continuous-time D-stable regions via Euler’s approximation. From these design requirements, robust controllers were designed and implemented for a case study system of two cars connected through a spring.
Keywords: approximation theory; continuous time systems; control system synthesis; discrete time systems; linear matrix inequalities; robust control; uncertain systems; Euler approximation; LMIs; cars; continuous time uncertain systems; continuous-time D-stable regions; discrete controllers; discrete-time stabilizing controller synthesis; linear matrix inequalities; polytopic uncertainty; robust D-stability; robust controller design; scalar parameter; Approximation methods; Continuous time systems; Optimization; Robustness; Springs; Stability analysis; Uncertainty; D-stability; Linear Matrix Inequalities; Linear Systems; Robust Control (ID#: 15-7124)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158819&isnumber=7158720
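
As background on the LMI machinery such synthesis rests on, the sketch below checks discrete-time Schur stability through a Lyapunov LMI with cvxpy and the SCS solver. It only illustrates the feasibility-test building block; the paper's actual design conditions, polytopic uncertainty handling, and scalar parameter are omitted, and the matrix A is an illustrative assumption.

```python
# Minimal Lyapunov LMI feasibility check: A is Schur stable iff there exists
# P > 0 with A' P A - P < 0.
import cvxpy as cp
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.7]])            # discrete-time system matrix (assumed)
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]   # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)             # assumes the SCS SDP solver is installed
print("Schur stable (LMI feasible):", prob.status == cp.OPTIMAL)
```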

 

Bin Liu; Feng Liu; Shengwei Mei; Yue Chen, “AC-Constrained Economic Dispatch in Radial Power Networks Considering Both Continuous and Discrete Controllable Devices,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 6249–6254, 23–25 May 2015. doi:10.1109/CCDC.2015.7161937
Abstract: Economic dispatch (ED) is widely studied in power system optimization and is a typical application of optimal power flow (OPF). As more distributed generation resources are integrated, e.g., micro-turbines and renewable generators, the AC-constrained ED (ACED) of distribution power networks (typical radial power networks) becomes more controllable, and its operation optimization is essentially a complicated Mixed Integer Nonconvex Nonlinear Programming (MINNLP) problem with discrete controllable devices, e.g., transformers and compensating capacitors. In this paper, we study the ACED problem of radial power networks based on the branch flow model. To make this problem tractable, piecewise linear (PWL) approximation and the latest second-order cone (SOC) relaxation techniques are employed to relax ACED to a Mixed Integer Second-order Cone Programming (MISOCP) problem which can be efficiently solved by commercial solvers. Besides, we also discuss the exactness of the SOC relaxation for the ACED problem of radial power networks based on recent research achievements in this area. The modified IEEE 33-bus system is studied, which validates the effectiveness and high computational efficiency of the proposed method.
Keywords: integer programming; load dispatching; nonlinear programming; power system economics; AC constrained economic dispatch; branch flow model; continuous controllable device; discrete controllable device; distributed power generation; distribution power network; latest second-order cone relaxation technique; mixed integer second-order cone programming; optimal power flow; piecewise linear technique; power system optimization; typical radial power networks; Economics; OPF; economic dispatch; piecewise linear; radial power networks; second-order cone relaxation (ID#: 15-7125)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161937&isnumber=7161655
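
To make the piecewise linear (PWL) ingredient concrete, the sketch below approximates a quadratic generation cost by secants between breakpoints, which upper-bound a convex cost. The breakpoints and the cost function are illustrative assumptions; how the paper actually builds its PWL model is not specified in the abstract.

```python
# Minimal PWL secant approximation of a convex quadratic cost curve.
import numpy as np

def pwl_cost(p, breakpoints, cost):
    """Evaluate the secant-based PWL approximation of cost() at power p."""
    c = cost(breakpoints)                         # cost at the breakpoints
    i = np.clip(np.searchsorted(breakpoints, p) - 1, 0, len(breakpoints) - 2)
    slope = (c[i + 1] - c[i]) / (breakpoints[i + 1] - breakpoints[i])
    return c[i] + slope * (p - breakpoints[i])

cost = lambda p: 100 + 20 * p + 0.05 * p**2       # $/h quadratic cost (assumed)
bp = np.linspace(0, 200, 6)                       # 5 linear segments
for p in (37.5, 120.0):
    print(f"p={p}: exact {cost(p):.2f}, PWL {pwl_cost(p, bp, cost):.2f}")
```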

 

Shuqin Li; Ziyu Shao; Jianwei Huang, “ARM: Anonymous Rating Mechanism for Discrete Power Control,” in Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), 2015 13th International Symposium on, vol., no., pp. 199–206, 25–29 May 2015. doi:10.1109/WIOPT.2015.7151073
Abstract: Wireless interference management through continuous power control has been extensively studied in the literature. However, practical systems often adopt discrete power control with a limited number of power levels and MCSs (Modulation Coding Schemes). In general, discrete power control is NP-hard due to its combinatorial nature. To tackle this challenge, we propose an innovative approach to interference management: ARM (Anonymous Rating Mechanism). Inspired by the success of simple anonymous rating mechanisms on the Internet and in e-commerce, we develop ARM as a distributed near-optimal algorithm for solving the discrete power control problem (i.e., the joint scheduling, power allocation, and modulation-coding adaptation problem) under the physical interference model. We show that ARM achieves close-to-optimal network throughput with very low control overhead. We also characterize the performance gap of ARM due to the loss of rating information, and study the trade-off between this gap and the convergence time of ARM. Through comprehensive simulations under various network scenarios, we find that the optimality gap of ARM is small and that such a small gap is achievable with only a small number of power levels. Furthermore, the performance degradation is marginal if only limited local network information is available.
Keywords: computational complexity; discrete systems; power control; radio networks; radiofrequency interference; telecommunication control; ARM; NP-hard problem; anonymous rating mechanism; continuous power control; discrete power control; distributed near-optimal algorithm; modulation coding scheme; physical interference model; wireless interference management; Algorithm design and analysis; Interference; Level set; Markov processes; Optimization; Power control; Signal to noise ratio (ID#: 15-7126)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7151073&isnumber=7151020
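
The combinatorial problem ARM approximates can be seen in miniature below: each link picks one of a few discrete power levels, and the goal is to maximize sum throughput under a physical (SINR) interference model. This brute-force baseline is only an illustration of the objective, not ARM itself, which is a distributed mechanism; the gains, power levels, and noise floor are illustrative assumptions.

```python
# Exhaustive search over discrete power levels for a 3-link network.
import itertools
import numpy as np

G = np.array([[1.00, 0.10, 0.05],    # G[i, j]: gain from tx j to rx i
              [0.08, 1.00, 0.12],
              [0.06, 0.09, 1.00]])
levels = [0.0, 0.5, 1.0]             # discrete power levels (0 = silent)
noise = 0.05

best, best_p = -np.inf, None
for p in itertools.product(levels, repeat=3):    # 3^3 joint choices
    p = np.array(p)
    interference = G @ p - np.diag(G) * p        # received power minus own signal
    sinr = np.diag(G) * p / (interference + noise)
    rate = np.log2(1 + sinr)                     # Shannon rate per link
    util = rate[p > 0].sum()                     # throughput of active links
    if util > best:
        best, best_p = util, p
print("best power vector:", best_p, "throughput:", round(best, 3))
```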
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 


Edge Detection and Metrics 2015

 

 
SoS Logo

Edge Detection and Metrics

2015


Edge detection is an important issue in image and signal processing. The works cited here look at the development of metrics. These works were presented or published in 2015.



Jaiswal, A.; Garg, B.; Kaushal, V.; Sharma, G.K., “SPAA-Aware 2D Gaussian Smoothing Filter Design Using Efficient Approximation Techniques,” in VLSI Design (VLSID), 2015 28th International Conference on, vol., no., pp. 333–338, 3–7 Jan. 2015. doi:10.1109/VLSID.2015.62
Abstract: The limited battery lifetime and rapidly increasing functionality of portable multimedia devices demand energy-efficient designs. The filters employed in these devices are mainly based on Gaussian smoothing, which is slow and severely affects performance. In this paper, we propose a novel energy-efficient approximate 2D Gaussian smoothing filter (2D-GSF) architecture by exploiting “nearest pixel approximation” and rounding off Gaussian kernel coefficients. The proposed architecture significantly improves Speed-Power-Area-Accuracy (SPAA) metrics in designing energy-efficient filters. The efficacy of the proposed approximate 2D-GSF is demonstrated on a real application, namely edge detection. The simulation results show 72%, 79% and 76% reductions in area, power and delay, respectively, with an acceptable 0.4 dB loss in PSNR as compared to the well-known approximate 2D-GSF.
Keywords: Gaussian processes; approximation theory; edge detection; smoothing methods; SPAA metric; SPAA-aware 2D Gaussian smoothing filter design; approximation technique; edge detection; energy-efficient approximate 2D GSF architecture; energy-efficient approximate 2D Gaussian smoothing filter architecture; energy-efficient designs; limited battery lifetime; nearest pixel approximation; portable multimedia devices; rounding-off Gaussian kernel coefficient; speed-power-area-accuracy; Adders; Approximation methods; Complexity theory; Computer architecture; Image edge detection; Kernel; Smoothing methods; Approximate design; Edge-detection; Energy-efficiency; Error Tolerant Applications; Gaussian Smoothing Filter (ID#: 15-7061)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031756&isnumber=7031671
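
The coefficient-rounding idea can be tried in software: quantizing Gaussian kernel weights to powers of two lets hardware replace multipliers with shifters. The sketch below compares exact and quantized smoothing by PSNR; the kernel size, sigma, and quantization rule are illustrative assumptions, and the paper's nearest-pixel approximation is not reproduced.

```python
# Minimal sketch: snap Gaussian kernel coefficients to powers of two and
# measure the quality cost of the approximation.
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def round_to_pow2(k):
    """Snap each coefficient to the nearest power of two, then renormalize."""
    q = 2.0 ** np.round(np.log2(k))
    return q / q.sum()

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
exact = convolve(img, gaussian_kernel())
approx = convolve(img, round_to_pow2(gaussian_kernel()))
mse = np.mean((exact - approx) ** 2)
print(f"PSNR of approximation vs. exact: {10 * np.log10(255**2 / mse):.1f} dB")
```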

 

Rui Xu; Naman, A.T.; Mathew, R.; Rufenacht, D.; Taubman, D., “Motion Estimation with Accurate Boundaries,” in Picture Coding Symposium (PCS), 2015, vol., no., pp. 184–188, May 31 2015–June 3 2015. doi:10.1109/PCS.2015.7170072
Abstract: This paper investigates several techniques that increase the accuracy of motion boundaries in the motion fields estimated by a local dense estimation scheme. In particular, we examine two matching metrics: one is MSE in the image domain, and the other is a recently proposed multiresolution metric that has been shown to produce more accurate motion boundaries. We also examine several different edge-preserving filters. The edge-aware moving average filter, proposed in this paper, takes an input image and the result of an edge detection algorithm, and outputs an image that is smooth except at the detected edges. Compared to the adoption of edge-preserving filters, we find that matching metrics play a more important role in estimating accurate and compressible motion fields. Nevertheless, the proposed filter may provide further improvements in the accuracy of the motion boundaries. These findings can be very useful for a number of recently proposed scalable interactive video coding schemes.
Keywords: edge detection; image filtering; image matching; image resolution; motion estimation; moving average processes; MSE; compressible motion field; edge detection algorithm; edge-aware moving average filter; edge-preserving filters; image domain; local dense estimation scheme; matching metrics; motion boundary; motion estimation; multiresolution metric; scalable interactive video coding scheme; Accuracy; Image coding; Image edge detection; Joints; Measurement; Motion estimation; Video coding (ID#: 15-7062)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170072&isnumber=7170026
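
An edge-aware smoothing step in the spirit described — smooth everywhere except at detected edges — can be sketched with normalized convolution: average only over pixels not flagged as edges. The edge detector, threshold, and window size below are illustrative; the paper's filter differs in its details.

```python
# Minimal edge-aware averaging via normalized (masked) convolution.
import numpy as np
from scipy.ndimage import uniform_filter, sobel

img = np.random.default_rng(1).uniform(0, 1, (128, 128))
grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
mask = (grad < np.percentile(grad, 90)).astype(float)   # 1 = non-edge pixel

num = uniform_filter(img * mask, size=5)   # windowed sum of non-edge pixels
den = uniform_filter(mask, size=5)         # windowed count of non-edge pixels
smooth = np.where(den > 0, num / np.maximum(den, 1e-9), img)
out = np.where(mask > 0, smooth, img)      # keep detected edge pixels untouched
```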

 

Haj-Hassan, H.; Chaddad, A.; Tanougast, C.; Harkouss, Y., “Comparison of Segmentation Techniques for Histopathological Images,” in Digital Information and Communication Technology and its Applications (DICTAP), 2015 Fifth International Conference on, vol., no., pp. 80–85, April 29 2015–May 1 2015. doi:10.1109/DICTAP.2015.7113175
Abstract: Image segmentation is widely used in medical imaging applications to detect anatomical structures and regions of interest. This paper surveys numerous segmentation models used in the biomedical field. We organize the segmentation techniques into four approaches, namely thresholding, edge-based, region-based and snake. These techniques have been compared through simulation results, demonstrating the feasibility of medical image segmentation. The snake approach demonstrated high performance in detecting irregular shapes such as the carcinoma cell type. This study showed the advantage of the deformable segmentation technique, which segments abnormal cells with Dice similarity values over 83%.
Keywords: biomedical optical imaging; cellular biophysics; edge detection; gradient methods; image segmentation; medical image processing; object detection; vectors; anatomical structure detection; biomedical field; carcinoma cell type; dice similarity value; edge-based approach; gradient vector; histopathological image segmentation techniques; irregular shape detection; medical image segmentation; medical imaging applications; region-based approach; regions-of-interest detection; snake approach; thresholding approach; Anatomical structure; Biological system modeling; Decision support systems; Image edge detection; Image segmentation; Simulation; Segmentation; biomedical; edge; region; snake; thresholding (ID#: 15-7063)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113175&isnumber=7113160
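
For reference, the Dice similarity coefficient used to score segmentations against ground truth is straightforward to compute; the binary masks below are illustrative.

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
import numpy as np

def dice(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

seg = np.zeros((10, 10)); seg[2:7, 2:7] = 1   # predicted lesion mask (toy)
gt = np.zeros((10, 10));  gt[3:8, 3:8] = 1    # ground-truth mask (toy)
print(f"Dice = {dice(seg, gt):.3f}")
```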

 

Mukherjee, M.; Edwards, J.; Kwon, H.; La Porta, T.F., “Quality of Information-Aware Real-Time Traffic Flow Analysis and Reporting,” in Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, vol., no., pp. 69–74, 23–27 March 2015. doi:10.1109/PERCOMW.2015.7133996
Abstract: In this paper we present a framework for Quality of Information (QoI)-aware networking. QoI quantifies how useful a piece of information is for a given query or application. Herein, we present a general QoI model, as well as a specific example instantiation that carries throughout the rest of the paper. In this model, we focus on the tradeoffs between precision and accuracy. As a motivating example, we look at traffic video analysis. We present simple algorithms for deriving various traffic metrics from video, such as vehicle count and average speed. We implement these algorithms both on a desktop workstation and on a less-capable mobile device. We then show how QoI-awareness enables end devices to make intelligent decisions about how to process queries and form responses, such that huge bandwidth savings are realized.
Keywords: mobile computing; traffic information systems; video signal processing; QoI; average speed; bandwidth savings; desktop workstation; end devices; form responses; information-aware real-time traffic flow analysis; mobile device; quality of information-aware networking; traffic metrics; traffic video analysis; vehicle count; Accuracy; Cameras; Image edge detection; Quality of service; Sensors; Streaming media; Vehicles (ID#: 15-7064)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133996&isnumber=7133953

 

Lokhande, S.S.; Dawande, N.A., “A Survey on Document Image Binarization Techniques,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 742–746, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.148
Abstract: Document image binarization is performed to segment foreground text from the background in badly degraded documents. In this paper, a comprehensive survey has been conducted on some state-of-the-art document image binarization techniques. After describing these document image binarization techniques, their performance has been compared using various evaluation metrics that are widely used for document image analysis and recognition. On the basis of this comparison, it has been found that the adaptive contrast method is the best-performing method. Accordingly, the partial results that we have obtained for the adaptive contrast method are stated, and the mathematical model and block diagram of the adaptive contrast method are described in detail.
Keywords: document image processing; image recognition; image segmentation; text analysis; adaptive contrast method; background text segmentation; document image analysis; document image binarization techniques; document image recognition; foreground text segmentation; mathematical model; performance evaluation metrics; Distortion; Image edge detection; Image segmentation; Mathematical model; Measurement; Text analysis; degraded document image; document image binarization; image contrast; segmentation (ID#: 15-7065)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155946&isnumber=7155781

 

Balodi, A.; Dewal, M.L.; Rawat, A., “Comparison of Despeckle Filters for Ultrasound Images,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1919–1924, 11–13 March 2015. doi: (not provided)
Abstract: A comparative study of despeckle filters for ultrasound images is presented in this paper. Ultrasound images are corrupted by speckle noise, which has limited the growth of automatic diagnosis for ultrasound images. This paper compiles twelve despeckling filters for speckle noise reduction. A comparative study has been done in terms of preserving texture features and edges. Six stabilized evaluation metrics, namely signal to noise ratio (SNR), root mean square error (RMSE), peak signal to noise ratio (PSNR), structural similarity (SSIM) index, beta metric (β) and figure of merit (FoM), are calculated to investigate the performance of the despeckle filters.
Keywords: biomedical ultrasonics; image filtering; image texture; mean square error methods; medical image processing; speckle; ultrasonic imaging; FoM; PSNR; RMSE; SSIM; automatic diagnosis; beta metric; despeckle filters; despeckling filters; figure of merit; peak signal to noise ratio; root mean square error; speckle noise reduction; structural similarity index; ultrasound images; Image edge detection; Measurement; Optical filters; Signal to noise ratio; Speckle; Wiener filters; Beta metric; Despeckle; FoM; PSNR; RMSE; SNR; SSIM (ID#: 15-7066)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100578&isnumber=7100186
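
Three of these metrics have simple closed forms and are sketched below for a filtered image against a reference; SSIM, the beta metric, and FoM need more machinery and are omitted.

```python
# SNR, RMSE, and PSNR for comparing a despeckled image to a reference.
import numpy as np

def rmse(ref, img):
    return np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    return 20 * np.log10(peak / rmse(ref, img))

def snr(ref, img):
    noise = ref.astype(float) - img.astype(float)
    return 10 * np.log10(np.sum(ref.astype(float) ** 2) / np.sum(noise ** 2))
```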

 

Marburg, A.; Hayes, M.P., “SMARTPIG: Simultaneous Mosaicking and Resectioning Through Planar Image Graphs,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, vol., no., pp. 5767–5774, 26–30 May 2015. doi:10.1109/ICRA.2015.7140007
Abstract: This paper describes Smartpig, an algorithm for the iterative mosaicking of images of a planar surface using a unique parameterization which decomposes inter-image projective warps into camera intrinsics, fronto-parallel projections, and inter-image similarities. The constraints resulting from the inter-image alignments within an image set are stored in an undirected graph structure allowing efficient optimization of image projections on the plane. Camera pose is also directly recoverable from the graph, making Smartpig a feasible solution to the problem of simultaneous location and mapping (SLAM). Smartpig is demonstrated on a set of 144 high resolution aerial images and evaluated with a number of metrics against ground control.
Keywords: SLAM (robots); cameras; directed graphs; image segmentation; iterative methods; robot vision; SLAM; Smartpig algorithm; camera pose; fronto-parallel projections; ground control; high resolution aerial images; image iterative mosaicking; image projection optimization; image set; inter-image alignments; inter-image projective decomposition; inter-image similarity; intrinsic camera; planar image graphs; planar surface; simultaneous location and mapping; undirected graph structure; Cameras; Cost function; Image edge detection; Measurement; Silicon; Three-dimensional displays (ID#: 15-7067)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140007&isnumber=7138973

 

Kerouh, F.; Serir, A., “A No Reference Perceptual Blur Quality Metric in the DCT Domain,” in Control, Engineering & Information Technology (CEIT), 2015 3rd International Conference on, vol., no., pp. 1–6, 25–27 May 2015. doi:10.1109/CEIT.2015.7233043
Abstract: Blind objective metrics that automatically quantify perceived image quality degradation introduced by blur are highly beneficial for current digital imaging systems. We present, in this paper, a perceptual no-reference blur assessment metric developed in the frequency domain. Since blurring especially affects edges and fine image details, which represent the high-frequency components of an image, the main idea is to perceptually analyse the impact of blur distortion on high frequencies using the Discrete Cosine Transform (DCT) and the just noticeable blur (JNB) concept, which relies on the human visual system. Comprehensive testing demonstrates the good consistency of the proposed Perceptual Blind Blur Quality Metric (PBBQM) with subjective quality scores, as well as satisfactory performance in comparison with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.
Keywords: blind source separation; discrete cosine transforms; image restoration; DCT domain; JNB; PBBQM; blind blur quality measures; blind objective metrics; blur distortion; blurring affects; digital imaging systems; discrete cosine transform; edges; fine image details; frequency domain; human visual system; image high frequency components; just noticeable blur concept; no reference perceptual blur quality metric; perceived image quality degradation; perceptual blind blur quality metric; perceptual no reference blur assessment metric; subjective quality scores; Databases; Discrete cosine transforms; Frequency-domain analysis; Image edge detection; Visual systems; Wavelet transforms; Blurring; Discrete Cosine Transform (DCT); Just Noticeable Blur threshold (JNB); blind quality metric (ID#: 15-7068)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7233043&isnumber=7232976

 

Windisch, G.; Kozlovszky, M., “Image Sharpness Metrics for Digital Microscopy,” in Applied Machine Intelligence and Informatics (SAMI), 2015 IEEE 13th International Symposium on, vol., no., pp. 273–276, 22–24 Jan. 2015. doi:10.1109/SAMI.2015.7061889
Abstract: Image sharpness measurement is an important part of many image processing applications. Multiple algorithms have been proposed and evaluated in the past to measure image sharpness, but they were developed with out-of-focus photographs in mind and do not work as well with images taken using a digital microscope. In this article we show the difference between images taken with digital cameras, images taken with a digital microscope, and artificially blurred images. The conventional sharpness measures are executed on all these categories to measure the difference, and a standard image set taken with a digital microscope is proposed and described to serve as a common baseline for further sharpness measures in the field.
Keywords: cameras; feature extraction; image processing; microscopy; photography; artificially blurred images; digital cameras; digital microscopy; image processing applications; image sharpness measurements; image sharpness metrics; out-of-focus photographs; Digital cameras; Image databases; Image edge detection; Measurement; Microscopy; Noise (ID#: 15-7069)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7061889&isnumber=7061844
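
One conventional sharpness measure of the kind such comparisons cover is the variance of the Laplacian response, sketched below on a synthetic image; higher values indicate sharper content. This is a generic baseline, not necessarily one of the paper's exact measures.

```python
# Variance-of-Laplacian sharpness: blur suppresses high frequencies, so the
# Laplacian response of a blurred image has lower variance.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def sharpness(img):
    return laplace(img.astype(float)).var()

rng = np.random.default_rng(2)
sharp_img = rng.uniform(0, 255, (128, 128))
blurred = gaussian_filter(sharp_img, sigma=2.0)
print(sharpness(sharp_img) > sharpness(blurred))   # True
```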

 

Khademi, A.; Moody, A.R., “Multiscale Partial Volume Estimation for Segmentation of White Matter Lesions Using Flair MRI,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, vol., no., pp. 568–571, 16–19 April 2015. doi:10.1109/ISBI.2015.7163937
Abstract: For robust segmentation of white matter lesions (WML), a partial volume fraction (PVF) estimation approach was previously developed for FLAIR MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each FLAIR MRI using an adaptively defined global edge map that exploits a novel relationship between edge content and PVA. Although promising, that approach required predefined noise filter parameters, and the edge metric was computed on a single scale, which limits wide-scale implementation. To handle these challenges, this work defines a novel multiscale PVF estimation approach based on scale-space derivatives. The result is a scale-invariant representation of edge content which is used to estimate a multiscale (scale-invariant) PV fraction. Validation results show the method performs better than the previous version.
Keywords: biomedical MRI; image segmentation; medical image processing; FLAIR MRI; edge content scale-invariant representation; edge metrics; global edge map; multiscale PV fraction; multiscale partial volume estimation; multispectral scan; noise filter parameter; partial volume fraction estimation; scale space derivative; white matter lesion segmentation; Image edge detection; Image segmentation; Lesions; Magnetic resonance imaging; Noise; Volume measurement; FLAIR; MRI; WML; partial volume (ID#: 15-7070)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163937&isnumber=7163789

 

Hakim, Aayesha; Talele, K.T.V.; Harsh, Rajesh; Verma, Dharmesh, “Electronic Portal Image Processing for High Precision Radiotherapy,” in Computation of Power, Energy Information and Communication (ICCPEIC), 2015 International Conference on, vol., no., pp. 0007–0012, 22–23 April 2015. doi:10.1109/ICCPEIC.2015.7259436
Abstract: The advent of the a-Si Electronic Portal Imaging Device (EPID) has provided an important tool for clinicians to verify the location of the radiation therapy beam with respect to the patient anatomy. However, Electronic Portal Images (EPI) are blurred and suffer from low contrast due to Compton scattering, and it is difficult to differentiate between organs and tissues in low contrast images. Better in-treatment images are needed to extract relevant anatomical features for reliable patient set-up verification. The goal of this research work was to inspect several image processing techniques for contrast enhancement and edge detection/sharpening on EPI in DICOM format and improve their visual aspects for better diagnosis and intervention. We propose a hybrid approach to enhance the quality of electronic portal images by using the CLAHE algorithm and median filtering followed by image sharpening. Results suggest impressive improvement in image quality by the proposed method. To quantify the degree of enhancement or degradation for the various techniques experimentally, metrics like RMSE and PSNR are compared.
Keywords: Cancer; DICOM; Head; Histograms; Image edge detection; Neck; Thorax; DICOM; Electronic Portal Imaging Device (EPID); image enhancement; image processing (ID#: 15-7071)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7259436&isnumber=7259434
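
The named enhancement chain — CLAHE, then median filtering, then sharpening — maps directly onto OpenCV primitives. The sketch below shows one such pipeline; the clip limit, tile size, sharpening weights, and file names are illustrative assumptions, not the paper's tuned settings.

```python
# Minimal CLAHE + median + unsharp-mask pipeline on an 8-bit grayscale image.
import cv2

img = cv2.imread("portal_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)                 # contrast-limited equalization
denoised = cv2.medianBlur(enhanced, 3)      # suppress impulse noise
blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)  # unsharp mask

cv2.imwrite("portal_enhanced.png", sharpened)
```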

 

Mastan Vali, S.K..; Naga Kishore, K.L.; Prathibha, G., “Robust Image Watermarking Using Tetrolet Transform,” in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, vol., no., pp. 1–5, 24–25 Jan. 2015. doi:10.1109/EESCO.2015.7253651
Abstract: This paper proposes a new watermarking technique based on the tetrolet domain. The tetrolet transform is a new adaptive Haar-type wavelet transform based on tetrominoes. It avoids Gibbs oscillation because it applies the Haar function at the edges of the image. Our proposed watermarking algorithm embeds the watermark into tetrolet coefficients which are selected by considering tetrominoes of different shapes. We evaluated the effectiveness of the watermarking approach by considering quality metrics like RMSE, PSNR and robustness parameters. The experimental results reveal that the proposed watermarking scheme is robust against common image processing attacks.
Keywords: Discrete wavelet transforms; Image edge detection; Robustness; Watermarking; Adaptive Haar type wavelets; PSNR; RMSE; Robustness; Tetrolet Transform; Tetrominoes (ID#: 15-7072)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253651&isnumber=7253613

 

Sandic-Stankovic, D.; Kukolj, D.; Le Callet, P., “DIBR Synthesized Image Quality Assessment Based on Morphological Wavelets,” in Quality of Multimedia Experience (QoMEX), 2015 Seventh International Workshop on, vol., no., pp. 1–6, 26–29 May 2015. doi:10.1109/QoMEX.2015.7148143
Abstract: Most Depth Image Based Rendering (DIBR) techniques produce synthesized images which contain nonuniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels. In this paper, a morphological wavelet peak signal-to-noise ratio measure, MW-PSNR, based on morphological wavelet decomposition is proposed to tackle the evaluation of DIBR synthesized images. It is shown that MW-PSNR achieves much higher correlation with human judgment compared to state-of-the-art image quality measures in this context.
Keywords: edge detection; image filtering; rendering (computer graphics); wavelet transforms; DIBR synthesized image evaluation; DIBR synthesized image quality assessment; MW-PSNR; depth image based rendering; edge coherency; image quality metrics; morphological filters; morphological wavelet decomposition; morphological wavelet peak signal-to-noise ratio measure; morphological wavelets; nonuniform geometric distortions; Distortion; Image quality; Lattices; Measurement; Signal resolution; Wavelet transforms; DIBR synthesized image quality assessment; Multi-scale PSNR; lifting scheme; morphological wavelets; nonseparable morphological wavelet decomposition; quincunx sampling (ID#: 15-7073)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148143&isnumber=7148077

 

Kyoungmin Lee; Kolsch, M., “Shot Boundary Detection with Graph Theory Using Keypoint Features and Color Histograms,” in Applications of Computer Vision (WACV), 2015 IEEE Winter Conference on,  vol., no., pp. 1177–1184, 5–9 Jan. 2015. doi:10.1109/WACV.2015.161
Abstract: The TRECVID report of 2010 [14] evaluated video shot boundary detectors as achieving “excellent performance on [hard] cuts and gradual transitions.” Unfortunately, while re-evaluating the state of the art of the shot boundary detection, we found that they need to be improved because the characteristics of consumer-produced videos have changed significantly since the introduction of mobile gadgets, such as smartphones, tablets and outdoor activity purposed cameras, and video editing software has been evolving rapidly. In this paper, we evaluate the best-known approach on a contemporary, publicly accessible corpus, and present a method that achieves better performance, particularly on soft transitions. Our method combines color histograms with key point feature matching to extract comprehensive frame information. Two similarity metrics, one for individual frames and one for sets of frames, are defined based on graph cuts. These metrics are formed into temporal feature vectors on which a SVM is trained to perform the final segmentation. The evaluation on said “modern” corpus of relatively short videos yields a performance of 92% recall (at 89% precision) overall, compared to 69% (91%) of the best-known method.
Keywords: edge detection; graph theory; image colour analysis; image matching; image segmentation; support vector machines; video signal processing; SVM; color histograms; comprehensive frame information; graph cuts; graph theory; key point feature matching; segmentation; temporal feature vectors; video shot boundary detection; Color; Feature extraction; Histograms; Image color analysis; Measurement; Support vector machines; Vectors (ID#: 15-7074)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7046015&isnumber=7045853

 

Saleh, F.S.; Azmi, R., “Automated Lesion Border Detection of Dermoscopy Images Using Spectral Clustering,” in Pattern Recognition and Image Analysis (IPRIA), 2015 2nd International Conference on, vol., no., pp. 1–6, 11–12 March 2015. doi:10.1109/PRIA.2015.7161640
Abstract: Skin lesion segmentation is one of the most important steps in automated early skin cancer detection, since the accuracy of the following steps significantly depends on it. In this paper we present a novel approach based on spectral clustering that provides accurate and effective segmentation of dermoscopy images. In the proposed method, an optimized clustering algorithm extracts lesion borders using a spectral graph partitioning algorithm in an appropriate color space, considering the special characteristics of dermoscopy images. The proposed segmentation method has been applied to 170 dermoscopic images and evaluated with two metrics, using segmentations provided by an experienced dermatologist as the ground truth. The experimental results demonstrate that complex contours are distinguished correctly, while challenging features of skin lesions, such as topological changes, weak or false contours, and asymmetry in color and shape, are handled as expected in comparison with four state-of-the-art methods.
Keywords: cancer; edge detection; feature extraction; image colour analysis; image segmentation; medical image processing; pattern clustering; shape recognition; skin; automated lesion border detection; color asymmetry; dermoscopy image segmentation; lesion border extraction; shape asymmetry; skin cancer detection; spectral clustering; spectral graph partitioning algorithm; Clustering algorithms; Hair; Image color analysis; Image segmentation; Lesions; Malignant tumors; Skin; Dermoscopic Images; Segmentation; Spectral Clustering; Uniform color space; automated early skin cancer detection (ID#: 15-7075)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161640&isnumber=7161613

 

Moradi, M.; Falahati, A.; Shahbahrami, A.; Zare-Hassanpour, R., “Improving Visual Quality in Wireless Capsule Endoscopy Images with Contrast-Limited Adaptive Histogram Equalization,” in Pattern Recognition and Image Analysis (IPRIA), 2015 2nd International Conference on, vol., no., pp. 1–5, 11–12 March 2015. doi:10.1109/PRIA.2015.7161645
Abstract: Wireless Capsule Endoscopy (WCE) is a noninvasive device for the detection of gastrointestinal problems, especially small bowel diseases such as polyps, which cause gastrointestinal bleeding. The quality of WCE images is very important for diagnosis. In this paper, a new method is proposed to improve the quality of WCE images. In our proposed method, the Removing Noise and Contrast Enhancement (RNCE) algorithm is used. The algorithm has been implemented and tested on real images. The quality metrics used for performance evaluation of the proposed method are the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR) and Edge Strength Similarity for Image (ESSIM). The results obtained from SSIM, PSNR and ESSIM indicate that the implemented RNCE method improves the quality of WCE images significantly.
Keywords: biological organs; biomedical optical imaging; diseases; endoscopes; image denoising; image enhancement; medical image processing; WCE image quality; bowel diseases; contrast enhancement algorithm; contrast-limited adaptive histogram equalization; diagnosis; edge strength similarity-for-image; gastrointestinal bleeding; gastrointestinal problem detection; noninvasive device; peak signal-to-noise ratio; performance evaluation; polyps; quality metrics; removing noise algorithm; similarity index measure; visual quality; wireless capsule endoscopy images; Diseases; Endoscopes; Gastrointestinal tract; Imaging; PSNR; Wireless communication; Contrast Enhancement; Medical Image Processing; Wireless Capsule Endoscopy (WCE) (ID#: 15-7076)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161645&isnumber=7161613

 

Gomez-Valverde, J.J.; Ortuno, J.E.; Guerra, P.; Hermann, B.; Zabihian, B.; Rubio-Guivernau, J.L.; Santos, A.; Drexler, W.; Ledesma-Carbayo, M.J., “Evaluation of Speckle Reduction with Denoising Filtering in Optical Coherence Tomography for Dermatology,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, vol., no., pp. 494–497, 16–19 April 2015. doi:10.1109/ISBI.2015.7163919
Abstract: Optical Coherence Tomography (OCT) has shown a great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and could limit the interpretation and detection capabilities. In this work we evaluate various denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and the Block Matching 3-D (BM3D) as 2D denoising filters and the Wavelet Multiframe algorithm considering adjacent B-scans achieved the best results in terms of the enhancement quality metrics used. Our results suggest that a combination of 2D filtering followed by a wavelet based compounding algorithm may significantly reduce speckle, increasing signal-to-noise and contrast-to-noise ratios, without the need of extra acquisitions of the same frame.
Keywords: biomedical optical imaging; diseases; filtering theory; image denoising; image enhancement; image matching; medical image processing; optical tomography; skin; wavelet transforms; 2D denoising filters; 3D block matching; BM3D; OCT images; contrast-to-noise ratios; dermatological OCT B-scans; enhancement quality metrics; high edge-preserving potential; optical coherence tomography; sigma filter; signal-to-noise ratios; skin disease diagnosis; speckle noise; speckle reduction; wavelet multiframe algorithm; wavelet-based compounding algorithm; Adaptive optics; Biomedical optical imaging; Digital filters; Optical filters; Optical imaging; Speckle; Tomography; Optical Coherence Tomography; denoising; dermatology; speckle (ID#: 15-7077)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163919&isnumber=7163789 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Edge Detection and Security 2015

 

 
SoS Logo

Edge Detection and Security

2015


Edge detection is an important issue in image and signal processing. The works cited here look at the development of various security methods and approaches. These works were presented or published in 2015.



Yassin, A.A.; Hussain, A.A.; Mutlaq, K.A.-A., “Cloud Authentication Based on Encryption of Digital Image Using Edge Detection,” in Artificial Intelligence and Signal Processing (AISP), 2015 International Symposium on, vol., no., pp. 1–6, 3–5 March 2015. doi:10.1109/AISP.2015.7123517
Abstract: The security of cloud computing is among the most important concerns that may delay its widespread adoption. Authentication is the central part of cloud security, aiming to ensure that only valid users gain access to data stored in the cloud. There are several authentication schemes based on username/password, but they are considered weak methods of cloud authentication. At the same time, digitized images have become highly vulnerable to malicious attacks in cloud computing. Our proposed scheme focuses on two-factor authentication that uses partial image encryption to overcome the aforementioned issues and the drawbacks of existing authentication schemes. As the second factor, we use a fast partial image encryption scheme based on Canny’s edge detection combined with symmetric encryption. In this scheme, the edge pixels of the image are encrypted using a stream cipher, as they hold most of the image’s data, and this is then used to authenticate valid users. The security analysis and experimental results show that our work strikes a good balance between security and performance for image encryption in a cloud computing environment.
Keywords: cloud computing; cryptography; edge detection; Canny edge detection; cloud authentication; cloud computing security; digital image partial encryption; image digitization; stream cipher; symmetric encryption; two-factor authentication; Authentication; Cloud computing; Digital images; Encryption; Image edge detection; Cloud Computing; Edge Detection; Image encryption; Password; Service Provider (ID#: 15-7108)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7123517&isnumber=7123478
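
The partial-encryption idea — detect edge pixels with Canny and encrypt only those — can be sketched as below. A toy keyed PRNG stands in for a real stream cipher such as ChaCha20, the thresholds and file name are illustrative, and the paper's full two-factor authentication protocol is not shown.

```python
# Minimal Canny-based partial image encryption: XOR a keystream into the
# edge pixels only; decryption repeats the XOR with the same keystream.
import cv2
import numpy as np

img = cv2.imread("secret.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
edges = cv2.Canny(img, 100, 200) > 0                   # boolean edge mask

key = 1234                                             # shared secret (toy PRNG,
stream = np.random.default_rng(key).integers(          # not a real cipher)
    0, 256, int(edges.sum()), dtype=np.uint8)

cipher = img.copy()
cipher[edges] ^= stream            # encrypt edge pixels only

# The receiver needs the same key and the edge mask (transmitted or
# re-derivable) to invert the XOR.
plain = cipher.copy()
plain[edges] ^= stream
assert np.array_equal(plain, img)
```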

 

Al-Dmour, H.; Al-Ani, A., “Quality Optimized Medical Image Steganography Based on Edge Detection and Hamming Code,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, vol., no., pp. 1486–1489, 16–19 April 2015. doi:10.1109/ISBI.2015.7164158
Abstract: A Picture Archiving and Communication System (PACS) is a technology designed to store and transmit digitized medical images over a public network for certain uses. One of the main concerns with most existing systems is that little attention has been paid to the security and protection of patients’ information. Accordingly, there has been increased interest in recent years in enhancing the confidentiality of patients’ information. This paper introduces a high-imperceptibility digital steganography method that hides Electronic Patient Records (EPR) in a medical image without modifying its important part. This method utilizes edge detection to identify and embed secret data in sharp regions of the image, as the human visual system (HVS) is less sensitive to changes in high-contrast areas of images compared to smooth areas. Moreover, a Hamming code that embeds 3 secret message bits into 4 bits of the cover image is utilized, as this helps enhance the quality of the produced images. We hide the EPR in the Region of Non-Interest (RONI) to protect the decision area, i.e., the Region of Interest (ROI), which is essential for diagnosis. The effectiveness of the proposed scheme is proven through the well-known imperceptibility measure of Peak Signal-to-Noise Ratio (PSNR), considering different message lengths.
Keywords: Hamming codes; PACS; data protection; edge detection; electronic health records; image coding; image enhancement; medical image processing; security of data; steganography; EPR; Electronic Patient Records; HVS; Hamming code; PACS; PSNR; Peak Signal-to-Noise Ratio; Picture Archiving and Communication System; ROI; RONI; Region of Interest; Region of NonInterest; cover image; decision area; digitized medical images; edge detection; high contrast areas; high imperceptibility digital steganography method; human visual system; image quality enhancement; imperceptibility measure; message length; patient information confidentiality; patient information protection; patient information security; public network; quality optimized medical image steganography; secret data; secret message; smooth areas; Cryptography; Image edge detection; Medical diagnostic imaging; Picture archiving and communication systems; Watermarking; EPR; HVS; Hamming code; MSE; PACS; PSNR; RONI and ROI; cost function; digital steganography; edge detection (ID#: 15-7079)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164158&isnumber=7163789
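
The mechanism behind Hamming-code embedding is syndrome (matrix) coding: choose which cover bit to flip so that the parity-check syndrome of the stego bits equals the message. The sketch below uses the classic [7,4] construction, hiding 3 message bits in 7 cover LSBs with at most one flip; the paper's 3-bits-into-4 variant uses different parameters, so this only illustrates the principle.

```python
# Syndrome embedding with the [7,4] Hamming parity-check matrix H, whose
# columns are the binary representations of 1..7.
import numpy as np

H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in range(3)])  # 3x7

def embed(cover_bits, msg_bits):
    s = (H @ cover_bits + msg_bits) % 2             # syndrome mismatch
    stego = cover_bits.copy()
    if s.any():
        pos = int(s[0] + 2 * s[1] + 4 * s[2]) - 1   # column of H equal to s
        stego[pos] ^= 1                             # flip exactly one bit
    return stego

def extract(stego_bits):
    return (H @ stego_bits) % 2                     # syndrome = message

cover = np.array([1, 0, 1, 1, 0, 0, 1])             # LSBs of 7 cover pixels
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
assert np.array_equal(extract(stego), msg)
print("bits changed:", int((stego != cover).sum())) # at most 1
```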

 

Rad, R.M.; KokSheik Wong, “Digital Image Forgery Detection by Edge Analysis,” in Consumer Electronics - Taiwan (ICCE-TW), 2015 IEEE International Conference on, vol., no., pp. 19–20, 6–8 June 2015. doi:10.1109/ICCE-TW.2015.7216848
Abstract: The advent of user-friendly yet powerful editing software has cast doubt on the authenticity of digital images. Therefore, developing reliable detection techniques is of great importance for verifying the originality of a given image. In this work, a forgery detection technique based on the analysis of edge information is proposed. Unlike conventional methods, the proposed technique is not restricted to the traces left by the act of double compression; instead, it allows the input image to be singly compressed or uncompressed. Experimental results confirm that the proposed method is able to localize the forged area when the forged image is not double compressed.
Keywords: data compression; fraud; image coding; digital image authenticity; digital image forgery detection; double image compression; user-friendly editing softwares; Digital images; Discrete cosine transforms; Forgery; Image coding; Image edge detection; Security; Splicing (ID#: 15-7080)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7216848&isnumber=7216784

 

Chopade, P.; Zhan, J.; Bikdash, M., “Node Attributes and Edge Structure for Large-Scale Big Data Network Analytics and Community Detection,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–8, 14–16 April 2015. doi:10.1109/THS.2015.7225331
Abstract: Identifying network communities is one of the most important tasks when analyzing complex networks. Most of these networks possess a certain community structure that has substantial importance in building an understanding of the dynamics of the large-scale network. Intriguingly, such communities appear to be connected with a unique spectral property of the graph Laplacian of the adjacency matrix, and we exploit this connection by using a modified relationship between the Laplacian and the adjacency matrix. We propose modularity optimization based on a greedy agglomerative method, coupled with fast unfolding of communities in large-scale networks using the Louvain community-finding method. Our modified algorithm is linearly scalable for efficient identification of communities in huge directed/undirected networks. The proposed algorithm shows great performance and scalability on benchmark networks in simulations and successfully recovers communities in real network applications. In this paper, we develop communities from node attributes and edge structure. The modified algorithm statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection and also helps identify the robustness of the network structure. We also show that any community must contain a dense Erdos-Renyi (ER) subgraph. We carried out comparisons of the Chung and Lu (CL) and Block Two-Level Erdos-Renyi (BTER) models on four real-world data sets. The results demonstrate that our approach accurately captures the observable properties of many real-world networks.
Keywords: Big Data; complex networks; graph theory; large-scale systems; matrix algebra; optimisation; BTER models; adjacency matrix; block two-level Erdos-Renyi models; community detection; complex networks; dense Erdos-Renyi subgraph; edge structure; graph Laplacian; greedy agglomerative method; large-scale big data network analytics; large-scale network; modularity optimization; network communities; node attributes; unique spectral property; Clustering algorithms; Computer science; Eigenvalues and eigenfunctions; Erbium; Image edge detection; Laplace equations; Optimization; Big data; Community detection; Large-scale network; Statistical analysis (ID#: 15-7081)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225331&isnumber=7190491
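
An off-the-shelf baseline for the modularity-optimization step is available in networkx; the sketch below runs greedy agglomerative (CNM) community detection on a classic benchmark graph. The paper's modified algorithm and its node-attribute modeling go well beyond this baseline.

```python
# Greedy modularity-based community detection on the karate club benchmark.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                     # classic benchmark network
communities = greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
print("modularity:", nx.community.modularity(G, communities))
```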

 

Abbas, W.; Bhatia, S.; Koutsoukos, X., “Guarding Networks Through Heterogeneous Mobile Guards,” in American Control Conference (ACC), 2015, vol., no., pp. 3428–3433, 1–3 July 2015. doi:10.1109/ACC.2015.7171861
Abstract: In this article, the issue of guarding multi-agent systems against a sequence of intruder attacks through mobile heterogeneous guards (guards with different ranges) is discussed. The article makes use of graph theoretic abstractions of such systems in which agents are the nodes of a graph and edges represent interconnections between agents. Guards represent specialized mobile agents on specific nodes with capabilities to successfully detect and respond to an attack within their guarding range. Using this abstraction, the article addresses the problem in the context of eternal security problem in graphs. Eternal security refers to securing all the nodes in a graph against an infinite sequence of intruder attacks by a certain minimum number of guards. This paper makes use of heterogeneous guards and addresses all the components of the eternal security problem including the number of guards, their deployment and movement strategies. In the proposed solution, a graph is decomposed into clusters and a guard with appropriate range is then assigned to each cluster. These guards ensure that all nodes within their corresponding cluster are being protected at all times, thereby achieving the eternal security in the graph.
Keywords: graph theory; mobile agents; multi-agent systems; network theory (graphs); eternal security problem; graph theoretic abstractions; guarding multiagent systems; guarding networks; heterogeneous mobile guards; intruder attacks; mobile agents; mobile heterogeneous guards; movement strategies; Clustering algorithms; Image edge detection; Mobile communication; Partitioning algorithms; Radiation detectors; Robot sensing systems; Security (ID#: 15-7082)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7171861&isnumber=7170700

 

Huiling Zhang; Alim, M.A.; Thai, M.T.; Nguyen, H.T., “Monitor Placement to Timely Detect Misinformation in Online Social Networks,” in Communications (ICC), 2015 IEEE International Conference on, vol., no., pp. 1152–1157, 8–12 June 2015. doi:10.1109/ICC.2015.7248478
Abstract: Online Social Networks (OSNs), such as Facebook, Twitter and Google+, facilitate interactions and communications among people. However, they also provide fertile ground for misinformation to spread rapidly, which may lead to detrimental consequences. It is thus imperative to detect misinformation propagating through OSNs by placing monitors. In this paper, we first study a general misinformation detection problem and show its equivalence to the influence maximization problem. Moreover, in order to prevent misinformation from reaching specific users, we define a τ-Monitor Placement problem for cases where partial knowledge of misinformation sources is available. We prove the #P complexity of this problem and propose an efficient algorithm to solve it. Extensive experiments on real-world data show the effectiveness of our proposed algorithm with respect to minimizing the number of monitors.
Keywords: computational complexity; optimisation; security of data; social networking (online); #P complexity; τ-monitor placement problem; Facebook; Google+; OSN; Twitter; general misinformation detection problem; maximization problem; online social networks; Complexity theory; Image edge detection; Integrated circuit modeling; Monitoring; Polynomials; Twitter (ID#: 15-7083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248478&isnumber=7248285

 

Kulchandani, J.S.; Dangarwala, K.J., “Moving Object Detection: Review of Recent Research Trends,” in Pervasive Computing (ICPC), 2015 International Conference on, vol., no., pp. 1–5, 8–10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7087138
Abstract: Moving object detection is the task of identifying the physical movement of an object in a given region or area. Over the last few years, moving object detection has received much attention due to its wide range of applications, such as video surveillance, human motion analysis, robot navigation, event detection, anomaly detection, video conferencing, traffic analysis and security. In addition, moving object detection is a consequential and active research topic in computer vision and video processing, since it forms a critical step for many complex processes such as video object classification and video tracking. Consequently, identification of the actual shape of a moving object from a given sequence of video frames becomes pertinent. However, the task of detecting the actual shape of an object in motion becomes difficult due to various challenges, such as dynamic scene changes, illumination variations, the presence of shadow, camouflage and the bootstrapping problem. To reduce the effect of these problems, researchers have proposed a number of new approaches. This paper provides a brief classification of the classical approaches for moving object detection. Further, the paper reviews recent research trends in detecting moving objects with a single stationary camera, along with a discussion of the key points and limitations of each approach.
Keywords: cameras; computer vision; image motion analysis; image sequences; object detection; video signal processing; computer vision; moving object detection; single stationary camera; video frame sequence; video object classification; video processing; video tracking activity; Computer vision; Heuristic algorithms; Image edge detection; Lighting; Object detection; Object recognition; Optical filters; Human Motion Analysis; Moving Object Detection; Object Classification; Tracking; Video Surveillance (ID#: 15-7084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087138&isnumber=7086957
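
One classical approach the survey covers — background subtraction — is readily sketched with OpenCV's MOG2 model; the video path and the minimum-area threshold below are illustrative assumptions.

```python
# Minimal moving-object detection via MOG2 background subtraction.
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground/shadow mask
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]
    print(f"{len(moving)} moving object(s) in this frame")
cap.release()
```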

 

Akay, F.; Akbulut, A.; Telatar, Z., “3-D Video Reconstruction from 2-D Video,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 2434–2437, 16–19 May 2015. doi:10.1109/SIU.2015.7130374
Abstract: Today, the imaging devices in widespread use generally record images in two dimensions (2-D). Three-dimensional (3-D) recordings can also be obtained by using two or more cameras, at the cost of heavy computational complexity for image registration. Moreover, with advances in technology, in areas such as medical imaging and security imaging systems, where images are recorded in 2-D, 3-D images are required for expert evaluation. In this study, 3-D image sequences are constructed from 2-D image recordings. Edge and color information of the 2-D sequence are used to obtain a depth map for the 3-D reconstruction process. The results obtained show the robustness of the presented method.
Keywords: computational complexity; image colour analysis; image reconstruction; image registration; image sequences; 2 dimension image recording; 2D image recording; 2D image sequence; 2D video reconstruction; 3 dimensional image recording; 3D image recording; 3D image sequence; 3D video reconstruction; camera; color information; computational complexity; edge information; image registration; medical imaging; security imaging system; DH-HEMTs; Image edge detection; Imaging; Information filters; Rendering (computer graphics); Three-dimensional displays; 2-D; 3-D video; 3-D video conversion; bilateral filter; depth image based rendering; linear depth map (ID#: 15-7085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130374&isnumber=7129794

 

Takai, M., “Measurement of Complex Quantity of Monitoring Area and Detection of High Active Part of Invading Object in Complex Background for Surveillance Camera System,” in Mechatronics (ICM), 2015 IEEE International Conference on, vol., no., pp. 522–528, 6–8 March 2015. doi:10.1109/ICMECH.2015.7084031
Abstract: A surveillance camera system is one type of security system. A general surveillance camera system consists of surveillance cameras installed in the monitoring area and a video monitor in the administration room, with each device connected by a communication line. Through networking, an observer can always watch the monitoring area from a distant place, but must therefore continuously watch the large amount of image data the surveillance cameras record, which demands much time and labor when confirmation is done by the naked eye alone. This study measures how complex an image is with a numerical value from 0.0 to 1.0, called the Complex Quantity. The proposed method detects complex background regions in the image photographed by the surveillance camera and displays them enlarged, so that an observer can easily find a suspicious individual or unidentified object hiding in the complex background. In addition, we measure the liveliness of the movement of an object invading the complex background with an Active Quantity, also a numerical value from 0.0 to 1.0, so that an observer can efficiently watch the movement of subjects in the monitoring area. The proposed surveillance camera system detects highly active parts, consisting of high Active Quantity, in the movement of objects in the complex background, making it possible for the observer to watch the quick movements of objects hiding in the complex background of the monitoring area.
Keywords: cameras; image sensors; numerical analysis; optical variables measurement; photography; video surveillance; active quantity; communication line; complex quantity measurement; image data surveillance camera system; object movement measurement; photography; security system; video monitoring; Area measurement; Cameras; Image edge detection; Observers; Proposals; Surveillance (ID#: 15-7086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7084031&isnumber=7083935
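The paper above does not publish its Complex Quantity formula, so the following is only an illustrative sketch under an explicit assumption: edge density per block is used as a stand-in complexity measure, normalized to the [0.0, 1.0] range the abstract describes. The synthetic frame is a placeholder input.

```python
import cv2
import numpy as np

def complex_quantity(gray_block: np.ndarray) -> float:
    """Complexity score in [0.0, 1.0]: the fraction of edge pixels in the block."""
    edges = cv2.Canny(gray_block, 100, 200)
    return float(np.count_nonzero(edges)) / edges.size

# Synthetic frame (assumption): mostly flat, with one textured 64x64 region.
rng = np.random.default_rng(0)
frame = np.zeros((256, 256), dtype=np.uint8)
frame[96:160, 96:160] = rng.integers(0, 256, (64, 64), dtype=np.uint8)

scores = [(y, x, complex_quantity(frame[y:y + 64, x:x + 64]))
          for y in range(0, 256, 64) for x in range(0, 256, 64)]
y, x, s = max(scores, key=lambda t: t[2])
print(f"most complex block at ({y}, {x}) with score {s:.3f}")
```

The highest-scoring blocks would be the candidates for the enlarged display the abstract mentions; any real deployment would tune the edge thresholds and block size.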

 

Gaofeng, Zhan; Yong, Jiang, “Research of Information System Based on Intranet Security Algorithm,” in Measuring Technology and Mechatronics Automation (ICMTMA), 2015 Seventh International Conference on, vol., no., pp. 827–830, 13–14 June 2015. doi:10.1109/ICMTMA.2015.203
Abstract: With the rapid development of science and technology, rapid progress in computer technology, and the global spread of Intranet information systems, protecting information security on these networks has become an important research problem. Building on previous research and aiming at specific problems, this paper proposes a morphological filtering algorithm based on the superposition of multiple structuring elements.
Keywords: Filtering algorithms; Frequency-domain analysis; Image edge detection; Information systems; Noise; Security; Servers; Client/Server; Filtering algorithm; Intranet; Morphology (ID#: 15-7087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7263697&isnumber=7263490
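The paper does not specify its structuring elements, so the sketch below is an assumption-laden illustration of the general idea: open-close filters built from several element shapes are superposed (here, averaged with equal weights) to suppress noise. The shapes, sizes, weights, and test image are all placeholders.

```python
import cv2
import numpy as np

def multi_element_filter(img: np.ndarray) -> np.ndarray:
    shapes = (cv2.MORPH_RECT, cv2.MORPH_ELLIPSE, cv2.MORPH_CROSS)
    results = []
    for shape in shapes:
        k = cv2.getStructuringElement(shape, (5, 5))
        opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, k)      # removes bright specks
        closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, k)  # fills dark pits
        results.append(closed.astype(np.float32))
    return np.mean(results, axis=0).astype(np.uint8)           # superpose branches

# Synthetic test (assumption): a bright square corrupted by salt-and-pepper noise.
rng = np.random.default_rng(2)
img = np.zeros((128, 128), dtype=np.uint8)
img[32:96, 32:96] = 200
noise = rng.random((128, 128))
img[noise < 0.05] = 255
img[noise > 0.95] = 0
print(np.abs(multi_element_filter(img).astype(int) - img.astype(int)).mean())
```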

 

Mammeri, Abdelhamid; Lu, Guangqian; Boukerche, Azzedine, “Design of Lane Keeping Assist System for Autonomous Vehicles,” in New Technologies, Mobility and Security (NTMS), 2015 7th International Conference on, vol., no., pp. 1–5, 27–29 July 2015. doi:10.1109/NTMS.2015.7266483
Abstract: Lane detection and tracking and departure warning systems are important components of Intelligent Transportation Systems. They have attracted particular interest from industry and academia, and many architectures and commercial systems have been proposed in the literature. In this paper, we discuss the design of such systems across the following stages: pre-processing, detection, and tracking. For each stage, a short description of its working principle is given, together with its advantages and shortcomings. Our paper may help in designing new systems that overcome and improve upon the shortcomings of current architectures.
Keywords: Feature extraction; Image color analysis; Image edge detection; Kalman filters; Roads; Transforms; Vehicles (ID#: 15-7088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266483&isnumber=7266450
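As a concrete companion to the pre-processing/detection stages this survey describes, here is a minimal sketch of the classic pipeline: grayscale conversion, edge detection, region-of-interest masking, then a Hough transform for candidate lane lines. The thresholds, ROI polygon, and synthetic frame are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Synthetic road frame (assumption): two white lane markings on dark asphalt.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.line(frame, (60, 240), (150, 120), (255, 255, 255), 3)
cv2.line(frame, (260, 240), (170, 120), (255, 255, 255), 3)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# Region-of-interest mask: keep the lower trapezoid where lanes normally appear.
h, w = edges.shape
roi = np.zeros_like(edges)
poly = np.array([[(0, h), (w, h), (w // 2 + 40, h // 2), (w // 2 - 40, h // 2)]],
                dtype=np.int32)
cv2.fillPoly(roi, poly, 255)
edges &= roi

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=40, maxLineGap=60)
print(0 if lines is None else len(lines), "candidate lane segments")
```

A tracking stage (e.g., the Kalman filtering named in the keywords) would then smooth these per-frame detections over time.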

 

Eberle, W.; Holder, L., “Streaming Data Analytics for Anomalies in Graphs,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225259
Abstract: Protecting our nation’s infrastructure and securing sensitive information are critical challenges for both industry and government. Due to the complex and diverse nature of the environments which can expose attacks or terrorism activity, one must not only be able to deal with attacks that are dynamic, or constantly changing, but also take into account the structural aspects of the networks and the relationships among communication events. However, analyzing a massive, ever-growing graph will quickly overwhelm currently-available computing resources. One potential solution to the issue of handling very large graphs is to handle data as a “stream”. In this work, we present an approach to processing a stream of changes to the graph in order to efficiently identify any changes in the normative patterns and any changes in the anomalies to these normative patterns without processing all previous data. The overall framework of our approach is called PLADS for Pattern Learning and Anomaly Detection in Streams. We evaluate our approach on a dataset that represents people movements and actions, as well as a scalable, streaming data generator that represents social network behaviors, in order to assess the ability to efficiently detect known anomalies.
Keywords: data analysis; graph theory; learning (artificial intelligence); security of data; PLADS; data handling; graph-based anomaly detection; information security; normative pattern; pattern learning and anomaly detection in streams; streaming data analytics; Accuracy; Browsers; Generators; Image edge detection; Partitioning algorithms; Social network services; Topology; Graph-based; anomaly detection; knowledge discovery; streaming data (ID#: 15-7089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225259&isnumber=7190491

 

Nguyen, T.D.; Arch-int, S.; Arch-int, N., “A Novel Secure Channel Selection Rule for Spatial Image Steganography,” in Computer Science and Software Engineering (JCSSE), 2015 12th International Joint Conference on, vol., no., pp. 230–235, 22–24 July 2015. doi:10.1109/JCSSE.2015.7219801
Abstract: This paper introduces a novel secure channel selection rule for spatial image steganography. In this scheme, two factors are considered to identify a pixel that causes less distortion to the cover image when modified in data hiding. The first is the average difference between the considered pixel and its neighbors; the value of the considered pixel itself is the second factor. Experimental results on 10,000 natural images indicate the higher visual quality and security of our new channel selection rule for spatial image steganography when compared with previous approaches.
Keywords: image processing; steganography; data hiding; natural images; secure channel selection rule; spatial image steganography; visual quality; Degradation; Distortion; Image edge detection; Noise; Payloads; Security; Visualization; channel selection rule; secure; spatial image (ID#: 15-7090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219801&isnumber=7219755
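The abstract names the two selection factors but not the rule that combines them, so the sketch below only illustrates the first factor: scoring each pixel by its average absolute difference from its four direct neighbors. The threshold and the random stand-in cover are assumptions, not the authors' published rule.

```python
import numpy as np

def selection_scores(img: np.ndarray) -> np.ndarray:
    """Average absolute difference between each interior pixel and its 4 neighbors."""
    img = img.astype(np.int32)
    return (np.abs(img[1:-1, 1:-1] - img[:-2, 1:-1]) +
            np.abs(img[1:-1, 1:-1] - img[2:, 1:-1]) +
            np.abs(img[1:-1, 1:-1] - img[1:-1, :-2]) +
            np.abs(img[1:-1, 1:-1] - img[1:-1, 2:])) / 4.0

cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in cover
scores = selection_scores(cover)
# Heuristic (assumed): prefer textured pixels, where embedding changes are
# least perceptible; the paper additionally weighs the pixel value itself.
candidates = np.argwhere(scores > 8.0)
print(len(candidates), "candidate embedding positions")
```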

 

Chieh-Hsun Huang; Han-Sheng Hsu; Hong-Ren Wang; Ting-Yi Yang; Cheng-Ming Huang, “Design and Management of an Intelligent Parking Lot System by Multiple Camera Platforms,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, vol., no., pp. 354–359, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116062
Abstract: Parking in the city has become a major problem in modern days. An efficient way to manage the parking lot and to improve the safety of the driver is very important. Traditional parking lots commonly use security cameras, ultrasonic sensors, or infrared sensors for management. However, these systems are not only expensive but also time consuming. Therefore, we present a hybrid intelligent parking system, which is able to inform drivers where the empty parking spaces are, help drivers easily record where they parked, provide remote monitoring, and offer a parking-spot guidance service when drivers forget where they parked. In addition, the security guard of the parking lot is provided with functions for remote monitoring, detection and monitoring of parking at individual sites, and fire detection. This system also employs a micro aerial vehicle (MAV) for mobile monitoring in indoor environments instead of monitoring by fixed cameras. Throughout this paper, we demonstrate our system from both the driver's view and the security guard's view.
Keywords: autonomous aerial vehicles; computerised monitoring; image sensors; microrobots; robot vision; traffic engineering computing; MAV; empty parking space; hybrid intelligent parking system; infrared ray sensors; intelligent parking lot system; microaerial vehicle; multiple camera platforms; parking spot leading service; remote monitoring; security camera; security guard; ultrasonic sensors; Cameras; Fires; Image edge detection; Licenses; Monitoring; Sensors; Vehicles; Arduino; Micro aerial vehicle; NFC tag; Parking lot; QR Code; Raspberry Pi (ID#: 15-7091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116062&isnumber=7115994

 

Ahmed Biyabani, A.; Al-Salman, S.A.; Alkhalaf, K.S., “Embedded Real-Time Bilingual ALPR,” in Communications, Signal Processing, and their Applications (ICCSPA), 2015 International Conference on, vol., no., pp. 1–6, 17–19 Feb. 2015. doi:10.1109/ICCSPA.2015.7081311
Abstract: Automatic License Plate Recognition (ALPR) systems are useful for various surveillance and security purposes. While ALPR is a mature technology, customization for individual countries' plates is ongoing. The utility of such systems may be increased if they provide real-time information and if they can be deployed easily using low-cost embedded hardware. In this paper we describe an FPGA-based real-time ALPR system which may be embedded and which is geared towards plates with either Roman or Arabic characters. We believe it is the first system with this combination of features. We report a modest 84% success rate for our OCR algorithm in field tests and a corresponding hardware response time of 1.3 ms, reflecting a 200x improvement over software-only techniques.
Keywords: embedded systems; field programmable gate arrays; natural language processing; optical character recognition; traffic engineering computing; ALPR systems; Arabic characters; FPGA-based real-time ALPR system; OCR algorithm; Roman characters; automatic license plate recognition systems; embedded real-time bilingual ALPR; hardware response time; individual countries plates customization; low-cost embedded hardware; security purposes; surveillance purposes; Field programmable gate arrays; Hardware; Image edge detection; Image segmentation; Licenses; Optical character recognition software; Real-time systems; Embedded; Image-processing; Real-time (ID#: 15-7092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081311&isnumber=7081264

 

Sathisha, N.; Babu, K.S.; Raja, K.B.; Venugopal, K.R., “Mantissa Replacement Steganography Using LWT,” in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, vol., no., pp. 1–7, 26–28 March 2015. doi:10.1109/ICSCN.2015.7219862
Abstract: Steganography is an authenticated technique for maintaining the secrecy of embedded data. The novel concept of replacing the mantissa part of the cover image with a generated mantissa part of the payload is proposed for higher capacity and security. The Lifting Wavelet Transform (LWT) is applied to both the cover image and the payload, of sizes a * a and 3a * 2a respectively. The mantissa values of the vertical band (CV), horizontal band (CH), and diagonal band (CD) of the cover image are removed, leaving integer values. The approximation band of the payload is considered, and the odd-column and even-column element values are divided by 300 and 30000 respectively, so that only the mantissa part of the payload is generated. The modified odd and even column vector pairs are added element by element to form one resultant vector. The column vector elements of the cover image and the resultant column vector elements of the payload are added to generate stego object column vector elements corresponding to the vertical, horizontal, and diagonal bands. The inverse LWT is applied to generate the stego image.
Keywords: approximation theory; image processing; steganography; vectors; wavelet transforms; cover image; embedded data secrecy; even column vector pairs; inverse LWT; lifting wavelet transform; mantissa replacement; mantissa values; modified odd column vector pairs; payload approximation band; resultant column vector elements; steganography; stego image; stego object column vector elements; Barium; Cryptography; Image coding; Image edge detection; Image segmentation; Lead; Noise; LWT; Mantissa; Payload; Steganography; Stego image (ID#: 15-7093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219862&isnumber=7219823
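The numeric trick in this abstract can be seen in isolation, without the LWT pipeline. The sketch below is a small arithmetic illustration only, under stated assumptions: np.modf splits values into fractional ("mantissa") and integer parts, and the 300 / 30000 divisors shrink payload columns into the fractional range before addition. The sample values are invented, and negative coefficients would need explicit sign handling, omitted here.

```python
import numpy as np

cover_band = np.array([12.734, 3.291, 25.508])    # stand-in CV-band coefficients
_, cover_clean = np.modf(cover_band)              # mantissa removed, integers kept

payload_odd = np.array([141.0, 87.0, 203.0]) / 300.0    # odd payload columns
payload_even = np.array([52.0, 240.0, 19.0]) / 30000.0  # even payload columns
resultant = payload_odd + payload_even                   # element-wise sum

stego_band = cover_clean + resultant              # stego coefficients

# Receiver side: the embedded mantissa is simply the fractional part again.
recovered, _ = np.modf(stego_band)
print(np.round(recovered, 5))
```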

 

Kaur, R.; Kaur, J., “Cloud Computing Security Issues and Its Solution: A Review,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1198–1200, 11–13 March 2015. doi: (not provided)
Abstract: Cloud computing is a way to increase capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. As information exchange plays an important role in today's life, information security becomes more important. This paper focuses on the security issues of cloud computing and on techniques to overcome the data privacy issue. Before analyzing the security issues, a definition of cloud computing and a brief discussion to aid understanding of it are presented; the paper then explores the cloud security issues and the problems faced by cloud service providers. Finally, it defines the pixel key pattern and image steganography techniques that will be used to overcome the problem of data security.
Keywords: cloud computing; data privacy; image coding; security of data; steganography; cloud computing security; cloud service provider; data privacy; image steganography technique; information exchange; information security; pixel key pattern; Cloud computing; Clouds; Computational modeling; Computers; Image edge detection; Security; Servers; Cloud Computing; Cloud Security; Image steganography; Pixel key pattern; Security issues (ID#: 15-7094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100438&isnumber=7100186

 

Sariga, N.P.; Sajitha, A.S., “Steganographic Data Hiding in Automatic Converted 3D Image from 2D and 2D to 3D Video Conversion,” in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp. 1–6, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193097
Abstract: Data hiding can be implemented in 3-D images using steganography, achieving more efficiency and security than usual 2-D image data hiding. Despite significant growth during the last couple of years, the availability of 3-D content is still dwarfed by that of its 2-D counterpart. In order to close this gap, a number of 2-D-to-3-D image and video conversion methods have been proposed. The results demonstrate that repositories of 3-D content can be used for effective 2-D-to-3-D image conversion. Steganography is one of the best and most challenging methods for securing data: a branch of secret communication used to hide secret data inside digital media. An extension to video is immediate by enforcing temporal continuity of the computed depth maps.
Keywords: image sequences; steganography; 2D image data hiding; 3D video conversion; automatic converted 3D image; computed depth maps; optical flow; steganographic data hiding; temporal continuity; Cameras; Communication systems; Estimation; Image edge detection; Rendering (computer graphics) Smoothing methods; Three-dimensional displays; 3D images; cross-bilateral filtering; image conversion; nearest neighbor classification; stereoscopic images (ID#: 15-7095)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193097&isnumber=7192777

 

Mishra, R.; Bhanodiya, P., “A Review on Steganography and Cryptography,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 119–122, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164679
Abstract: Today’s information world is a digital world, and data transmission over an insecure channel is becoming a major issue of concern. At the same time, intruders are spreading over the internet and are very active, so security measures need to be taken to protect secret data from theft. In order to keep data secret, various techniques have been implemented to encrypt and decrypt it; cryptography and steganography are the two most prominent among them. But these two techniques alone cannot work as efficiently as they do together. Steganography is a Greek word made up of two parts: stegano, meaning hidden, and graphy, meaning writing; steganography thus means hidden writing. Steganography is a way to hide the fact that data communication is taking place. Cryptography converts the secret message into a form that is not human readable, but it has the limitation that the encrypted message is visible to everyone; over the internet, intruders may then apply trial-and-error methods to recover the secret message. Steganography overcomes this limitation of cryptography by hiding the fact that any transmission is taking place at all: the secret message is concealed inside carrier media such as text, image, video, and audio. These two techniques are different and each has its own significance. In this paper we discuss various cryptographic and steganographic techniques used to keep messages secret.
Keywords: cryptography; data communication; steganography; Internet; cryptographic techniques; data communication; data transmission; digital world; hidden writing; secret data decryption; secret data encryption; secret data protection; security measures; steganographic techniques; Computers; Encryption; Image color analysis; Image edge detection; Media; Cipher Text; Cryptanalysis; Cryptograph; LSB; Steganalysis; Steganography (ID#: 15-7096)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164679&isnumber=7164643
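Since the review's keywords single out LSB steganography, a minimal sketch of that technique may help: each secret bit replaces the least significant bit of one cover byte. This is purely illustrative; practical stego tools add key-driven pixel selection and typically encrypt the payload first, which is exactly the cryptography-plus-steganography combination the paper argues for.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write one secret bit into the LSB of each of the first len(bits) bytes."""
    stego = cover.copy().ravel()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b   # clear the LSB, then set it
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n: int) -> list[int]:
    return [int(v & 1) for v in stego.ravel()[:n]]

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # stand-in cover image
secret = [1, 0, 1, 1, 0, 0, 1, 0]
assert lsb_extract(lsb_embed(cover, secret), len(secret)) == secret
```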

 

Kulkarni, N.; Mane, V., “Source Camera Identification Using GLCM,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 1242–1246, 12–13 June 2015. doi:10.1109/IADCC.2015.7154900
Abstract: Digital images are becoming a main focus of work for researchers. Digital image forensics (DIF) is at the forefront of security techniques, aiming to restore the lost trust in digital imagery by uncovering digital counterfeiting techniques. Source camera identification provides different ways to identify the characteristics of the digital devices used. These techniques were studied in a literature survey, from which the sensor-imperfection-based technique was chosen. Sensor pattern noise (SPN) carries an abundance of information across a wide frequency range, which allows for reliable identification in the presence of many imaging sensors. Our proposed system uses a novel technique for extracting sensor noise from the database images, after which a feature extraction method is applied. The model used for extracting sensor noise applies gradient-based and Laplacian operators; a hybrid system combining the best results from the two operators yields a third image containing the edges and the noise present. The edges are removed by applying a threshold, leaving the noise present in the image. This noise image is then provided to the feature extraction module, which uses a Gray Level Co-occurrence Matrix (GLCM) to extract various features based on properties such as homogeneity, contrast, correlation, and entropy. The extracted features are used for performance evaluation based on various parameters; the accuracy parameter gives the matching rate for the entire dataset. The sensor pattern noise captured in the GLCM features is matched against the test set to find the exact match. The hybrid system used for SPN extraction, together with GLCM feature extraction, yields better results.
Keywords: Laplace equations; digital forensics; edge detection; feature extraction; image sensors; matrix algebra; DIF; GLCM feature extraction; Laplacian operators; SPN extraction; digital counterfeiting techniques; digital image forensics; edge removal; gradient based operators; gray level co-occurrence matrix; hybrid system; imaging sensors; performance evaluation; security techniques; sensor noise extraction; sensor pattern noise; source camera identification; Cameras; Conferences; Digital images; Feature extraction; Forensics; Noise; Object recognition; Digital Evidence; Image Forensics; Sensor Pattern noise (ID#: 15-7097)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154900&isnumber=7154658
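A sketch of the GLCM feature stage described above, using scikit-image. The gradient/Laplacian SPN front end is omitted; a noise-residual image is assumed as input, and the distance/angle choices are assumptions rather than the paper's settings. Entropy is computed by hand because it is not one of graycoprops' classic properties.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(noise_residual: np.ndarray) -> dict:
    """Homogeneity, contrast, correlation, and entropy of a GLCM."""
    glcm = graycomatrix(noise_residual, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    p = glcm[glcm > 0]                       # nonzero joint probabilities
    return {
        "homogeneity": float(graycoprops(glcm, "homogeneity").mean()),
        "contrast": float(graycoprops(glcm, "contrast").mean()),
        "correlation": float(graycoprops(glcm, "correlation").mean()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

residual = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in SPN
print(glcm_features(residual))
```

In the paper's pipeline, such feature vectors from test images would be matched against per-camera reference features to identify the source device.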

 

Khare, S., “Finger Gesture and Pattern Recognition Based Device Security System,” in Signal Processing and Communication (ICSC), 2015 International Conference on, vol., no., pp. 443–447, 16–18 March 2015. doi:10.1109/ICSPCom.2015.7150694
Abstract: This research aims at introducing a hand gesture recognition based system to recognize real-time gestures in natural environments and compare patterns with an image database, matching image pairs to trigger unlocking of mobile devices. Security of mobile devices has been a major concern, and methods like draw-pattern unlock, passcodes, and facial and voice recognition technologies have already been employed to a fair extent, but these are quite susceptible to hacks and to high rates of recognition failure (especially voice and facial recognition). A next step in HMI would be a fingertip-tracking-based unlocking mechanism, employing minimalistic hardware such as a webcam or smartphone front camera. Image acquisition through MATLAB is followed by conversion to grayscale and application of an optimal filter for edge detection, used under different conditions for optimal results in recognizing fingertips to a precise level of accuracy. The pattern is traced at 60 fps for tracking and tracing, and is cross-referenced with the training image by deploying neural networks for improved recognition efficiency. Data is registered in real time, and the device is unlocked when the SSIM takes a value above a predefined threshold. The aforementioned mechanism is employed in applications via a user-friendly GUI frontend, with computational modelling through MATLAB for the backend.
Keywords: gesture recognition; image motion analysis; mobile handsets; neural nets; security; GUI frontend; MATLAB; SSIM; computational modelling; device security system; draw pattern unlock; edge detection; facial recognition technologies; failure error recognition; finger gesture; fingertip tracking; hand gesture recognition; image acquisition; image database; image pair matching; mobile devices security systems; mobile devices unlocking; neural networks deployment; optimal filter; passcodes; pattern recognition; smartphone front camera; unlocking mechanism; voice recognition technologies; webcam; Biological neural networks; MATLAB; Pattern matching; Security; Training; Computer vision; HMI (Human Machine Interface); MATLAB; ORB; SIFT (Scale Invariant Feature Transform); SSIM (Structural Similarity Index Measure); SURF (Speed Up Robust Features) (ID#: 15-7098)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150694&isnumber=7150604
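The final unlock decision the abstract describes boils down to an SSIM comparison against a threshold. Below is a minimal Python sketch of that step (the paper itself uses MATLAB); the 0.85 threshold, image sizes, and noise model are illustrative assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

UNLOCK_THRESHOLD = 0.85  # assumed value; would be tuned per deployment

def should_unlock(traced: np.ndarray, enrolled: np.ndarray) -> bool:
    # Both images grayscale, same shape, values normalized to [0, 1].
    score = ssim(traced, enrolled, data_range=1.0)
    return score >= UNLOCK_THRESHOLD

enrolled = np.random.rand(120, 120)                              # training image
traced = np.clip(enrolled + 0.02 * np.random.randn(120, 120), 0, 1)  # near match
print(should_unlock(traced, enrolled))
```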

 

Kashyap, A.; Suresh, B.; Agrawal, M.; Gupta, H.; Joshi, S.D., “Detection of Splicing Forgery Using Wavelet Decomposition,” in Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 843–848, 15–16 May 2015. doi:10.1109/CCAA.2015.7148492
Abstract: Authenticity of an image is an important issue in many social areas such as journalism, forensic investigation, criminal investigation, and security services, and digital images can be easily manipulated with the help of sophisticated photo-editing software and high-resolution digital cameras. There is therefore a requirement for new, powerful, and efficient algorithms for forgery detection in tampered images. Splicing is a common forgery in which two images are combined into a single composite and the duplicated region is retouched by performing operations like edge blurring to give the appearance of an authentic image. In this paper, we propose a new computationally efficient algorithm for splicing (copy-create) forgery detection in an image using a block matching method. The proposed method achieves an accuracy of 87.75% within a small processing time by modeling the threshold.
Keywords: image processing; wavelet transforms; authentic image; block matching method; high-resolution digital cameras; photo editing software; splicing forgery detection; wavelet decomposition; Accuracy; Automation; Digital images; Forgery; Splicing; Wavelet transforms; BMP; JPEG; PNG; TIFF; Wavelet decomposition; block matching (ID#: 15-7099)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148492&isnumber=7148334

 

Namayanja, J.M.; Janeja, V.P., “Change Detection in Evolving Computer Networks: Changes in Densification and Diameter over Time,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 185–187, 27–29 May 2015. doi:10.1109/ISI.2015.7165969
Abstract: Large-scale attacks on computer networks usually cause abrupt changes in network traffic, which makes change detection an integral part of attack detection, especially in large communication networks. Such changes in traffic can be defined in terms of the sudden absence of key nodes or edges, or the addition of new nodes and edges to the network. These are micro-level changes, which in turn may lead to changes at the macro level of the network, such as changes in the density and diameter of the network that describe connectivity between nodes as well as the flow of information within the network. Our assumption is that changes in the behavior of such key nodes translate into changes in the overall structure of the network, since these key nodes represent the major share of communication in the network. In this study, we focus on detecting changes at the network level, where we sample the network and select key subgraphs associated with central nodes. Our objective is to study selected network-level properties because they provide a bigger picture of underlying events in the network.
Keywords: computer network security; network theory (graphs); attack detection; change detection; communication networks; computer networks; network density; network diameter; network edges; network nodes; network traffic; network-level properties; Decision support systems; Frequency modulation; Central Nodes; Change Point Detection; Network Evolution; Network Properties; Subgraphs (ID#: 15-7100)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165969&isnumber=7165923
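As a small illustration of the macro-level signal this paper tracks, the sketch below computes graph density and diameter across traffic snapshots and flags large relative jumps. The 20% jump threshold, the random snapshot graphs, and the flagging rule are assumptions for demonstration, not the authors' method.

```python
import networkx as nx

def macro_metrics(g: nx.Graph) -> tuple[float, int]:
    """Density of the whole graph; diameter of its largest connected component."""
    core = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.density(g), nx.diameter(core)

def flag_change(prev: float, curr: float, rel: float = 0.2) -> bool:
    return prev > 0 and abs(curr - prev) / prev > rel

# Stand-in traffic snapshots: the third has roughly double the edge count.
snapshots = [nx.gnm_random_graph(100, m, seed=s)
             for s, m in enumerate([300, 310, 600])]
metrics = [macro_metrics(g) for g in snapshots]
for (d0, _), (d1, _) in zip(metrics, metrics[1:]):
    print("density change flagged:", flag_change(d0, d1))
```

Diameter can be tracked with the same flagging rule; it is returned above but left unused to keep the demonstration short.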

 

Lomotey, R.K.; Deters, R.; Kaletsch, K., “Mobile Hosting and Sensor Eco-System for Radiation Detection,” in Systems Conference (SysCon), 2015 9th Annual IEEE International, vol., no., pp. 740–746, 13–16 April 2015. doi:10.1109/SYSCON.2015.7116839
Abstract: Gamma rays are electromagnetic radiation of very high frequency that can be biologically hazardous. Many workers in the mining, manufacturing, security, and other industries find themselves in such hazardous environments, and governments are trying to contain this issue. While gamma radiation detection sensors have traditionally been manufactured to be carried along by users, they are not a good access point for actual dosage readings. With the recent advancement in mobile technology, this paper proposes a mobile hosting architecture to enable mobile-to-sensor communication following an edge-based technique. The sensor can detect the radiation and send readings to the user's smartphone, and all other nearby (authorized) mobile devices receive a notification to alert the people in the hazard zone. In this paper, the notification dissemination is developed based on a sequential flow pattern. The proposed work is tested, and the results show that detected radiation readings are sent in soft real time to the mobile devices.
Keywords: electromagnetic waves; gamma-ray detection; smart phones; dosage readings; electromagnetic radiation; gamma radiation detection sensors; gamma ray; mobile hosting architecture; mobile-to-sensor communication; notification dissemination; sensor eco-system; sequential flow pattern; smartphone device; Bluetooth; Computer architecture; Mobile communication; Mobile handsets; Real-time systems; Software; Synchronization; Cloud computing; Gamma Radiation; Latency; Mobile hosting; Optimal Path; Process flow (ID#: 15-7101)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116839&isnumber=7116715

 

Sengupta, A.; Bhadauria, S., “User Power-Delay Budget Driven PSO Based Design Space Exploration of Optimal K-Cycle Transient Fault Secured Datapath During High Level Synthesis,” in Quality Electronic Design (ISQED), 2015 16th International Symposium on, vol., no., pp. 289–292, 2–4 March 2015. doi:10.1109/ISQED.2015.7085441
Abstract: Fault security indicates the ability to provide error detection or to fetch the correct output. Generation (design space exploration, DSE) of an optimal fault-secured datapath structure based on a user power-delay budget during high level synthesis (HLS), in the context of k-cycle transient faults, is considered an intractable problem. This is because, for every candidate design solution produced during exploration, a feasible k-cycle fault-secured datapath may not exist that satisfies the conflicting user constraints/budgets. Secondly, inserting a random cut to optimize the delay overhead associated with fault security in most cases may not yield optimal solutions in the context of user constraints/budgets. These problems have not been addressed in the literature so far. This paper solves them by presenting: (a) a novel algorithm for fault-secured particle swarm optimization driven DSE, (b) a novel technique for handling k-cycle transient faults, and (c) novel schemes for selecting appropriate edges for inserting cuts in the scheduled Control Data Flow Graph (CDFG), minimizing the delay overhead associated with fault security. The proposed approach yielded optimal results that minimize hybrid cost and satisfy user constraints. Further, comparison with recent approaches indicated a significant reduction of the final cost.
Keywords: circuit optimisation; data flow graphs; error detection; high level synthesis; integrated circuit design; particle swarm optimisation; synchronisation; CDFG; DSE; HLS; PSO; context k-cycle transient fault; control data flow graph; delay overhead; design space exploration; error detection; fault security; high level synthesis; optimal k-cycle transient fault secured datapath; particle swarm optimization; user constraints; user power-delay budget; Algorithm design and analysis; Circuit faults; Delays; Hardware; Security; Space exploration; Transient analysis; Fault; delay; k-cycle; power; transient (ID#: 15-7102)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085441&isnumber=7085355

 

Agarwal, S.; Sureka, A., “Using Common-Sense Knowledge-Base for Detecting Word Obfuscation in Adversarial Communication,” in Communication Systems and Networks (COMSNETS), 2015 7th International Conference on, vol., no., pp. 1–6, 6–10 Jan. 2015. doi:10.1109/COMSNETS.2015.7098738
Abstract: Word obfuscation or substitution means replacing one word with another word in a sentence to conceal the textual content or communication. Word obfuscation is used in adversarial communication by terrorists or criminals for conveying their messages without getting red-flagged by security and intelligence agencies intercepting or scanning messages (such as emails and telephone conversations). ConceptNet is a freely available semantic network represented as a directed graph consisting of nodes as concepts and edges as assertions of common sense about these concepts. We present a solution approach exploiting the vast amount of semantic knowledge in ConceptNet for addressing the technically challenging problem of word substitution in adversarial communication. We frame the given problem as a textual reasoning and context inference task and utilize ConceptNet’s natural-language-processing toolkit for determining word substitution. We use ConceptNet to compute the conceptual similarity between any two given terms and define a Mean Average Conceptual Similarity (MACS) metric to identify out-of-context terms. The test-bed to evaluate our proposed approach consists of the Enron email dataset (over 600,000 emails generated by 158 employees of Enron Corporation) and the Brown corpus (totaling about a million words drawn from a wide variety of sources). We implement word substitution techniques used by previous researchers to generate a test dataset. We conduct a series of experiments consisting of word substitution methods used in the past to evaluate our approach. Experimental results reveal that the proposed approach is effective.
Keywords: directed graphs; electronic mail; inference mechanisms; national security; natural language processing; semantic networks; terrorism; text analysis; word processing; Brown corpus; ConceptNet; ConceptNet natural-language-processing tool-kit; Enron Corporation; Enron email dataset; MACS metric; adversarial communication; common-sense knowledge-base; context inference task; criminals; directed graph; intelligence agencies; mean average conceptual similarity metric; message scanning; security agencies; semantic knowledge; semantic network; terrorist; textual communication; textual content; textual reasoning; word obfuscation detection; word substitution techniques; Bismuth; Postal services; ConceptNet; Intelligence and Security Informatics; Natural Language Processing; Semantic Similarity; Word Substitution (ID#: 15-7103)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098738&isnumber=7098633

 

Milling, C.; Caramanis, C.; Mannor, S.; Shakkottai, S., “Local Detection of Infections in Heterogeneous Networks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1517–1525, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218530
Abstract: In many networks the operator is faced with nodes that report a potentially important phenomenon such as failures, illnesses, and viruses. The operator is faced with the question: Is it spreading over the network, or simply occurring at random? We seek to answer this question from highly noisy and incomplete data, where at a single point in time we are given a possibly very noisy subset of the infected population (including false positives and negatives). While previous work has focused on uniform spreading rates for the infection, heterogeneous graphs with unequal edge weights are more faithful models of reality. Critically, the network structure may not be fully known and modeling epidemic spread on unknown graphs relies on non-homogeneous edge (spreading) weights. Such heterogeneous graphs pose considerable challenges, requiring both algorithmic and analytical development. We develop an algorithm that can distinguish between a spreading phenomenon and a randomly occurring phenomenon while using only local information and not knowing the complete network topology and the weights. Further, we show that this algorithm can succeed even in the presence of noise, false positives and unknown graph edges.
Keywords: computer network security; computer viruses; graph theory; critical network structure; false negatives; false positives; graph edge; heterogeneous graph; heterogeneous networks; infected population; infection local detection; local information; Analytical models; Approximation algorithms; Computers; Conferences; Electronic mail; Noise measurement; Probabilistic logic (ID#: 15-7104)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218530&isnumber=7218353 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

 

Lightweight Ciphers 2015

 

 
SoS Logo

Lightweight Ciphers

2015


Lightweight cryptography is a major research direction. The release of SIMON in June 2013 generated significant interest and a number of studies evaluating and comparing it to other cipher algorithms. To the Science of Security community, lightweight ciphers can support resilience, especially in cyber-physical systems constrained with power and “weight” budgets. The works cited here were presented in 2015.



Haohao Liao; Heys, H.M., “An Integrated Hardware Platform for Four Different Lightweight Block Ciphers,” in Electrical and Computer Engineering (CCECE), 2015 IEEE 28th Canadian Conference on, vol., no., pp. 701–705, 3–6 May 2015. doi:10.1109/CCECE.2015.7129360
Abstract: In this paper, we investigate the hardware implementation of four different, but similar, lightweight block ciphers: PRESENT, Piccolo, PRINTcipher, and LED. The purpose of this paper is to present a common platform which integrates these four ciphers into one system using a shared datapath, with the objective of reducing the area below the total sum of the area consumed by the individual ciphers. The structure and implementation of the platform are clearly stated in the paper, with the target technology being the Altera Cyclone IV FPGA.
Keywords: cryptography; field programmable gate arrays; Altera Cyclone IV FPGA; LED; PRESENT; PRINTcipher; Piccolo; area reduction; integrated hardware platform; lightweight block ciphers; shared-datapath system; total area sum; Ciphers; Embedded systems; Encryption; Hardware; Light emitting diodes; Registers; Throughput (ID#: 15-6536)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129360&isnumber=7129089

 

Beaulieu, R.; Treatman-Clark, S.; Shors, D.; Weeks, B.; Smith, J.; Wingers, L., “The SIMON and SPECK Lightweight Block Ciphers,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2747946
Abstract: The Simon and Speck families of block ciphers were designed specifically to offer security on constrained devices, where simplicity of design is crucial. However, the intended use cases are diverse and demand flexibility in implementation. Simplicity, security, and flexibility are ever-present yet conflicting goals in cryptographic design. This paper outlines how these goals were balanced in the design of Simon and Speck.
Keywords: cryptography; SIMON; SPECK; cryptographic design; lightweight block ciphers; security; Algorithm design and analysis; Ciphers; Hardware; Schedules; Software; Internet of Things;  block cipher; lightweight (ID#: 15-6537)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167361&isnumber=7167177
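The design simplicity this paper credits to Simon and Speck is easy to show concretely: the Speck round is a three-operation add-rotate-xor (ARX) construction, and the key schedule reuses the same round function with a counter as the round key. Below is a didactic Python sketch with the Speck128/128 parameters (64-bit words, rotation amounts 8 and 3, 32 rounds); it is a re-implementation from the public specification, and word-ordering/endianness conventions should be verified against the designers' published test vectors before any real use.

```python
MASK = (1 << 64) - 1  # 64-bit words for Speck128/128

def ror(x: int, r: int) -> int:
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x: int, r: int) -> int:
    return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x: int, y: int, k: int) -> tuple[int, int]:
    x = (((ror(x, 8) + y) & MASK) ^ k)   # add-rotate-xor
    y = rol(y, 3) ^ x
    return x, y

def speck128_128_encrypt(x: int, y: int, key_k: int, key_l: int) -> tuple[int, int]:
    k, l = key_k, key_l
    for i in range(32):
        x, y = speck_round(x, y, k)
        l, k = speck_round(l, k, i)      # key schedule reuses the round function
    return x, y

ct = speck128_128_encrypt(0x0123456789abcdef, 0xfedcba9876543210,
                          key_k=0x0706050403020100, key_l=0x0f0e0d0c0b0a0908)
print(hex(ct[0]), hex(ct[1]))
```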

 

Nemati, A.; Feizi, S.; Ahmadi, A.; Haghiri, S.; Ahmadi, M.; Alirezaee, S., “An Efficient Hardware Implementation of FeW Lightweight Block Cipher,” in Artificial Intelligence and Signal Processing (AISP), 2015 International Symposium on, vol., no., pp. 273–278, 3–5 March 2015. doi:10.1109/AISP.2015.7123493
Abstract: Radio-frequency identification (RFID) tags are becoming a part of our everyday life, with a wide range of applications such as product labeling and supply chain management. These smart and tiny devices have extremely constrained resources in terms of area, computational ability, memory, and power. At the same time, security and privacy issues remain an important problem; thus, with the large deployment of low-resource devices, the need to provide security and privacy among such devices has arisen. Resource-efficient cryptographic primitives are essential for realizing both security and efficiency in constrained environments and embedded systems like RFID tags and sensor nodes. Among those primitives, lightweight block ciphers play a significant role as building blocks for security systems. In 2014, Manoj Kumar et al. proposed a new lightweight block cipher named FeW, which is suitable for extremely constrained environments and embedded systems. In this paper, we simulate and synthesize the FeW block cipher. Implementation results of the FeW cryptography algorithm on an FPGA are presented. The design targets are area efficiency and cost.
Keywords: cryptography; field programmable gate arrays; radiofrequency identification; FPGA; FeW cryptography algorithm; FeW lightweight block cipher; RFID; hardware implementation; radio-frequency identification; resource-efficient cryptographic primitives; security system; sensor node; Algorithm design and analysis; Ciphers; Encryption; Hardware; Schedules; Block Cipher; FeW Algorithm; Feistel structure; Field Programmable Gate Array (FPGA); High Level Synthesis (ID#: 15-6538)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7123493&isnumber=7123478

 

Dofe, J.; Reed, C.; Ning Zhang; Qiaoyan Yu, “Fault-Tolerant Methods for a New Lightweight Cipher SIMON,” in Quality Electronic Design (ISQED), 2015 16th International Symposium on, vol., no., pp. 460–464, 2–4 March 2015. doi:10.1109/ISQED.2015.7085469
Abstract: We propose three fault-tolerant methods for a new lightweight block cipher SIMON, which has the potential to be a hardware-efficient security primitive for embedded systems. As a single fault in the encryption (decryption) process can completely change the ciphertext (received plaintext), it is critical to ensure the reliability of encryption and decryption modules. We explore double-modular redundancy (DMR), reverse function, and a parity check code combined with a non-linear compensation function (EPC) to detect faults in SIMON. The proposed fault-tolerant methods were implemented in iterative and pipelined SIMON architectures. The corresponding hardware cost, power consumption, and fault detection failure rate were assessed. Simulation results show that EPC-SIMON consumes less area and power than DMR-SIMON and Reversed-SIMON but yields a higher fault detection failure rate as the number of concurrent faults increases. Moreover, our experiments show that the impact of fault location on the fault-detection failure rates for different methods is not consistent.
Keywords: cryptography; embedded systems; fault diagnosis; fault tolerant computing; parity check codes; DMR-SIMON; EPC-SIMON; ciphertext; concurrent faults; decryption modules; decryption process; double-modular redundancy; encryption modules; encryption process; fault detection failure rate; fault location; fault-tolerant methods; hardware cost; hardware-efficient security primitive; iterative SIMON architectures; lightweight block cipher; nonlinear compensation function; parity check code; pipelined SIMON architectures; plaintext; power consumption; reverse function; reversed-SIMON; Ciphers; Circuit faults; Fault detection; Fault tolerance; Fault tolerant systems; Parity check codes; Schedules; SIMON; block cipher; fault tolerance; reliability (ID#: 15-6539)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085469&isnumber=7085355
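Of the three countermeasures this paper evaluates, double-modular redundancy (DMR) is the simplest to sketch: compute the ciphertext twice and suppress the output on any mismatch. The Python below shows only that comparison logic; `simon_encrypt` is a placeholder, not the real cipher, and a hardware DMR design would use two parallel datapaths rather than sequential recomputation.

```python
def simon_encrypt(block: int, key: int) -> int:
    """Placeholder primitive standing in for a real SIMON implementation."""
    return block ^ key  # NOT the real cipher; illustration only

def dmr_encrypt(block: int, key: int) -> int:
    c1 = simon_encrypt(block, key)
    c2 = simon_encrypt(block, key)   # redundant computation
    if c1 != c2:                     # any disagreement signals a transient fault
        raise RuntimeError("fault detected: suppressing faulty ciphertext")
    return c1

print(hex(dmr_encrypt(0x1234, 0xBEEF)))
```

As the abstract notes, the trade-off against reverse-function and parity-based (EPC) checks is one of area and power versus detection failure rate under multiple concurrent faults.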

 

Nozaki, Y.; Asahi, K.; Yoshikawa, M., “Countermeasure of TWINE Against Power Analysis Attack,” in Future of Electron Devices, Kansai (IMFEDK), 2015 IEEE International Meeting for, vol., no., pp. 68–69, 4–5 June 2015. doi:10.1109/IMFEDK.2015.7158553
Abstract: Lightweight block ciphers, which can be implemented in a small area, have attracted much attention. This study proposes a new countermeasure for TWINE, one of the most popular lightweight block ciphers. The proposed method masks the correlation between power consumption and confidential information by adding random numbers to the intermediate data of the encryption. Experiments demonstrate the effective tamper resistance of the proposed method.
Keywords: cryptography; random number generation; TWINE; confidential information; encryption; lightweight block cipher; power analysis attack; power consumption; random number; tamper-resistance; Ciphers; Correlation; Encryption; Hamming distance; Power demand; Registers; power analysis of semiconductor; security of semiconductor; tamper resistance (ID#: 15-6540)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158553&isnumber=7158481

 

Yoshikawa, M.; Sugioka, K.; Nozaki, Y.; Asahi, K., “Secure in-Vehicle Systems Against Trojan Attacks,” in Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, vol., no., pp. 29–33, June 28 2015 – July 1 2015. doi:10.1109/ICIS.2015.7166565
Abstract: Recently, driving support technologies, such as inter-vehicle and road-to-vehicle communication technologies, have come into practical use. However, it has been pointed out that when a vehicle is connected to an external network, the safety of the vehicle is threatened. As a result, the security of vehicle control systems, which greatly affects vehicle safety, has become more important than ever, and ensuring the security of in-vehicle systems is now a priority comparable to ensuring conventional safety. The present study proposes a controller area network (CAN) communications method that uses a lightweight cipher to realize secure in-vehicle systems. The present study also constructs an evaluation system using a field-programmable gate array (FPGA) board and a radio-controlled car, which is used to verify the proposed method.
Keywords: controller area networks; cryptographic protocols; field programmable gate arrays; invasive software; vehicular ad hoc networks; CAN communication method; FPGA; Trojan attack; controller area network communication method; field-programmable gate array; inter-vehicle communication technology; lightweight cipher; radio-controlled car; road-to-vehicle communication technology; vehicle control system security; Authentication; Ciphers; Encryption; Radiation detectors; Safety; Vehicles; Authentication; CAN communication; Embedded system; Lightweight block cipher; Security (ID#: 15-6541)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166565&isnumber=7166553

 

Jinyong Shan; Lei Hu; Siwei Sun, “Security of LBlock-S Against Related-Key Differential Attack,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 1278–1283, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124790
Abstract: LBlock-s is a 32-round lightweight block cipher and a simplified version of the LBlock block cipher, proposed to improve implementation efficiency without weakening security. It uses 10 identical 4-bit S-boxes instead of the 10 different 4-bit S-boxes in LBlock, to reduce the cost of hardware and software implementation. Although better bounds on the security of LBlock-s against related-key differential attack have been given, the designers did not have sufficient evidence to show that the cipher is secure enough to resist this attack. In this paper, we apply the mixed-integer linear programming methods proposed by Sun et al. to show that the cipher is secure against standard related-key differential attack and that there is no related-key differential characteristic with probability higher than 2^-64 for the 32-round LBlock-s. In particular, more concrete results on reduced versions of the cipher are obtained: the minimum numbers of active S-boxes for 10-round and 11-round related-key differential characteristics are 10 and 11, respectively.
Keywords: cryptography; integer programming; linear programming; 32-round lightweight block cipher; LBlock block cipher; LBlock-s; active S-boxes; mixed-integer linear programming methods; probability; related-key differential attack; Ciphers; Resists; Schedules; Standards; Sun; LBlock block cipher; LBlock-s block cipher; mixed-integer linear programming; related-key differential attack (ID#: 15-6542)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124790&isnumber=7124722

 

Sasdrich, P.; Moradi, A.; Mischke, O.; Guneysu, T., “Achieving Side-Channel Protection with Dynamic Logic Reconfiguration on Modern FPGAs,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, vol., no., pp. 130–136, 5–7 May 2015. doi:10.1109/HST.2015.7140251
Abstract: Reconfigurability is a unique feature of modern FPGA devices, allowing hardware circuits to be loaded on demand. This also implies that a completely different set of circuits might operate at the exact same location of the FPGA in different time slots, making it difficult for an external observer or attacker to predict what will happen at what time. In this work we present and evaluate a novel hardware implementation of the lightweight cipher PRESENT with built-in side-channel countermeasures based on dynamic logic reconfiguration. In our design we make use of Configurable Look-Up Tables (CFGLUT) integrated in modern Xilinx FPGAs to nearly instantaneously change the hardware internals of our cipher implementation for improved resistance against side-channel attacks. We provide evidence from practical experiments based on a Spartan-6 platform that, even with 10 million recorded power traces, we were unable to detect a first-order leakage using the state-of-the-art leakage assessment.
Keywords: cryptography; field programmable gate arrays; table lookup; CFGLUT; PRESENT; built-in side-channel countermeasures; configurable look-up tables; dynamic logic reconfiguration; lightweight cipher; modern Xilinx FPGA; side-channel protection; Ciphers; Encryption; Field programmable gate arrays; Hardware; Registers; Table lookup (ID#: 15-6543)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140251&isnumber=7140225

 

Mohd, B.J.; Hayajneh, T.; Abu Khalaf, Z., “Optimization and Modeling of FPGA Implementation of the Katan Cipher,” in Information and Communication Systems (ICICS), 2015 6th International Conference on, vol., no., pp. 68–72, 7–9 April 2015. doi:10.1109/IACS.2015.7103204
Abstract: Lightweight ciphers (e.g., Katan) are crucial for secure communication on resource-constrained devices. The Katan cipher algorithm was proposed for low-resource devices. This paper examines implementing the Katan cipher on a field programmable gate array (FPGA) platform and discusses several implementations with an 80-bit key size and a 64-bit block size. Energy and power dissipation are examined to select the optimum design. Models for resources and power are derived, with average errors of 12% and 17%, respectively.
Keywords: circuit optimisation; cryptography; field programmable gate arrays; telecommunication security; FPGA implementation; Katan cipher algorithm; energy dissipation; field programmable gate array; power dissipation; resource-constrained devices; secure communication; Algorithm design and analysis; Ciphers; Encryption; Field programmable gate arrays; Hardware; Timing; Cipher; Encryption; Energy; FPGA; Power; Security (ID#: 15-6544)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7103204&isnumber=7103173

 

Forte, A.G.; Ferrari, G., “Towards Distributing Block Ciphers Computations,” in Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE, vol., no., pp. 41–46, 9–12 March 2015. doi:10.1109/WCNCW.2015.7122526
Abstract: Providing data confidentiality for energy-constrained devices has proven to be a hard problem. Over the years many efficient implementations of well-known block ciphers, as well as a large number of new “lightweight” block ciphers, have been introduced. We propose to distribute block cipher encryption and decryption operations among a subset of “trusted” nodes. Any block cipher, lightweight or not, can benefit from this. In particular, we analyze the energy consumption of AES128 in Cipher Block Chaining (CBC) mode and measure the energy savings that a distributed computation of AES128-CBC can give. We show that, by leveraging this distributed computation, a node can save up to 73% of the energy normally spent in encryption and up to 81% of that spent in decryption. This has relevant implications in Internet of Things scenarios.
Keywords: Internet of Things; cryptography; distributed processing; AES128-CBC; Internet-of-things scenarios; block cipher decryption operation distribution; block cipher encryption operation distribution; block ciphers computations; cipher block chaining mode; data confidentiality; energy consumption; energy saving measurement; energy-constrained devices; trusted nodes; Batteries; Ciphers; Conferences; Encryption; Energy measurement; Internet of things (ID#: 15-6545)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122526&isnumber=7122513
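The structure that makes CBC amenable to the distribution this paper proposes is the per-block chaining C_i = E_K(P_i XOR C_{i-1}): once C_{i-1} is available, block i can be handed to another node holding the key. The sketch below makes that structure visible by building CBC by hand on top of the AES-ECB primitive from the `cryptography` package; key/IV handling and padding are simplified for clarity, and this is an illustration of the mode, not the paper's distribution protocol.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes128_cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % 16 == 0, "pad the plaintext first"
    # ECB is used here only as the raw single-block AES primitive E_K.
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    prev, out = iv, b""
    for i in range(0, len(plaintext), 16):
        block = bytes(a ^ b for a, b in zip(plaintext[i:i + 16], prev))
        prev = ecb.update(block)          # C_i = E_K(P_i XOR C_{i-1})
        out += prev
    return out

key, iv = os.urandom(16), os.urandom(16)
ct = aes128_cbc_encrypt(key, iv, b"sixteen byte msg" * 2)
print(ct.hex())
```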

 

Ming, Wong Ming; Ling, Dennis Wong Mou, “LFSR Based S-Box for Lightweight Cryptographic Implementation,” in Consumer Electronics - Taiwan (ICCE-TW), 2015 IEEE International Conference on, vol., no., pp. 498–499, 6–8 June 2015. doi:10.1109/ICCE-TW.2015.7217019
Abstract: This paper presents a hardware implementation of the Linear Feedback Shift Register (LFSR) based Substitution Box (S-Box) on the ALTERA FPGA platform. Unlike conventional designs, the proposed architecture has low hardware cost in terms of total area and power consumption. Hence, the new LFSR-based S-box can be deployed in block ciphers to achieve lightweight cryptographic implementations.
Keywords: Ciphers; Clocks; Computer architecture; Galois fields; Hardware; Power demand (ID#: 15-6546)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7217019&isnumber=7216784
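One plausible way an LFSR can realize an S-box, in the spirit of the design above, is to let successive states of a maximal-length 4-bit LFSR (taps on bits 3 and 0) enumerate all 15 nonzero values, giving a permutation once 0 is fixed to 0. The paper's exact mapping is not reproduced here; this sketch only illustrates the construction style and why it is hardware-cheap (a shift and one XOR per step).

```python
def lfsr_step(state: int) -> int:
    """One clock of a 4-bit Fibonacci LFSR with feedback taps on bits 3 and 0."""
    fb = ((state >> 3) ^ state) & 1
    return ((state << 1) | fb) & 0xF

def build_sbox(seed: int = 0x1) -> list[int]:
    sbox, state = [0] * 16, seed     # 0 has no LFSR preimage, so map 0 -> 0
    for i in range(1, 16):
        sbox[i] = state              # input i maps to the i-th LFSR state
        state = lfsr_step(state)
    return sbox

sbox = build_sbox()
assert sorted(sbox) == list(range(16))   # the table is a permutation of 0..15
print(sbox)
```

Note that a permutation produced this way still needs its cryptographic properties (nonlinearity, differential uniformity) checked before use in a cipher.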

 

Bhattacharyya, A.; Bose, T.; Bandyopadhyay, S.; Ukil, A.; Pal, A., “LESS: Lightweight Establishment of Secure Session: A Cross-Layer Approach Using CoAP and DTLS-PSK Channel Encryption,” in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, vol., no., pp. 682–687, 24–27 March 2015. doi:10.1109/WAINA.2015.52
Abstract: A secure yet lightweight protocol for communication over the Internet is a pertinent problem for constrained environments in the context of Internet of Things (IoT) / Machine to Machine (M2M) applications. This paper extends the initial approaches published in [1], [2] and presents a novel cross-layer lightweight implementation to establish a secure channel. It distributes the responsibility of communication over the secure channel between the application and transport layers: secure session establishment is performed using a payload-embedded challenge-response scheme over the Constrained Application Protocol (CoAP) [3], while the record encryption mechanism of Datagram Transport Layer Security (DTLS) [4] with Pre-Shared Key (PSK) [5] is used for encrypted exchange of application layer data. The secure session credentials derived at the application layer are used for encrypted exchange over the transport layer. The solution is designed in such a way that it can easily be integrated with an existing system deploying CoAP over DTLS-PSK. The proposed method is robust under different security attacks, such as replay, DoS, and chosen-ciphertext attacks. The improved performance of the proposed solution is established with comparative results and analysis.
Keywords: Internet; cryptography; CoAP; DTLS; DTLS-PSK channel encryption; DoS; Internet; LESS; M2M applications; PSK; cipher text; constrained application protocol; constrained environments; cross layer approach; datagram transport layer security; encrypted exchange; layer data application; lightweight establishment of secure session; lightweight protocol; machine to machine applications; pre-shared key; record encryption mechanism; replay attack; secure channel; security attacks; transport layer; transport layers; Bandwidth; Encryption; Internet; Payloads; Servers; IoT; M2M; lightweight; pre-shared-key; secure session (ID#: 15-6547)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096256&isnumber=7096097

 

Shahverdi, A.; Taha, M.; Eisenbarth, T., “Silent Simon: A Threshold Implementation Under 100 Slices,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 5–7 May 2015. doi:10.1109/HST.2015.7140227
Abstract: Lightweight cryptography aims at achieving security comparable to conventional cryptography at a much lower cost. Simon is a lightweight alternative to AES, as it shares the same cryptographic parameters, but has been shown to be extremely area-efficient on FPGAs. However, in the embedded setting, protection against side-channel analysis is often required. In this work we present a threshold implementation of Simon. The proposed core splits the information between three shares and achieves provable security against first-order side-channel attacks. The core can be implemented in less than 100 slices of a low-cost FPGA, making it the world's smallest threshold implementation of a block cipher. Hence, the proposed core perfectly suits highly constrained embedded systems, including sensor nodes and RFIDs. Security of the proposed core is validated by provable arguments as well as practical DPA attacks and tests for leakage quantification.
Keywords: cryptography; field programmable gate arrays; FPGA; RFID; Silent Simon; block cipher; conventional cryptography; cryptographic parameters; leakage quantification; lightweight cryptography; side channel analysis; side channel attacks; threshold implementation; Ciphers; Clocks; Field programmable gate arrays; Hardware; Registers; Table lookup (ID#: 15-6548)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140227&isnumber=7140225
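The share-splitting step at the heart of a threshold implementation is easy to illustrate. The sketch below shows generic first-order Boolean masking into three shares; it is a conceptual illustration, not the Simon round function itself:

    import os

    def split3(word: int, bits: int = 16) -> tuple:
        """Split a word into three shares whose XOR equals the word."""
        s1 = int.from_bytes(os.urandom(bits // 8), "big")
        s2 = int.from_bytes(os.urandom(bits // 8), "big")
        return s1, s2, word ^ s1 ^ s2

    def recombine(s1: int, s2: int, s3: int) -> int:
        return s1 ^ s2 ^ s3

    word = 0xBEEF
    shares = split3(word)
    assert recombine(*shares) == word   # no single share reveals the word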

 

Dhanuka, S.K.; Sachdeva, P.; Shaikh, S.S., “Cryptographic Algorithm Optimisation,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 1111–1116, 12–13 June 2015. doi:10.1109/IADCC.2015.7154876
Abstract: Lightweight cryptographic algorithms are intended for implementation in resource-constrained devices, such as smart cards, wireless sensors, and Radio Frequency Identification (RFID) tags, which aim at providing adequate security. Hummingbird is a recent encryption algorithm based on ultra-lightweight cryptography, and its design is a blend of block cipher and stream cipher. This paper presents a design space exploration of the algorithm and its optimisation using different architectural approaches. It provides a comparative analysis of different models of the substitution box, cipher, and encryption blocks.
Keywords: cryptography; Hummingbird encryption algorithm; RFID tags; architectural approach; block cipher; cipher block; cryptographic algorithm optimisation; design space exploration; encryption block; radiofrequency identification tags; resource constrained devices; smart cards; stream cipher; substitution box model; ultralightweight cryptographic algorithm; wireless sensors; Algorithm design and analysis; Ciphers; Encryption; Optimization; Resource management; Table lookup; Boolean Function Representation (BFR); Ciphers; Cryptography; Hummingbird; Look Up Table (LUT); Resource Constrained Devices (RCD); Resource Sharing (ID#: 15-6549)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154876&isnumber=7154658

 

Afianian, A.; Nobakht, S.S.; Ghaznavi-Ghoushchi, M.B., “Energy-Efficient Secure Distributed Storage in Mobile Cloud Computing,” in Electrical Engineering (ICEE), 2015 23rd Iranian Conference on, vol., no., pp. 740–745, 10–14 May 2015. doi:10.1109/IranianCEE.2015.7146311
Abstract: In mobile cloud computing, one of the main concerns is preserving the confidentiality and integrity of outsourced data. Trust relations have always been a key factor in designing a security architecture for outsourcing data in mobile cloud computing, offloading some of the computational overhead, such as pre-encryption, to a trusted third party; in practical environments this is not a wise idea, especially for security-sensitive data. In this paper, we present a method that improves Rabin's IDA for secure dispersal of information by employing a lightweight, energy-efficient pre-processing phase before the IDA is applied. In the pre-processing phase, we produce a cipher key using a selfie picture taken by the user. We further employ a key-management method such that, if one file is missing, there is no way to reconstruct the original, while relieving the user of key-management complexities. Owing to its low energy consumption, the method can be used confidently in mobile cloud computing.
Keywords: cloud computing; cryptography; data integrity; mobile computing; storage management; trusted computing; Rabin IDA; cipher key; energy-efficient secure distributed storage; information dispersal algorithm; key management; mobile cloud computing; outsourced data confidentiality; outsourced data integrity; preencryptions; selfie picture; third trusted party; trust relations; Conferences; Decision support systems; Electrical engineering; Indexes; Distributed storage; Energy-efficient; Mobile cloud; Secure storage; Stream Cipher (ID#: 15-6550)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146311&isnumber=7146167
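As a rough illustration of the described pre-processing phase, the sketch below derives a cipher key from selfie image bytes and splits it so that all fragments are needed to rebuild it. The SHA-256 derivation and the all-or-nothing XOR split are assumptions for illustration; Rabin's IDA itself uses erasure coding over a finite field:

    import hashlib, os

    def key_from_selfie(image_bytes: bytes) -> bytes:
        # derive a 256-bit cipher key from the user's selfie picture
        return hashlib.sha256(image_bytes).digest()

    def xor_split(data: bytes, n: int = 3) -> list:
        """Split data into n fragments; missing any one makes it unrecoverable."""
        frags = [os.urandom(len(data)) for _ in range(n - 1)]
        last = list(data)
        for f in frags:
            last = [b ^ c for b, c in zip(last, f)]
        frags.append(bytes(last))
        return frags

    def xor_join(frags: list) -> bytes:
        out = list(frags[0])
        for f in frags[1:]:
            out = [b ^ c for b, c in zip(out, f)]
        return bytes(out)

    key = key_from_selfie(b"...raw selfie image bytes...")   # placeholder input
    pieces = xor_split(key)
    assert xor_join(pieces) == key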

 

Ege, B.; Papagiannopoulos, K.; Batina, L.; Picek, S., “Improving DPA Resistance of S-Boxes: How Far Can We Go?,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 2013–2016, 24–27 May 2015. doi:10.1109/ISCAS.2015.7169071
Abstract: Side-channel analysis (SCA) is an important issue for numerous embedded cryptographic devices that carry out secure transactions on a daily basis. Consequently, it is of utmost importance to deploy efficient countermeasures. In this context, we investigate the intrinsic side-channel resistance of lightweight cryptographic S-boxes. We propose improved versions of S-boxes that offer increased power analysis resistance, whilst remaining secure against linear and differential cryptanalyses. To evaluate the side-channel resistance, we work under the Confusion Coefficient model [1] and employ heuristic techniques to produce those improved S-boxes. We evaluate the proposed components in software (AVR microprocessors) and hardware (SASEBO FPGA). Our conclusions show that the model and our approach are heavily platform-dependent and that different principles hold for software and hardware implementations.
Keywords: cryptography; DPA resistance; SCA; confusion coefficient model; differential cryptanalyses; lightweight cryptographic S-boxes; linear cryptanalyses; numerous embedded cryptographic devices; power analysis resistance; side-channel analysis; side-channel resistance; Ciphers; Hardware; Phantoms; Resistance; Software (ID#: 15-6551)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169071&isnumber=7168553

 

Alshahrani, A.M.; Walker, S., “Tesseract: A 4D Symmetric Key Container for Multimedia Security,” in Digital Information, Networking, and Wireless Communications (DINWC), 2015 Third International Conference on, vol., no., pp. 139–142, 3–5 Feb. 2015. doi:10.1109/DINWC.2015.7054232
Abstract: Real time applications (RTA) are application programs that function within a specific timescale. Voice over IP (VoIP) and video conferences are examples of RTA. Transmitting such data via open networks is risky; however, any security must be lightweight and cause no delay. Recently, many algorithms have been created, but very few are viable with RTA. In cryptography, ‘key space’ refers to the number of possible keys that can be used to generate the key from the key container. In this paper, a tesseract is applied for the first time with RTA. The tesseract functions with the suggested method to create a key that can generate 768!-bits; however, only three key lengths are selected from the tesseract key’s home: 128, 256 and 512 bits. Three different rounds are utilized to create the key. This algorithm is considered to be fast and strong because the rounds and XOR-ing operation are lightweight and cheap.
Keywords: Internet telephony; computer network security; cryptography; multimedia communication; real-time systems; telecommunication security; 4D symmetric key container; RTA; VoIP; Voice over IP; XOR-ing operation; multimedia security; open networks; real time applications; video conferences; Ciphers; Computer science; Containers; Encryption; Multimedia communication; cube; encryption key; shared secret key; tesseract (ID#: 15-6552)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054232&isnumber=7054206

 

Harikrishnan, T.; Babu, C., “Cryptanalysis of Hummingbird Algorithm with Improved Security and Throughput,” in VLSI Systems, Architecture, Technology and Applications (VLSI-SATA), 2015 International Conference on, vol., no., pp. 1–6, 8–10 Jan. 2015. doi:10.1109/VLSI-SATA.2015.7050460
Abstract: Hummingbird is a lightweight authenticated cryptographic encryption algorithm. This lightweight cryptographic algorithm is suitable for resource-constrained devices like RFID tags, smart cards and wireless sensors. The key issue in designing such a cryptographic algorithm is to deal with the trade-off among security, cost and performance, and to find an optimal cost-performance ratio. This paper is an attempt to find an efficient hardware implementation of the Hummingbird cryptographic algorithm that achieves improved security and throughput by adding hash functions. In this paper, we have implemented an encryption and decryption core on a Spartan 3E and have compared the results with existing lightweight cryptographic algorithms. The experimental results show that this algorithm has higher security and throughput, with improved area, compared to the existing algorithms.
Keywords: cryptography; telecommunication security; Hash functions; RFID tags; Spartan 3E; decryption core; hummingbird algorithm cryptanalysis; hummingbird cryptographic algorithm; lightweight authenticated cryptographic encryption algorithm; optimal cost-performance ratio; resource constrained devices; security; smart cards; wireless sensors; Authentication; Ciphers; Logic gates; Protocols; Radiofrequency identification; FPGA Implementation; Lightweight Cryptography; Mutual authentication protocol; Security analysis (ID#: 15-6553)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050460&isnumber=7050449

 

Mishra, M.K.; Sengar, S.S.; Mukhopadhyay, S., “Algorithm for Secure Visual Communication,” in Signal Processing and Integrated Networks (SPIN), 2015 2nd International Conference on, vol., no., pp. 831–836, 19–20 Feb. 2015. doi:10.1109/SPIN.2015.7095310
Abstract: The enormous size of video data of natural scenes and objects is a practical threat to storage and transmission. Efficient handling of video data essentially requires compression for economic utilization of storage space, access time and the available network bandwidth of the public channel. In addition, protection of important video is of utmost importance, so as to save it from malicious intervention, attack or alteration by unauthorized users. Therefore, security and privacy have become an important issue. Over the past few years, many researchers have concentrated on developing efficient video encryption for secure video transmission, and a large number of multimedia encryption schemes have been proposed in the literature, such as selective encryption, complete encryption and entropy-coding-based encryption. All three kinds of algorithms retain some shortcomings. In this paper, we propose a lightweight selective encryption algorithm for video conferencing that is based on an efficient XOR operation and symmetric hierarchical encryption, successfully overcoming the weaknesses of complete encryption while offering better security. The proposed algorithm guarantees security, speed and error tolerance without increasing the video size.
Keywords: cryptography; data privacy; multimedia communication; telecommunication network reliability; telecommunication security; teleconferencing; video communication; XOR operation; economic utilization; entropy coding; lightweight selective encryption algorithm; malicious intervention; multimedia encryption scheme; network bandwidth availability; privacy; public channel; secure visual communication; symmetric hierarchical encryption; video conference; video data handling; video data storage space; video data transmission; Ciphers; Encryption; Signal processing algorithms; Streaming media; Video coding; GDH.3; H.264/AVC; RC4; video encryption (ID#: 15-6554)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095310&isnumber=7095159
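A toy version of the selective-XOR idea: only every eighth byte of a frame is XORed with a keystream, leaving the rest untouched. The hash-counter keystream and the byte-selection rule are illustrative stand-ins for the RC4/GDH.3 machinery the paper builds on:

    import hashlib

    def keystream(key: bytes, n: int) -> bytes:
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def selective_encrypt(frame: bytes, key: bytes, step: int = 8) -> bytes:
        # XOR every step-th byte with the keystream; other bytes pass through
        ks = keystream(key, len(frame))
        return bytes(b ^ ks[i] if i % step == 0 else b
                     for i, b in enumerate(frame))

    frame = bytes(range(64))
    enc = selective_encrypt(frame, b"session-key")
    assert selective_encrypt(enc, b"session-key") == frame   # XOR is its own inverse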

 

Pandey, V.K.; Gupta, G.; Gupta, S., “Secure Protocol for Wireless Sensor Network,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1080–1083, 11–13 March 2015. doi: (not provided)
Abstract: Wireless sensor networks are an emerging technology due to their wide range of applications. This scheme proposes a new secure protocol with better security; event-driven cluster formation brings energy efficiency by avoiding the unnecessary formation of clusters when no event is present in the network. The proposed scheme adopts a level-based secure hierarchical approach to maintain energy efficiency. It incorporates lightweight security mechanisms, such as nested hash-based message authentication codes (HMAC), the Elliptic-Curve Diffie-Hellman (ECDH) key exchange scheme and the Blowfish symmetric cipher.
Keywords: cryptographic protocols; message authentication; public key cryptography; telecommunication power management; wireless sensor networks; Blowfish symmetric cipher; ECDH key exchange scheme; HMAC; elliptic-curve Diffie-Hellman key exchange scheme; energy efficiency; even-driven cluster formation; level based secure hierarchical approach; lightweight security mechanisms; nested hash based message authentication codes; secure protocol; wireless sensor network; Base stations; Energy efficiency; Monitoring; Protocols; Security; Wireless communication; Wireless sensor networks; Data Aggregation; Energy Efficiency; Network Lifetime; Wireless Sensor Network (ID#: 15-6555)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100414&isnumber=7100186 
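Two of the mechanisms named above are compact enough to sketch: an ECDH key agreement (using the third-party cryptography package, assumed available) and a nested HMAC. The curve choice and message framing are illustrative assumptions:

    import hmac, hashlib
    from cryptography.hazmat.primitives.asymmetric import ec

    # ECDH key agreement between a sensor node and the base station
    node_priv = ec.generate_private_key(ec.SECP256R1())
    base_priv = ec.generate_private_key(ec.SECP256R1())
    shared = node_priv.exchange(ec.ECDH(), base_priv.public_key())
    assert shared == base_priv.exchange(ec.ECDH(), node_priv.public_key())

    def nested_hmac(key: bytes, msg: bytes) -> bytes:
        # HMAC applied twice, as a stand-in for the paper's nested-HMAC scheme
        inner = hmac.new(key, msg, hashlib.sha256).digest()
        return hmac.new(key, inner, hashlib.sha256).digest()

    tag = nested_hmac(shared, b"sensor reading: 23.5C")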
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Machine Learning 2015

 

 
SoS Logo

Machine Learning

2015


Machine learning offers potential efficiencies and is an important tool in data mining. However, the “learned” or derived data must maintain integrity. Machine learning can also be used to identify threats and attacks. Research in this field relates to resilient architectures, composability, and privacy. Works cited here appeared in 2015.



Gaikwad, D.P.; Thool, R.C., “Intrusion Detection System Using Bagging Ensemble Method of Machine Learning,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 291–295, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.61
Abstract: Intrusion detection systems are widely used to protect and reduce damage to information systems. They protect virtual and physical computer networks against threats and vulnerabilities. Presently, machine learning techniques are widely applied to implement effective intrusion detection systems. Neural networks, statistical models, rule learning, and ensemble methods are some of the machine learning methods used for intrusion detection. Among them, ensemble methods are known for good performance in the learning process. Investigation of the appropriate ensemble method is essential for building an effective intrusion detection system. In this paper, a novel intrusion detection technique based on an ensemble method of machine learning is proposed. The Bagging method of ensemble learning, with REPTree as the base class, is used to implement the intrusion detection system. The relevant features from the NSL-KDD dataset are selected to improve the classification accuracy and reduce the false positive rate. The performance of the proposed ensemble method is evaluated in terms of classification accuracy, model building time and false positives. The experimental results show that the Bagging ensemble with the REPTree base class exhibits the highest classification accuracy. One advantage of using the Bagging method is that it takes less time to build the model. The proposed ensemble method provides competitively low false positives compared with other machine learning techniques.
Keywords: data analysis; learning (artificial intelligence); neural nets; security of data; statistical analysis; trees (mathematics); NSL-KDD dataset; REPTree; classification accuracy; intrusion detection system; machine learning techniques; neural network; physical computer networks; statistical models; using bagging ensemble method; virtual computer networks; Accuracy; Bagging; Classification algorithms; Feature extraction; Hidden Markov models; Intrusion detection; Training; Bagging; Ensemble; False positives; Machine learning; REPTree; intrusion detection (ID#: 15-6556)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155853&isnumber=7155781
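The bagging-over-trees setup described above maps directly onto scikit-learn (assumed available). DecisionTreeClassifier stands in here for Weka's REPTree, and synthetic data replaces NSL-KDD:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # bagging over a depth-limited tree, analogous to REPTree in Weka;
    # the keyword is estimator= in scikit-learn >= 1.2 (base_estimator before)
    model = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=8),
                              n_estimators=25, random_state=0)
    model.fit(X_tr, y_tr)
    print("accuracy:", model.score(X_te, y_te))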

 

Sapegin, A.; Gawron, M.; Jaeger, D.; Feng Cheng; Meinel, C., “High-Speed Security Analytics Powered by In-Memory Machine Learning Engine,” in Parallel and Distributed Computing (ISPDC), 2015 14th International Symposium on, vol., no., pp. 74–81, June 29 2015–July 2 2015. doi:10.1109/ISPDC.2015.16
Abstract: Modern Security Information and Event Management systems should be capable of storing and processing a high volume of events or log messages in different formats and from different sources. This requirement often prevents such systems from using computationally heavy algorithms for security analysis. To deal with this issue, we built our system on an in-memory database with an integrated machine learning library, namely SAP HANA. Three approaches, i.e., (1) deep normalisation of log messages, (2) storing data in main memory and (3) running data analysis directly in the database, allow us to increase processing speed to the point that machine learning analysis of security events becomes possible nearly in real time. To prove our concepts, we measured the processing speed of the developed system on data generated using an Active Directory testbed and showed the efficiency of our approach for high-speed analysis of security events.
Keywords: data analysis; learning (artificial intelligence); security of data; SAP HANA; active directory; computational-heavy algorithms; data analysis; deep log message normalisation; high-speed security analytics; high-speed security event analysis; in-memory database; in-memory machine learning engine; integrated machine learning library; machine learning analysis; security information and event management systems; Algorithm design and analysis; Computers; Databases; Libraries; Machine learning algorithms; Prediction algorithms; Security; in-memory; intrusion detection; machine learning; security (ID#: 15-6557)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165133&isnumber=7165113

 

Mehta, V.; Bahadur, P.; Kapoor, M.; Singh, P.; Rajpoot, S., “Threat Prediction Using Honeypot and Machine Learning,” in Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, vol., no., pp. 278–282, 25–27 Feb. 2015. doi:10.1109/ABLAZE.2015.7155011
Abstract: Data is an abstraction that encapsulates information. In today's era, businesses are data-driven: data gives insight into a business's destiny through predictions, but the other side of the coin is that data also helps keep the present health of the business under our radar and, looking back at the past, answer an important question: what exactly went wrong? In this paper we look into the architecture of frameworks that can predict threats using a honeypot as the source of data and various machine learning algorithms to make precise predictions, using OSSEC as the Host Intrusion Detection System (HIDS), SNORT as the Network Intrusion Detection System (NIDS), and Honeyd as an open-source honeypot.
Keywords: business data processing; computer network security; data encapsulation; learning (artificial intelligence); public domain software; HIDS; Honeyd; NIDS; OSSEC; SNORT; business prediction; data source; frameworks architecture; host intrusion detection system; information encapsulation; machine learning; network intrusion detection system; open source honeypot; threat prediction; Computer hacking; Conferences; IP networks; Intrusion detection; Market research; Operating systems; Ports (Computers); High Interaction Honeypots (HIH); Host Intrusion Detection System (HIDS); Low Interaction Honeypots (LIH); Network Intrusion Detection System (NIDS) (ID#: 15-6558)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155011&isnumber=7154914

 

Tagluk, M.E.; Mamis, M.S.; Arkan, M.; Ertugrul, O.F., “Detecting Fault Type and Fault Location in Power Transmission Lines by Extreme Learning Machines,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 1090–1093, 16–19 May 2015. doi:10.1109/SIU.2015.7130024
Abstract: The importance of supplying high-quality, uninterrupted electricity is increasing day by day. Therefore, detecting the fault, fault type and fault location is a major issue in power transmission systems in order to preserve power delivery system security. In previous studies, we observed that faults can be easily determined by an extreme learning machine (ELM), and the aim of this study is to determine the applicability of ELM to fault type, zone and location detection. Eight different feature sets were extracted from fault data produced by ATP, and these features were assessed by 15 different classifiers and 5 different regression methods. The results showed that ELM can be employed to detect fault types and locations successfully.
Keywords: fault location; learning (artificial intelligence); power engineering computing; power transmission faults; regression analysis; ELM; extreme learning machines; fault type detection; power transmission lines; regression method; Artificial neural networks; Fault location; Feature extraction; Niobium; Optical wavelength conversion; Power transmission lines; Support vector machines; Extreme Learning Machine; Fault Location; Fault Type; Power Transmission Lines (ID#: 15-6559)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130024&isnumber=7129794
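An extreme learning machine reduces training to a single least-squares solve: input weights are random and fixed, and only the output layer is fitted, via a pseudo-inverse. A minimal NumPy sketch with placeholder fault-feature data:

    import numpy as np

    def elm_train(X, y, hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], hidden))   # random, never trained
        b = rng.standard_normal(hidden)
        H = np.tanh(X @ W + b)                          # hidden-layer activations
        beta = np.linalg.pinv(H) @ y                    # one least-squares solve
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    X = np.random.rand(200, 8)                 # e.g., 8 features per fault record
    y = (X.sum(axis=1) > 4).astype(float)      # placeholder fault labels
    W, b, beta = elm_train(X, y)
    pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)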

 

Dan Jiang; Omote, K., “An Approach to Detect Remote Access Trojan in the Early Stage of Communication,” in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, vol., no., pp. 706–713, 24–27 March 2015. doi:10.1109/AINA.2015.257
Abstract: As data leakage accidents occur every year, the security of confidential information is becoming increasingly important. Remote Access Trojans (RAT), a kind of spyware, are used to invade the PC of a victim through targeted attacks. After the intrusion, the attacker can monitor and control the victim's PC remotely and wait for an opportunity to steal confidential information. Since it is hard to prevent the intrusion of RATs completely, preventing confidential information from being leaked back to the attacker is the main issue. Various existing approaches use the network behaviors of RATs to construct detection systems. Unfortunately, two challenges remain: one is to detect RAT sessions as early as possible; the other is to maintain high accuracy in detecting RAT sessions, given that there exist normal applications whose traffic behaves similarly to RATs. In this paper, we propose a novel approach to detect RAT sessions in the early stage of communication. To differentiate the network behaviors of normal applications and RATs, we extract features from the traffic of a short period of time at the beginning of a session. Afterward, we use machine learning techniques to train the detection model, then evaluate it by K-fold cross-validation. The results show that our approach is able to detect RAT sessions with high accuracy. In particular, our approach achieves over 96% accuracy together with an FNR of 10% using the Random Forest algorithm, which means that our approach is valid for detecting RAT sessions in the early stage of communication.
Keywords: invasive software; learning (artificial intelligence); K-fold cross-validation; RAT sessions; confidential information; data leakage accidents; feature extraction; intrusion; machine learning; network behaviors; random forest algorithm; remote access trojan detection; spyware; Accuracy; Feature extraction; Machine learning algorithms; Rats; Support vector machines; Training; Trojan horses; Remote Access Trojan detection; network behavior; targeted attack (ID#: 15-6560)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098042&isnumber=7097928
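The evaluation pipeline described above (a Random Forest scored by K-fold cross-validation) is a few lines of scikit-learn; the synthetic, imbalanced data stands in for the early-stage traffic features the paper extracts:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # stand-in for per-session features from the first seconds of traffic
    X, y = make_classification(n_samples=500, n_features=12,
                               weights=[0.9, 0.1], random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print("mean accuracy:", scores.mean())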

 

Kawaguchi, N.; Omote, K., “Malware Function Classification Using APIs in Initial Behavior,” in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp.138–144, 24–26 May 2015. doi:10.1109/AsiaJCIS.2015.15
Abstract: Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct efficient analysis of malware, estimating their functions in advance is effective when prioritizing which samples to analyze. However, estimating malware functions has been difficult due to the increasing sophistication of malware. Although various approaches for malware detection and classification have been considered, the classification accuracy is still low. In this paper, we propose a new classification method that estimates a malware's functions from the APIs observed by dynamic analysis on a host. We examine whether the proposed method can correctly classify unknown malware by function using machine learning. The results show that our new method can classify each malware's function with an average accuracy of 83.4%.
Keywords: Internet; invasive software; learning (artificial intelligence); pattern classification; API; Internet; dynamic analysis; efficient malware analysis; illegal tools; initial behavior; machine learning; malware detection; malware function classification; malware proliferation; Accuracy; Data mining; Feature extraction; Machine learning algorithms; Malware; Software; Support vector machines; malware classification (ID#: 15-6561)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153948&isnumber=7153836

 

Beiye Liu; Chunpeng Wu; Hai Li; Yiran Chen; Qing Wu; Barnell, M.; Qinru Qiu, “Cloning Your Mind: Security Challenges in Cognitive System Designs and Their Solutions,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–5, 8–12 June 2015. doi:10.1145/2744769.2747915
Abstract: With the boom in big-data applications, cognitive information processing systems that leverage advanced data processing technologies, e.g., machine learning and data mining, are widely used in many industry fields. Although these technologies demonstrate great processing capability and accuracy in the relevant applications, several security and safety challenges are also emerging against these learning-based technologies. In this paper, we first introduce several security concerns in cognitive system designs. Some real examples are then used to demonstrate how attackers can potentially access confidential user data, replicate a sensitive data processing model without being granted access to the details of the model, and obtain key features of the training data by using services publicly accessible to a normal user. Based on the analysis of these security challenges, we also discuss several possible solutions that can protect the information privacy and security of cognitive systems during different stages of usage.
Keywords: Big Data; cognition; security of data; Big-Data application; cognitive information processing systems; cognitive system design; data mining; data security; machine learning; sensitive data processing model; Data models; Neural networks; Predictive models; Security; Training; Training data; Cognitive Systems; Machine Learning; Security (ID#: 15-6562)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167279&isnumber=7167177

 

Chih-Hung Hsieh; Yu-Siang Shen; Chao-Wen Li; Jain-Shing Wu, “iF2: An Interpretable Fuzzy Rule Filter for Web Log Post-Compromised Malicious Activity Monitoring,” in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp.130–137, 24–26 May 2015. doi:10.1109/AsiaJCIS.2015.19
Abstract: To alleviate the load of tracking web log files by human effort, machine learning methods are now commonly used to analyze log data and to identify patterns of malicious activity. Traditional kernel-based techniques, like the neural network and the support vector machine (SVM), can typically deliver higher prediction accuracy. However, the user of a kernel-based technique normally cannot get an overall picture of the distribution of the data set. On the other hand, logic-based techniques, such as the decision tree and the rule-based algorithm, have the advantage of presenting a good summary of the distinctive characteristics of different classes of data, so they are more suitable for generating interpretable feedback to domain experts. In this study, a real web-access log dataset from a certain organization was collected. An efficient interpretable fuzzy rule filter (iF2) was proposed as a filter to analyze the data and to distinguish suspicious internet addresses from normal ones. The historical information of each internet address recorded in the web log file is summarized as multiple statistics, and the design process of iF2 is modeled as a parameter optimization problem that simultaneously considers 1) maximizing prediction accuracy, 2) minimizing the number of rules used, and 3) minimizing the number of selected statistics. Experimental results show that the fuzzy rule filter constructed with the proposed approach is capable of delivering superior prediction accuracy in comparison with conventional logic-based classifiers and the expectation-maximization-based kernel algorithm. Although it cannot match the prediction accuracy delivered by the SVM, when facing real web log files, where the ratio of positive to negative cases is extremely unbalanced, the optimization flexibility of iF2 results in a better recall rate and has one major advantage: it provides the user with an overall picture of the underlying distributions.
Keywords: Internet; data mining; fuzzy set theory; learning (artificial intelligence); neural nets; pattern classification; statistical analysis; support vector machines; Internet address; SVM; Web log file tracking; Web log post-compromised malicious activity monitoring; Web-access log dataset; decision tree; expectation maximization based kernel algorithm; fuzzy rule filter; iF2; interpretable fuzzy rule filter; kernel based techniques; log data analysis; logic based classifiers; logic based techniques; machine learning methods; malicious activities; neural network; parameter optimization problem; recall rate; rule-based algorithm; support vector machine; Accuracy; Internet; Kernel; Monitoring; Optimization; Prediction algorithms; Support vector machines; Fuzzy Rule Based Filter; Machine Learning; Parameter Optimization; Pattern Recognition; Post-Compromised Threat Identification; Web Log Analysis (ID#: 15-6563)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153947&isnumber=7153836

 

Zheng Dong; Kapadia, A.; Blythe, J.; Camp, L.J., “Beyond the Lock Icon: Real-Time Detection of Phishing Websites Using Public Key Certificates,” in Electronic Crime Research (eCrime), 2015 APWG Symposium on, vol., no., pp.1–12, 26–29 May 2015. doi:10.1109/ECRIME.2015.7120795
Abstract: We propose a machine-learning approach to detect phishing websites using features from their X.509 public key certificates. We show that its efficacy extends beyond HTTPS-enabled sites. Our solution enables immediate local identification of phishing sites. As such, it serves as an important complement to existing server-based anti-phishing mechanisms, which predominantly use blacklists. Blacklisting suffers from several inherent drawbacks in terms of correctness, timeliness, and completeness. Due to the potentially significant lag prior to site blacklisting, there is a window of opportunity for attackers. Other local client-side phishing detection approaches also exist, but they primarily rely on page content or URLs, which are arguably easier for attackers to manipulate. We illustrate that our certificate-based approach greatly increases the difficulty for phishers of masquerading undetected, with single-millisecond delays for users. We further show that this approach works not only against HTTPS-enabled phishing attacks, but also detects HTTP phishing attacks with port 443 enabled.
Keywords: Web sites; computer crime; learning (artificial intelligence); public key cryptography; HTTPS-enabled phishing attack; Web site phishing detection; machine-learning approach from; public key certificate; server-based antiphishing mechanism; site blacklisting; Browsers; Electronic mail; Feature extraction; Public key; Servers; Uniform resource locators; certificates; machine learning; security (ID#: 15-6564)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120795&isnumber=7120794
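Certificate-derived features of the kind the paper learns from can be extracted with the cryptography package (assumed available). Which fields are actually predictive is the paper's contribution, so the selection below is illustrative only:

    from cryptography import x509

    def cert_features(pem_bytes: bytes) -> dict:
        # one feature vector per certificate; the chosen fields are examples
        cert = x509.load_pem_x509_certificate(pem_bytes)
        validity = cert.not_valid_after - cert.not_valid_before
        return {
            "validity_days": validity.days,
            "issuer": cert.issuer.rfc4514_string(),
            "is_self_signed": cert.issuer == cert.subject,
            "key_size": cert.public_key().key_size,
        }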

 

Egemen, E.; Inal, E.; Levi, A., “Mobile Malware Classification Based on Permission Data,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 1529–1532, 16–19 May 2015. doi:10.1109/SIU.2015.7130137
Abstract: The prevalence of mobile devices in today's world has caused the security of these devices to be questioned more frequently than ever. Android, as one of the most widely used mobile operating systems, is the most likely target for malware delivered through third-party applications. In this work, a method has been devised to detect malware that targets the Android platform, using classification-based machine learning. In this study, we use the permissions of applications as features. After the training and test steps on a dataset consisting of 5,271 malware and 5,097 goodware samples, we conclude that Random Forest classification achieves 98% performance on the classification of applications. This work emphasizes how much mobile malware classification can be improved by a system using only permission data.
Keywords: Android (operating system); invasive software; learning (artificial intelligence); mobile computing; pattern classification; Android; classification based machine learning; device security; malware detection; mobile devices; mobile malware classification; mobile operating systems; permission data; random forest classification; third party applications; Androids; Google; Humanoid robots; Malware; Mobile communication; Support vector machines; android; classification; machine learning; malware; mobile; permissions (ID#: 15-6565)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130137&isnumber=7129794
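Turning per-app permission sets into classifier features is a one-liner with scikit-learn's MultiLabelBinarizer; the permission names and labels below are illustrative:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.preprocessing import MultiLabelBinarizer

    apps = [
        {"INTERNET", "SEND_SMS", "READ_CONTACTS"},     # suspected malware
        {"INTERNET", "ACCESS_FINE_LOCATION"},          # goodware
        {"SEND_SMS", "RECEIVE_BOOT_COMPLETED"},        # suspected malware
    ]
    labels = [1, 0, 1]

    X = MultiLabelBinarizer().fit_transform(apps)      # one binary column per permission
    clf = RandomForestClassifier(random_state=0).fit(X, labels)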

 

Patrascu, Alecsandru; Patriciu, Victor-Valeriu, “Cyber Protection of Critical Infrastructures Using Supervised Learning,” in Control Systems and Computer Science (CSCS), 2015 20th International Conference on, vol., no., pp. 461–468, 27–29 May 2015. doi:10.1109/CSCS.2015.34
Abstract: Interconnected computing units are used more and more in our daily lives, from transportation systems to gas and electricity distribution, together with tens or hundreds of systems and sensors, called critical infrastructures. In this context, cyber protection is vital because these infrastructures represent one of the most important parts of a country's economy, making them very attractive to cyber criminals and malware attacks. Even though detection technologies for new threats have improved over time, modern malware still manages to pass even the most secure and well-organized computer networks, firewalls and intrusion detection equipment, leaving all systems vulnerable. This is the main reason that automatic learning is used more often than other detection algorithms: it can learn from existing attacks and prevent newer ones. In this paper we discuss the issues threatening critical infrastructure systems and propose a framework based on machine learning algorithms and game theory decision models that can be used to protect such systems. We present results obtained after implementing it using three distinct classifiers: k-nearest neighbors, decision trees and support vector machines.
Keywords: Biological system modeling; Game theory; Security; Sensors; Support vector machines; Testing; Training; critical infrastructure protection; cybersecurity framework; game theory decision engine; machine learning (ID#: 15-6566)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168469&isnumber=7168393

 

Adachi, T.; Omote, K., “An Approach to Predict Drive-by-Download Attacks by Vulnerability Evaluation and Opcode,” in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp. 145–151, 24–26 May 2015. doi:10.1109/AsiaJCIS.2015.17
Abstract: Drive-by-download attacks exploit vulnerabilities in Web browsers, and users unknowingly download malware when they access compromised Web sites. A number of detection approaches and tools against such attacks have been proposed so far. In particular, it is becoming easier to pinpoint the vulnerabilities used in attacks, because researchers analyze attack trends thoroughly. Unfortunately, in previous schemes, vulnerability information has not been used in the detection/prediction of drive-by-download attacks. In this paper, we propose a prediction approach for “malware downloading” during drive-by-download attacks (approach-I) that uses vulnerability information. Our experimental results show that approach-I achieves a prediction rate (accuracy) of 92%, an FNR of 15% and an FPR of 1.0% using Naive Bayes. Furthermore, we propose an enhanced approach (approach-II) that embeds Opcode analysis (dynamic analysis) into approach-I (a static approach). We implement approaches I and II and compare the three approaches (approach-I, approach-II and the Opcode approach) on the same datasets in our experiment. As a result, approach-II has a prediction rate of 92% and improves the FNR to 11% using Random Forest, compared with approach-I.
Keywords: Web sites; invasive software; learning (artificial intelligence); system monitoring; FNR; FPR; Opcode analysis; Web browsers; Web sites; attack vulnerabilities; drive-by-download attack prediction; dynamic analysis; malware downloading; naive Bayes; prediction rate; random forest; static approach; vulnerability evaluation; vulnerability information; Browsers; Feature extraction; Machine learning algorithms; Malware; Predictive models; Probability; Web pages; Drive-by-Download Attacks; Malware; Supervised Machine Learning (ID#: 15-6567)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153949&isnumber=7153836

 

Gilmore, R.; Hanley, N.; O’Neill, M., “Neural Network Based Attack on a Masked Implementation of AES,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, vol., no., pp. 106–111, 5–7 May 2015. doi:10.1109/HST.2015.7140247
Abstract: Masked implementations of cryptographic algorithms are often used in commercial embedded cryptographic devices to increase their resistance to side channel attacks. In this work we show how neural networks can be used to both identify the mask value, and to subsequently identify the secret key value with a single attack trace with high probability. We propose the use of a pre-processing step using principal component analysis (PCA) to significantly increase the success of the attack. We have developed a classifier that can correctly identify the mask for each trace, hence removing the security provided by that mask and reducing the attack to being equivalent to an attack against an unprotected implementation. The attack is performed on the freely available differential power analysis (DPA) contest data set to allow our work to be easily reproducible. We show that neural networks allow for a robust and efficient classification in the context of side-channel attacks.
Keywords: cryptography; neural nets; pattern classification; principal component analysis; AES; Advanced Encryption Standard; DPA; PCA; cryptographic algorithms; differential power analysis contest data set; embedded cryptographic devices; machine learning; mask value identification; masked implementation; neural network based attack; principal component analysis; secret key value identification; side channel attacks; Artificial neural networks; Cryptography; Error analysis; Hardware; Power demand; Principal component analysis; Training; AES; SCA; masking; neural network (ID#: 15-6568)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140247&isnumber=7140225
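The PCA-then-classifier pipeline can be expressed compactly in scikit-learn; the random trace matrix, component count, and network size below are placeholders, not the paper's tuned attack:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # rows = power traces, columns = time samples; labels = mask (or key) values
    traces = np.random.rand(2000, 500)
    labels = np.random.randint(0, 16, size=2000)

    attack = make_pipeline(PCA(n_components=20),
                           MLPClassifier(hidden_layer_sizes=(50,), max_iter=300))
    attack.fit(traces, labels)
    guess = attack.predict(traces[:1])   # classify a single attack trace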

 

Kavitha, P.; Mukesh, R., “To Detect Malicious Nodes in the Mobile Ad-hoc Networks Using Soft Computing Technique,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp.1564–1573, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124851
Abstract: A Mobile Ad-hoc Network (MANET) is a continuously self-configuring, infrastructure-less network of mobile devices in which each device is wireless, moves freely and acts as a router, forwarding traffic unrelated to its own use. Every device must be prepared to continuously maintain the information required to route traffic, and this is the main challenge in building a MANET. Such networks may be self-operating or linked to the larger internet and may have one or multiple different transceivers between nodes, resulting in a highly dynamic and autonomous topology. The first focus is on MANET attacks, followed by detection of malicious nodes in the MANET via a polynomial-reduction algorithm. Although scientists have assessed many algorithms for the detection and rectification of malicious nodes in MANETs, the problem still persists. Due to the unprecedented growth in technology, unidentified vulnerabilities are also intensifying. Therefore, it is crucial to come up with ground-breaking ideas to protect the MANET. In this paper we use the NS2 simulator to implement malicious nodes in a MANET.
Keywords: Internet; learning (artificial intelligence); mobile ad hoc networks; polynomials; telecommunication network routing; telecommunication network topology; telecommunication security; telecommunication traffic; uncertainty handling; Internet; MANET attacks; NS2 simulator; autonomous topology; dynamic topology; infrastructure-less network; machine learning algorithm; malicious node detection; malicious node rectification; mobile devices; polynomial-reduction algorithm; self-configuring network; soft computing technique; traffic routing; transceivers; Mobile ad hoc networks; Mobile communication; Routing; Routing protocols; Security; Machine Learning Algorithm; Mobile Ad-hoc Networks; Polynomial-Reduction Algorithm (ID#: 15-6569)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124851&isnumber=7124722

 

Neelam, Sahil; Sood, Sandeep; Mehmi, Sandeep; Dogra, Shikha, “Artificial Intelligence for Designing User Profiling System for Cloud Computing Security: Experiment,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 51–58, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164645
Abstract: In cloud computing security, the existing mechanisms (anti-virus programs, authentication, firewalls) are not able to withstand the dynamic nature of threats. A user profiling system, which registers a user's activities in order to analyze the user's behavior, augments the security system to work in a proactive and reactive manner and provides enhanced security. This paper focuses on designing a user profiling system for the cloud environment using artificial intelligence techniques, studies the behavior of the user profiling system, and proposes a new hybrid approach that delivers a comprehensive user profiling system for cloud computing security.
Keywords: artificial intelligence; authorisation; cloud computing; firewalls; antivirus programs; artificial intelligence techniques; authentications; cloud computing security; cloud environment; firewalls; proactive manner; reactive manner; user activities; user behavior; user profiling system; Artificial intelligence; Cloud computing; Computational modeling; Fuzzy logic; Fuzzy systems; Genetic algorithms; Security; Artificial Intelligence; Artificial Neural Networks; Cloud Computing; Datacenters; Expert Systems; Genetics; Machine Learning; Multi-tenancy; Networking Systems; Pay-as-you-go Model (ID#: 15-6570)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164645&isnumber=7164643

 

Enache, Adriana-Cristina; Sgarciu, Valentin; Petrescu-Nita, Alina, “Intelligent Feature Selection Method Rooted in Binary Bat Algorithm for Intrusion Detection,” in Applied Computational Intelligence and Informatics (SACI), 2015 IEEE 10th Jubilee International Symposium on, vol., no., pp. 517–521, 21–23 May 2015. doi:10.1109/SACI.2015.7208259
Abstract: The multitude of hardware and software applications generates a lot of data and burdens security solutions that must acquire information from all these heterogeneous systems. Add the current dynamic and complex cyber threats to this context, and it is clear that new security solutions are needed. In this paper we propose a wrapper feature selection approach that combines two machine learning algorithms with an improved version of the Binary Bat Algorithm. Tests on the NSL-KDD dataset empirically prove that our proposed method can reduce the number of features by almost 60% and obtains good results in terms of attack detection rate and false alarm rate, even for unknown attacks.
Keywords: Feature extraction; Intrusion detection; Machine learning algorithms; Niobium; Silicon; Support vector machines; Training; Feature selection; Naïve Bayes and BBA; SVM (ID#: 15-6571)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208259&isnumber=7208165

 

Aggarwal, P.; Sharma, S.K., “An Empirical Comparison of Classifiers to Analyze Intrusion Detection,” in Advanced Computing & Communication Technologies (ACCT), 2015 Fifth International Conference on, vol., no., pp. 446–450, 21–22 Feb. 2015. doi:10.1109/ACCT.2015.59
Abstract: The massive data exchange on the web has deeply increased the risk of malicious activities, thereby propelling research in the area of Intrusion Detection Systems (IDS). This paper first selects ten classification algorithms based on their efficiency in terms of speed, capability to handle large datasets and dependency on parameter tuning, and then simulates the ten selected classifiers in the data mining tool Weka on the KDD'99 dataset. The simulation results are evaluated and benchmarked using generic evaluation metrics for IDS, such as F-score and accuracy.
Keywords: Internet; data mining; electronic data interchange; pattern classification; security of data; F-score; IDS; Web; Weka; classification algorithms; data classifiers; data mining tool; generic evaluation metrics; intrusion detection system; malicious activities; massive data exchange; parameter tuning; Accuracy; Classification algorithms; Intrusion detection; Machine learning algorithms; Mathematical model; Measurement; Vegetation; Classification algorithm; Intrusion detection system; NSL-KDD (ID#: 15-6572)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079125&isnumber=7079031
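For reference, the F-score named above combines precision P and recall R as F = 2PR / (P + R). Computed directly from confusion-matrix counts:

    def f_score(tp: int, fp: int, fn: int) -> float:
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    print(f_score(tp=90, fp=10, fn=20))   # 0.857...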

 

Mokhov, S.A.; Paquet, J.; Debbabi, M., “MARFCAT: Fast Code Analysis for Defects and Vulnerabilities,” in Software Analytics (SWAN), 2015 IEEE 1st International Workshop on, vol., no., pp. 35–38, 2–2 March 2015. doi:10.1109/SWAN.2015.7070488
Abstract: We present a fast machine-learning approach to static code analysis and fingerprinting for weaknesses related to security, software engineering, and other domains, using the open-source MARF framework and its MARFCAT application. We used the data sets from NIST's SATE IV static analysis tool exposition workshop, which included popular open-source projects and large synthetic sets, as test cases. To aid detection of weak or vulnerable code, whether source or binary, on different platforms, the machine learning approach proved to be fast and accurate for tasks where other tools are either much slower or recall far fewer known vulnerabilities. We use signal processing techniques in our approach to accomplish the classification tasks. MARFCAT's design is independent of the language being analyzed, whether source code, bytecode, or binary.
Keywords: learning (artificial intelligence); pattern classification; program diagnostics; signal processing; MARF-based Code Analysis Tool; MARFCAT; NIST; SATE IV; defects; fingerprinting; machine-learning; open-source MARF framework; open-source projects; signal processing techniques; static analysis tool exposition workshop data sets; static code analysis; vulnerabilities; Algorithm design and analysis; Feature extraction; Indexes; Java; Testing; Wavelet transforms (ID#: 15-6573)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070488&isnumber=7070476

 

Appelt, D.; Nguyen, C.D.; Briand, L., “Behind an Application Firewall, Are We Safe from SQL Injection Attacks?,” in Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, vol., no., pp. 1–10, 13–17 April 2015. doi:10.1109/ICST.2015.7102581
Abstract: Web application firewalls are an indispensable layer for protecting online systems from attacks. However, the fast pace at which new kinds of attacks appear, and their sophistication, require that firewalls be updated and tested regularly, as otherwise they will be circumvented. In this paper, we focus our research on web application firewalls and SQL injection attacks. We present a machine learning-based testing approach to detect holes in firewalls that let SQL injection attacks pass. Initially, the approach automatically generates diverse attack payloads, which can be seeded into the inputs of web-based applications, and then submits them to a system that is protected by a firewall. Incrementally learning from the tests that are blocked or passed by the firewall, our approach then selects tests that exhibit characteristics associated with bypassing the firewall and mutates them to efficiently generate new bypassing attacks. In the race against cyber attacks, time is vital; being able to learn and anticipate more attacks that can circumvent a firewall in a timely manner is very important in order to quickly fix or fine-tune the firewall. We developed a tool that implements the approach and evaluated it on ModSecurity, a widely used application firewall. The results we obtained suggest good performance and efficiency in detecting holes in the firewall that could let SQLi attacks go undetected.
Keywords: Internet; SQL; firewalls; learning (artificial intelligence); ModSecurity; SQL injection attacks; SQLi attacks; Web application firewalls; bypassing attacks; cyber attacks; machine learning-based testing approach; online system protection; Databases; Grammar; Radio access networks; Security; Servers; Syntactics; Testing (ID#: 15-6574)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102581&isnumber=7102573

 

Stampar, M.; Fertalj, K., “Artificial Intelligence in Network Intrusion Detection,” in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, vol., no., pp.1318–1323, 25–29 May 2015. doi:10.1109/MIPRO.2015.7160479
Abstract: In the past, detection of network attacks was done almost solely by human operators, who anticipated network anomalies in front of consoles and, based on their expert knowledge, applied the necessary security measures. With the exponential growth of network bandwidth, this task slowly came to demand substantial improvements in both speed and accuracy. One proposed way to achieve this is the use of artificial intelligence (AI), a progressive and promising branch of computer science, particularly one of its sub-fields, machine learning (ML), whose main idea is learning from data. In this paper the authors give a general overview of AI algorithms, with the main focus on their use for network intrusion detection.
Keywords: computer network security; learning (artificial intelligence); AI algorithm; ML; artificial intelligence; expert knowledge; machine learning; network attacks detection; network bandwidth; network intrusion detection;  Artificial intelligence; Artificial neural networks; Classification algorithms; Intrusion detection; Market research; Niobium; Support vector machines (ID#: 15-6575)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160479&isnumber=7160221

 

Tao Ding; AlEroud, A.; Karabatis, G., “Multi-Granular Aggregation of Network Flows for Security Analysis,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 173–175, 27–29 May 2015. doi:10.1109/ISI.2015.7165965
Abstract: Investigating network flows is an approach to detecting attacks by identifying known patterns. Flow statistics are used to discover anomalies by aggregating network traces and then using machine-learning classifiers to discover suspicious activities. However, the efficiency and effectiveness of flow classification models depend on the granularity of aggregation. This paper describes a novel approach that aggregates packets into network flows and correlates them with security events generated by payload-based IDSs for detection of cyber-attacks.
Keywords: computer network security; learning (artificial intelligence); pattern classification; statistical analysis; cyber-attack; machine-learning classifier; multigranular aggregation; network flow statistics; payload-based IDS; security analysis; security event; Correlation; Grippers; Hidden Markov models; IP networks; Intrusion detection; Predictive models; Flow aggregation; Intrusion Detection; NetFlow; traffic classification (ID#: 15-6576)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165965&isnumber=7165923

 

Becker, G.T.; Wild, A.; Guneysu, T., “Security Analysis of Index-Based Syndrome Coding for PUF-Based Key Generation,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, vol., no., pp. 20–25, 5–7 May 2015. doi:10.1109/HST.2015.7140230
Abstract: Physical Unclonable Functions (PUFs) as secure providers of cryptographic keys have gained significant research interest in recent years. Since plain PUF responses are typically unreliable, error-correcting mechanisms are employed to transform a fuzzy PUF response into a deterministic cryptographic key. In this context, Index-Based Syndrome Coding (IBS) has been reported as being provably secure in the case of independent and identically distributed PUF responses and is therefore an interesting option for implementing a highly secure key provider. In this paper we analyze the security of IBS in combination with a k-sum PUF as proposed at CHES 2011. Since for a k-sum PUF the assumption of independent and identically distributed responses does not hold, the notion of leaked bits was introduced at CHES 2011 to capture the security of such constructions. Based on a refined analysis using Hamming distance characterization and machine learning techniques, we show that the entropy of the key obtained is significantly lower than expected. More precisely, our findings show that even the CHES construction with the highest security claims achieves a bit-entropy rate of only 0.39.
Keywords: cryptography; fuzzy set theory; learning (artificial intelligence); CHES 2011; IBS; PUF-based key generation; cryptographic keys; deterministic cryptographic key; error-correcting mechanisms; fuzzy PUF response; hamming distance characterization; index-based syndrome coding; k-sum PUF; machine learning techniques; physical unclonable functions; Cost function; Decoding; Encoding; Entropy; Hamming distance; Measurement; Security; Error-Correction; Fuzzy Extractor; Index-Based Syndrome Coding; Physical Unclonable Functions; k-sum PUF (ID#: 15-6577)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140230&isnumber=7140225
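The Hamming-distance characterization used in the analysis is simple to reproduce: pairwise distances between response bit-vectors should cluster near half the bit length if responses really were independent and identically distributed. A NumPy sketch under that assumption:

    import numpy as np

    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(100, 128))   # 100 simulated 128-bit responses

    dists = [int(np.count_nonzero(responses[i] != responses[j]))
             for i in range(len(responses))
             for j in range(i + 1, len(responses))]
    print("mean pairwise HD:", np.mean(dists))        # ~64 for i.i.d. 128-bit responses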

 

Merat, S.; Almuhtadi, W., “Artificial Intelligence Application for Improving Cyber-Security Acquirement,” in Electrical and Computer Engineering (CCECE), 2015 IEEE 28th Canadian Conference on, vol., no., pp.1445–1450, 3–6 May 2015. doi:10.1109/CCECE.2015.7129493
Abstract: The main focus of this paper is the improvement of machine learning where a number of different types of computer processes can be mapped in a multitasking environment. A software mapping and modelling paradigm named SHOWAN is developed to learn and characterize the cyber awareness behaviour of a computer process against multiple concurrent threads. The examined process initially tended to manage numerous tasks poorly, but it gradually learned to acquire and control tasks, in the context of anomaly detection. Finally, SHOWAN plots the abnormal activities of a manually projected task and compares them with the loading trends of other tasks within the group.
Keywords: learning (artificial intelligence); security of data; SHOWAN; anomaly detection; artificial intelligence application; computer process; concurrent threads; cyber awareness behaviour; cyber-security acquirement; machine learning; modelling paradigm; multitasking environment; software mapping; Artificial intelligence; Indexes; Instruction sets; Message systems; Routing; Security; Cyber Multitasking Performance; Cyber-Attack; Cyber-Security; Intrinsically locked; Non-maskable task; Normative Model; Queuing Management; Task Prioritization; synchronized thread (ID#: 15-6578)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129493&isnumber=7129089 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Protocol Verification 2015

 

 
SoS Logo

Protocol Verification

2015


Verifying the accuracy of security protocols is a primary goal of cybersecurity. Research in this area seeks to identify new and better algorithms and to develop improved methods for verifying security protocols across myriad applications and environments. Verification has implications for compositionality and composability, and for policy-based collaboration. The works cited here were presented in 2015.



Cheng-Rung Tsai; Ming-Chun Hsiao; Wen-Chung Shen; Wu, A.-Y.A.; Chen-Mou Cheng, “A 1.96 mm2 Low-Latency Multi-Mode Crypto-Coprocessor for PKC-Based IoT Security Protocols,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 834–837, 24–27 May 2015. doi:10.1109/ISCAS.2015.7168763
Abstract: In this paper, we present the implementation of a multi-mode crypto-coprocessor, which can support three different public-key cryptography (PKC) engines (NTRU, TTS, Pairing) used in post-quantum and identity-based cryptosystems. The PKC-based security protocols are more energy-efficient because they usually require less communication overhead than symmetric-key-based counterparts. In this work, we propose the first-of-its-kind tri-mode PKC coprocessor for secured data transmission in Internet-of-Things (IoT) systems. For the purpose of low energy consumption, the crypto-coprocessor incorporates three design features, including 1) specialized instruction set for the multi-mode cryptosystems, 2) a highly parallel arithmetic unit for cryptographic kernel operations, and 3) a smart scheduling unit with intelligent control mechanism. By utilizing the parallel arithmetic unit, the proposed crypto-coprocessor can achieve about 50% speed up. Meanwhile, the smart scheduling unit can save up to 18% of the total latency. The crypto-coprocessor was implemented with AHB interface in TSMC 90nm CMOS technology, and the die size is only 1.96 mm2. Furthermore, our chip is integrated with an ARM-based system-on-chip (SoC) platform for functional verification.
Keywords: CMOS integrated circuits; Internet of Things; coprocessors; cryptographic protocols; CMOS technology; Internet-of-Things systems; IoT security protocols; IoT systems; PKC based security protocols; PKC coprocessor; PKC engines; SoC platform; cryptographic kernel operations; functional verification; highly parallel arithmetic unit; identity based cryptosystems; intelligent control mechanism; multimode cryptocoprocessor; parallel arithmetic unit; post quantum cryptosystems; public key cryptography; secured data transmission; smart scheduling unit; symmetric key based counterparts; system-on-chip; Computer architecture; Elliptic curve cryptography; Engines; Polynomials; System-on-chip; IoT; Public-key cryptography; SoC; crypto-coprocessor (ID#: 15-6579)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168763&isnumber=7168553

 

Ihsan, A.; Saghar, K.; Fatima, T., “Analysis of LEACH Protocol(s) Using Formal Verification,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 254–262, 13–17 Jan. 2015. doi:10.1109/IBCAST.2015.7058513
Abstract: WSN nodes operate in an unattended environment and thus have irreplaceable batteries. An important concern is therefore the network lifetime; we need to utilize the nodes' energy for as long as possible, otherwise the nodes run out of power. For this purpose various protocols have been established, and the subject of this paper is the LEACH protocol. The LEACH protocol is self-organizing and is characterized as an adaptive clustering protocol that randomly distributes the energy load among nodes. By using cluster heads and data aggregation, excessive energy consumption is avoided. In this paper we analyzed LEACH and its extensions, LEACH-C and LEACH-F, using formal modeling techniques. Formal modeling is often used by researchers these days to verify a variety of routing protocols. By using formal verification one can precisely confirm the authenticity of results and worst-case scenarios, which is not possible using computer simulations and hardware implementation. In this paper, we have applied formal verification to compare how efficient LEACH is relative to its extensions in various WSN topologies. The paper is not about design improvement of LEACH but about formally verifying its correctness, efficiency, and performance as already stated. This work is novel, as LEACH and its extensions, to our knowledge, have not been analyzed using formal verification techniques.
Keywords: formal verification; routing protocols; telecommunication power management; wireless sensor networks; LEACH protocol analysis; LEACH-C; LEACH-F; WSN nodes; WSN topologies; adaptive clustering protocol; cluster heads; data aggregation; energy load; formal modeling techniques; formal verification techniques; irreplaceable batteries; network lifetime; routing protocols; unattended environment; Formal verification; IP networks; Routing protocols; Wireless sensor networks; Formal Modeling; Protocol Verification Hierarchal Networks; Routing Protocol; Wireless Sensor Networks (WSN) (ID#: 15-6580)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058513&isnumber=7058466
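
As background for the analysis above, the following minimal Python sketch (ours, based on the standard LEACH description rather than the authors' formal models) shows the randomized cluster-head election on which LEACH's energy balancing rests: a node becomes cluster head with threshold T(n) = P / (1 - P * (r mod 1/P)), and nodes that have already served in the current epoch sit out.

import random

def leach_threshold(P, r):
    # LEACH threshold T(n) for election probability P and round r within an epoch.
    return P / (1.0 - P * (r % round(1.0 / P)))

def elect_cluster_heads(nodes, P, r, served):
    heads = [n for n in nodes
             if n not in served and random.random() < leach_threshold(P, r)]
    served.update(heads)   # a node serves at most once per epoch
    return heads

random.seed(7)
served = set()
for r in range(10):        # one epoch of 1/P = 10 rounds
    print(r, elect_cluster_heads(range(20), 0.1, r, served))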

 

Mahesh, Golla; Sakthivel SM, “Functional Verification of the Axi2OCP Bridge Using System Verilog and Effective Bus Utilization Calculation for AMBA AXI 3.0 Protocol,” in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193091
Abstract: Verification is the process of exploring the correct functioning of a design. It is not possible to guarantee a design without proper verification, because misunderstanding and misinterpretation of the specifications, or incorrect interaction between the cores and IPs, leads to unexpected behavior of the system. Functional verification plays a key role in validating a design and its functionality. The AXI2OCP Bridge connects two different protocols, i.e., the Advanced eXtensible Interface and the Open Core Protocol. The AXI2OCP Bridge helps in converting AXI 3.0 format signals to OCP format signals, AXI addresses to OCP addresses, and AXI data to OCP data. Effective bus utilization leads to a faster data rate with increased performance. Measuring the bus utilization parameter for test cases generated by the AXI 3.0 protocol, together with the functional verification of the AXI2OCP Bridge using the SystemVerilog language, is the main idea of this paper.
Keywords: Bridges; Computer aided software engineering; Hardware; AXI 3.0 Protocol; AXI2OCP Bridge; Busy count and Bus utilization; Functional verification; Valid count (ID#: 15-6581)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193091&isnumber=7192777
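
The effective-bus-utilization metric named in the title can be made concrete with a small sketch (ours; the valid/busy cycle definitions are assumptions based on the paper's "valid count" and "busy count" keywords): utilization is the share of busy bus cycles that actually carry valid data.

# Hypothetical cycle trace: (bus_busy, data_valid) per clock cycle.
trace = [(True, True), (True, False), (True, True), (False, False),
         (True, True), (True, False)]

busy_count = sum(1 for busy, _ in trace if busy)
valid_count = sum(1 for busy, valid in trace if busy and valid)
print(f"bus utilization = {valid_count}/{busy_count} = {valid_count / busy_count:.2f}")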

 

Filali, R.; Bouhdadi, M., “Formal Verification of the Lowe Modified BAN Concrete Andrew Secure RPC Protocol,” in RFID and Adaptive Wireless Sensor Networks (RAWSN), 2015 Third International Workshop on, vol., no., pp. 18–22, 13–15 May 2015. doi:10.1109/RAWSN.2015.7173272
Abstract: The Andrew Secure RPC (ASRPC) is a protocol that allows two entities that already share a secret key to establish a new cryptographic key. It must guarantee the authenticity of the new shared session key in every session. Since the original protocol, several revised versions have been made in order to make the protocol more secure. In this paper we present a formal development to prove the correctness of the newly modified version; we use the formal method Event-B and the Rodin tool to model the protocol and to verify that the desired security properties hold. We show that the protocol is indeed more secure than the previous versions.
Keywords: cryptographic protocols; formal verification; telecommunication security; ASRPC; Andrew secure RPC protocol; Burrows-Abadi-Needham protocol; Event-B formal method; Lowe modified BAN; Rodin tool; cryptographic key; formal verification; security properties; sharing session key; Authentication; Concrete; Cryptography; Niobium; Protocols; Servers; Andrew Secure RPC; Event-B; Formal Modelling; Refinement; Rodin (ID#: 15-6582)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7173272&isnumber=7173261

 

Chaithanya, B.S.; Gopakumar, G.; Krishnan, D.K.; Rao, S.K.; Oommen, B.C., “Assertion Based Reconfigurable Testbenches for Efficient Verification and Better Reusability,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 1511–1514, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124840
Abstract: Functional verification is paramount in the design cycle of hardware IP; ample time and cost are spent verifying the functional correctness of each and every hardware IP. The modern verification methodologies emphasize the concept of reusability to reduce the verification turnaround time. The reusability of each verification IP depends vastly on its implementation strategies and architecture. The paper discusses a methodology to build reconfigurable testbenches for verifying design IPs that do not have any stringent implementation protocol. The proposed verification technique focuses on an approach for reconfiguring the golden behavioral model in the testbench to suit the various functional realizations of the design IP. Configuration parameters, along with assertions, ensure effective reconfigurability and reusability of the verification IP. The entire verification environment is modularized into reusable blocks for modifying the functional requirements with ease. Since the output prediction and checker model is designed independent of a global synchronizing signal with respect to the design under verification (DUV), it ensures minimum modification of the reusable blocks for verifying different user implementations of the DUV.
Keywords: DRAM chips; memory architecture; assertion based reconfigurable testbenches; design IP; design under verification; functional verification; verification IP; Computer architecture; Conferences; Hardware; IP networks; Measurement; Protocols; SDRAM; System Verilog; UVM; assertions; reconfigurable; testbench (ID#: 15-6583)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124840&isnumber=7124722

 

Yitian Gu; Shou-pon Lin; Maxemchuk, N.F., “A Fail Safe Broadcast Protocol for Collaborative Intelligent Vehicles,” in World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2015 IEEE 16th International Symposium on a, vol., no., pp. 1–6, 14–17 June 2015. doi:10.1109/WoWMoM.2015.7158215
Abstract: This paper presents a broadcast protocol that makes cooperative driving applications safer. Collaborative driving is a rapidly evolving trend in intelligent transportation systems. Current communication services provided by the vehicular ad-hoc network (VANET) cannot guarantee fail-safe operation. We present a fail safe broadcast protocol (FSBP) that resides between the cooperative driving applications and the VANET to make the cooperative driving applications work in a safer way. The protocol uses clocks synchronized with the help of GPS to schedule the broadcast transmissions of the participants. Electing not to transmit at a scheduled time is a unique message that cannot be lost because of a noisy or lost communication channel. This message is used to abort a collaborative operation and revert to an autonomous driving mode, similar to the current generation of intelligent vehicles, in which a vehicle protects itself. We describe a particular, simple protocol that uses a token passing mechanism. We specify the protocol as a finite state machine and use probabilistic verification to verify the protocol. This is the first formal verification of a multi-party broadcast protocol.
Keywords: Global Positioning System; access protocols; cooperative communication; finite state machines; intelligent transportation systems; telecommunication scheduling; vehicular ad hoc networks; GPS; VANET; autonomous driving mode; broadcast transmission scheduling; collaborative intelligent vehicles; collaborative operation; fail safe broadcast protocol; finite state machine; intelligent transportation system; lost communication channel; noisy communication channel; probabilistic verification; safer cooperative driving applications; synchronized clocks; token passing mechanism; vehicular ad hoc network; Clocks; Collaboration; Protocols; Receivers; Synchronization; Vehicles (ID#: 15-6584)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158215&isnumber=7158105
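
The fail-safe idea hinges on one observation: silence in an assigned slot is itself a message that no channel fault can corrupt. A toy Python sketch of that rule (ours, not the verified FSBP specification) follows.

def run_round(schedule, transmitted):
    """schedule: node ids in slot order; transmitted: nodes that actually sent.
    Any missing scheduled broadcast aborts the collaboration fail-safely."""
    for node in schedule:
        if node not in transmitted:
            return "autonomous"       # abort; each vehicle protects itself
    return "collaborative"

print(run_round(["A", "B", "C"], {"A", "B", "C"}))   # collaborative
print(run_round(["A", "B", "C"], {"A", "C"}))        # autonomous (B stayed silent)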

 

Chen, E.Y.; Shuo Chen; Qadeer, S.; Rui Wang, “Securing Multiparty Online Services via Certification of Symbolic Transactions,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 833–849, 17–21 May 2015. doi:10.1109/SP.2015.56
Abstract: The prevalence of security flaws in multiparty online services (e.g., single sign-on, third-party payment, etc.) calls for rigorous engineering supported by formal program verification. However, the adoption of program verification faces several hurdles in the real world: how to formally specify logic properties given that protocol specifications are often informal and vague, how to precisely model the attacker and the runtime platform, and how to deal with the unbounded set of all potential transactions. We introduce Certification of Symbolic Transaction (CST), an approach to significantly lower these hurdles. CST tries to verify a protocol-independent safety property jointly defined over all parties, thus avoiding the burden of individually specifying every party's property for every protocol. CST invokes static verification at runtime, i.e., it symbolically verifies every transaction on-the-fly, and thus (1) avoids the burden of modeling the attacker and the runtime platform, and (2) reduces the proof obligation from considering all possible transactions to considering only the one at hand. We have applied CST to five commercially deployed applications, and show that, with only tens (or 100+) of lines of code changes per party, the original implementations are enhanced to achieve the objective of CST. Our security analysis shows that 12 out of 14 logic flaws reported in the literature would be prevented by CST. We also stress-tested CST by building a gambling system integrating four different services, for which there is no existing protocol to follow. Because transactions are symbolic and cacheable, CST has near-zero amortized runtime overhead. We make the source code of these implementations public, ready to be deployed for real-world use.
Keywords: cache storage; formal verification; security of data; symbol manipulation; CST; attacker modeling; cacheable transactions; certification of symbolic transaction; code changes; commercially deployed applications; formal program verification; gambling system; logic flaws; logic properties; multiparty online service security; near-zero amortized runtime overhead; proof obligation; protocol specifications; protocol-independent safety property; runtime platform; security analysis; security flaws; static verification; Certification; Data structures; Facebook; Protocols; Runtime; Security; Servers; CST; multiparty protocol; online payment; single-sign-on; symbolic transaction; verification (ID#: 15-6585)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163063&isnumber=7163005

 

Saleh, M.; El-Meniawy, N.; Sourour, E., “Authentication in Flat Wireless Sensor Networks with Mobile Nodes,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on,  vol., no., pp. 208–212, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116036
Abstract: Secure communication in Wireless Sensor Networks (WSNs) requires the verification of the identities of network nodes. This is to prevent a malicious intruder from injecting false data into the network. We propose an entity authentication protocol for WSNs, and show how its execution can be integrated as part of routing protocols. The integration between routing and authentication leads to two benefits. First, authentication is guided by routing; only nodes on a data path to the base station authenticate each other. Unnecessary protocol executions are therefore eliminated. Second, malicious nodes are not able to use the routing protocol to insert themselves into data paths. Our protocol assumes a flat WSN, i.e., no clustering or cluster heads. We also deal with node mobility issues by proposing a re-authentication protocol that an initially authenticated node uses when its position changes. Finally, we show how to implement the protocol using the TinyOS operating system.
Keywords: cryptographic protocols; routing protocols; telecommunication security; wireless sensor networks; TinyOS operating system; flat wireless sensor networks; malicious nodes; mobile nodes; network nodes; re-authentication protocol; routing protocol; secure communication; Authentication; Base stations; Cryptography; Protocols; Routing; Wireless sensor networks (ID#: 15-6586)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116036&isnumber=7115994
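
The paper's entity authentication builds on pre-shared keys; a generic HMAC challenge-response sketch in Python (ours, not the paper's exact message flow) shows the primitive a node on a data path toward the base station would use to verify a neighbor.

import hashlib, hmac, os

def respond(shared_key, challenge, identity):
    # Prove knowledge of the pairwise key without revealing it.
    return hmac.new(shared_key, challenge + identity, hashlib.sha256).digest()

key = os.urandom(16)          # pairwise key, assumed pre-distributed
challenge = os.urandom(8)     # fresh nonce from the challenging node
tag = respond(key, challenge, b"node-B")
print(hmac.compare_digest(tag, respond(key, challenge, b"node-B")))  # True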

 

Ray, Biplob; Chowdhury, Morshed; Abawajy, Jemal; Jesmin, Monika, “Secure Object Tracking Protocol for Networked RFID Systems,” in Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015 16th IEEE/ACIS International Conference on, vol., no., pp. 1–7, 1–3 June 2015. doi:10.1109/SNPD.2015.7176190
Abstract: Networked systems have adopted Radio Frequency Identification technology (RFID) to automate their business processes. Networked RFID Systems (NRS) have some unique characteristics which raise new privacy and security concerns for organizations and their NRS systems. Businesses are constantly realizing new business needs using NRS. One of the most recent business realizations of NRS implementation on large-scale distributed systems (such as the Internet of Things (IoT) and supply chains) is to ensure visibility and traceability of an object throughout the chain. However, this requires assurance of security and privacy to ensure lawful business operation. In this paper, we propose a secure tracker protocol that ensures not only visibility and traceability of the object but also genuineness of the object and its travel path on-site. The proposed protocol uses a Physically Unclonable Function (PUF), the Diffie-Hellman algorithm, and simple cryptographic primitives to protect the privacy of the partners and to guard against injection of fake objects, while providing non-repudiation and unclonability. The tag only performs a simple mathematical computation (such as combination, PUF, and division), which makes the proposed protocol suitable for passive tags. To verify our security claims, we performed experiments on a Security Protocol Description Language (SPDL) model of the proposed protocol using the automated claim verification tool Scyther. Our experiments not only verified our claims but also helped us to eliminate possible attacks identified by Scyther.
Keywords: Mathematical model; Privacy; Protocols; Radiofrequency identification; Security; Supply chains; IoT; NRS; PUF; RFID; injection of fake objects; non-repudiation; privacy; protocol; tracker; unclonable (ID#: 15-6587)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176190&isnumber=7176160
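
The protocol's use of Diffie-Hellman can be illustrated with a toy finite-field exchange in Python (ours; the parameters below are for illustration only and are nothing like the paper's actual encoding for passive tags):

import secrets

p = 2**127 - 1     # a Mersenne prime as a toy modulus; real systems use standardized groups
g = 3              # illustrative generator choice

a = secrets.randbelow(p - 2) + 1      # reader's secret
b = secrets.randbelow(p - 2) + 1      # back-end's secret
A, B = pow(g, a, p), pow(g, b, p)     # exchanged public values
assert pow(B, a, p) == pow(A, b, p)   # both ends derive the same shared secret
print("shared secret established")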

 

Rehman, U.U.; Abbasi, A.G., “Secure Layered Architecture for Session Initiation Protocol Based on SIPSSO: Formally Proved by Scyther,” in Information Technology - New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 185–190, 13–15 April 2015. doi:10.1109/ITNG.2015.35
Abstract: Voice over Internet Protocol (VoIP) is one of the most popular technologies nowadays, facilitating the user with different features such as instant messages, phone calls, video calls, and voicemails. Basic VoIP protocols were designed to be efficient rather than secure. After numerous attacks on these protocols, several solutions were proposed to prevent these threats. In this paper, we focus on the security of the Session Initiation Protocol (SIP), which is used to initiate, modify, and terminate VoIP sessions. The paper presents the design and implementation of a secure layered architecture for SIP, which adds a new layer, entitled the Security layer, to the standard SIP layer model. The Security layer provides authentication, authorization, adaptability, and secure key exchange, based on our newly designed protocol, named Session Initiation Protocol using Single Sign-On (SIPSSO). In order to implement the secure layered architecture based on SIPSSO, we have developed an Android Secure Call application and extended the open source Asterisk accordingly. After the design and implementation phases, we verified the SIPSSO protocol formally by using an automated security verification tool, Scyther. Our analysis results reveal that by adding the Security layer, we ensured protection against different SIP attacks such as eavesdropping, Man In The Middle (MITM) attacks, message tampering, replay attacks, session teardown, and Spam over Internet Telephony (SPIT).
Keywords: Internet telephony; authorisation; electronic messaging; formal verification; public domain software; signalling protocols; voice mail; Android secure call application; SIPSSO; Scyther; Session Initiation Protocol using Single Sign-On; VoIP; Voice over Internet Protocol; adaptable feature; authentication; authorization; automated security verification tool; instant messages; open source Asterisk; phone calls; secure key exchange; secure layered architecture; standard SIP layer model; video calls; voicemails; Authentication; Cryptography; Lead; Multimedia communication; Protocols; Standards; Asterisk; Cryptographic Token; SIP; SIPSSO; Secure Layered Architecture; SecureCall; VoIP (ID#: 15-6588)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113470&isnumber=7113432

 

Chen, Jing; Yuan, Quan; Xue, Guoliang; Du, Ruiying, “Game-Theory-Based Batch Identification of Invalid Signatures in Wireless Mobile Networks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 262–270, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218390
Abstract: Digital signatures have been widely employed in wireless mobile networks to ensure the authenticity of messages and the identity of nodes. A paramount concern in signature verification is reducing the verification delay to ensure the network QoS. To address this issue, researchers have proposed batch cryptography technology. However, most of the existing works focus on designing batch verification algorithms without sufficiently considering the impact of invalid signatures. The performance of batch verification could drop dramatically if there are verification failures caused by invalid signatures. In this paper, we propose a Game-theory-based Batch Identification Model (GBIM) for wireless mobile networks, enabling nodes to find invalid signatures with optimal delay under heterogeneous and dynamic attack scenarios. Specifically, we design an incomplete information game model between a verifier and its attackers, and prove the existence of a Nash Equilibrium, to select the dominant algorithm for identifying invalid signatures. Moreover, we propose an auto-match protocol to optimize the identification algorithm selection when the attack strategies can be estimated based on historical information. Comprehensive simulation results demonstrate that GBIM can identify invalid signatures more efficiently than existing algorithms.
Keywords: Batch identification; game theory; wireless mobile networks (ID#: 15-6589)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218390&isnumber=7218353
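
One classical candidate among the identification algorithms such a game model selects between is binary splitting: recursively halve a failing batch until the invalid signatures are isolated. A minimal Python sketch (ours, with a stand-in batch_verify oracle) follows.

def find_invalid(batch, batch_verify):
    # batch_verify(batch) returns True iff every signature in the batch is valid.
    if batch_verify(batch):
        return []
    if len(batch) == 1:
        return list(batch)
    mid = len(batch) // 2
    return find_invalid(batch[:mid], batch_verify) + find_invalid(batch[mid:], batch_verify)

bad = {3, 7}                                              # hypothetical invalid signatures
sigs = list(range(10))
print(find_invalid(sigs, lambda b: not (set(b) & bad)))   # -> [3, 7]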

 

Malekpour, M.R., “A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol,” in Aerospace Conference, 2015 IEEE, vol., no., pp. 1–11, 7–14 March 2015. doi:10.1109/AERO.2015.7119170
Abstract: This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults with various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault tolerant algorithm to solve the general case of the problem. A protocol (algorithm) is also presented that tolerates symmetric faults, provided that there are more good nodes than faulty ones. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that the interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock nor centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of a proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.
Keywords: fault tolerance; protocols; synchronisation; Byzantine fault; Byzantine general problem; SMV; arbitrary faulty node; linear convergence; mechanical verification; network element; self-stabilization period; self-stabilizing hybrid fault-tolerant synchronization protocol; symbolic model verifier; symmetric-fault tolerant algorithm; Biographies; NASA; Protocols; Synchronization (ID#: 15-6590)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119170&isnumber=7118873

 

Wu, Zhizheng; Khodabakhsh, Ali; Demiroglu, Cenk; Yamagishi, Junichi; Saito, Daisuke; Toda, Tomoki; King, Simon, “SAS: A Speaker Verification Spoofing Database Containing Diverse Attacks,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no., pp. 4440–4444, 19–24 April 2015. doi:10.1109/ICASSP.2015.7178810
Abstract: This paper presents the first version of a speaker verification spoofing and anti-spoofing database, named SAS corpus. The corpus includes nine spoofing techniques, two of which are speech synthesis, and seven are voice conversion. We design two protocols, one for standard speaker verification evaluation, and the other for producing spoofing materials. Hence, they allow the speech synthesis community to produce spoofing materials incrementally without knowledge of speaker verification spoofing and anti-spoofing. To provide a set of preliminary results, we conducted speaker verification experiments using two state-of-the-art systems. Without any anti-spoofing techniques, the two systems are extremely vulnerable to the spoofing attacks implemented in our SAS corpus.
Keywords: Databases; Speech; Speech synthesis; Standards; Synthetic aperture sonar; Training; Database; security; speaker verification; speech synthesis; spoofing attack; voice conversion (ID#: 15-6591)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178810&isnumber=7177909

 

Hao Cai; Wolf, T., “Source Authentication and Path Validation with Orthogonal Network Capabilities,” in Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on, vol., no., pp. 111–112, April 26 2015–May 1 2015. doi:10.1109/INFCOMW.2015.7179368
Abstract: In-network source authentication and path validation are fundamental primitives for constructing security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, most of the existing approaches are based on cryptographic techniques. The high computational cost of cryptographic operations makes these techniques fall short in the data plane of the network, where potentially every packet needs to be checked at Gigabit-per-second link rates in the future Internet. In this paper, we propose a new protocol, which uses a set of orthogonal sequences as credentials, to solve this problem, enabling low verification overhead in routers. Our evaluation of a prototype experiment demonstrates the fast verification speed and low storage consumption of our protocol, while providing reasonable security properties.
Keywords: Internet; authorisation; computer network security; cryptographic protocols; Gigabit per second link rates; cryptographic operations; in-network source authentication; orthogonal network capabilities; path validation; Authentication; Conferences; Cryptography; Optimized production technology; Routing protocols (ID#: 15-6592)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179368&isnumber=7179273
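
To see why orthogonal sequences allow cheap in-router checks, consider Walsh-Hadamard rows: correlating a packet's credential with the expected sequence yields a large value on a match and exactly zero otherwise. The Python sketch below is our illustration, not the authors' protocol.

def hadamard(n):
    # Sylvester construction (n a power of two): rows are mutually orthogonal +-1 sequences.
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(8)
credential = H[3]                      # sequence assigned to one source/path

def matches(packet_seq, expected):
    return sum(a * b for a, b in zip(packet_seq, expected)) != 0

print(matches(H[3], credential))       # True: the same sequence correlates to 8
print(matches(H[5], credential))       # False: orthogonal sequences sum to 0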

 

Ammayappan, K., “TSM Centric Privacy Preserving NFC Mobile Payment Framework with Formal Verification,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 1490–1496, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124834
Abstract: Near field communication is on the verge of broad adoption worldwide, as NFC controllers and Secure Elements (SEs) are now commonplace components in many models of smart phones and point-of-sale devices currently available in the market. The combination of NFC and smart devices is making the way of life easier for millennials. Data aggregation is a common phenomenon in any e-commerce method. From the aggregated data, sensitive consumer information can be predicted using predictive data mining approaches. Consumers are not aware of this breach of their information privacy. The business models of the e-commerce industry players are designed in such a way that they can make use of their customers' information to enhance their profitability by offering customer-friendly services. Ultimately, consumers' sensitive information potentially sits on an unsafe retailer's server over which the consumer has no control, and hackers can always potentially find a way into a system. This paper proposes a new TSM centric privacy preserving framework and a protocol for NFC based proximity payments which prevents consumer data from ever touching a merchant's server, where the majority of data breaches occur. The correctness of the proposed privacy preserving NFC payment protocol is ensured here via formal modeling and verification using ProVerif.
Keywords: data privacy; electronic commerce; formal verification; mobile computing; near-field communication; smart phones; NFC based proximity payments; NFC controllers; NFC mobile payment framework; Proverif; TSM centric privacy preserving framework; consumer data; data aggregation; e-commerce method; electronic commerce; formal modeling; formal verification; near field communications; point-of-sale devices; predictive data mining; secure elements; smart phones; Authentication; Business; Cryptography; Mobile communication; Mobile handsets; Privacy; Protocols (ID#: 15-6593)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124834&isnumber=7124722

 

Alotaibi, A.; Mahmmod, A., “Enhancing OAuth Services Security by an Authentication Service with Face Recognition,” in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island,  vol., no., pp. 1–6, 1–1 May 2015. doi:10.1109/LISAT.2015.7160208
Abstract: Controlling secure access to web Application Programming Interfaces (APIs) and web services has become more vital with the advancement and use of web technologies. The security of web service APIs faces critical issues in managing the authenticated and authorized identities of users. Open Authorization (OAuth) is a secure protocol that allows a resource owner to grant a third-party application permission to access the resource owner's protected resources on their behalf, without releasing their credentials. Most web APIs still use traditional authentication, which is vulnerable to many attacks such as the man-in-the-middle attack. To reduce such vulnerability, we enhance the security of OAuth through the implementation of a biometric service. We introduce a face verification system based on Local Binary Patterns as an authentication service handled by the authorization server. The entire authentication process consists of three services: an image registration service, a verification service, and an access token service. The developed system is most useful in securing services where human identification is required.
Keywords: Web services; application program interfaces; authorisation; biometrics (access control); face recognition; image registration; OAuth service security; Web application programming interfaces; Web services API; Web technologies; access token service; authentication service; authorization server; biometric service; face verification system; human identification; image registration service; local binary patterns; open authorization; resource owner protected resource; third-party application; verification service; Authentication; Authorization; Databases; Protocols; Servers; Access Token; Face Recognition; OAuth; Open Authorization; Web API; Web Services (ID#: 15-6594)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160208&isnumber=7160171
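
The Local Binary Patterns feature at the heart of the face verification service is simple to state: compare each pixel with its eight neighbours and pack the comparisons into an 8-bit code. A basic Python sketch (ours, without the histogram matching a full verifier would add on top) follows.

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: compare each pixel's 8 neighbours to it
    and pack the results into an 8-bit code; img is a 2-D list of intensities."""
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes.append(code)
    return codes

# A face template is then a histogram of these codes, compared e.g. by chi-square distance.
print(lbp_codes([[10, 20, 30], [40, 50, 60], [70, 80, 90]]))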

 

Ustaoglu, B.; Ors, B., “Design and Implementation of a Custom Verification Environment for Fault Injection and Analysis on an Embedded Microprocessor,” in Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), 2015 Third International Conference on, vol., no., pp. 256–261, April 29 2015–May 1 2015. doi:10.1109/TAEECE.2015.7113636
Abstract: Embedded microprocessors are widely used in most safety-critical digital system applications. A fault in a single bit of a microprocessor may cause soft errors. A fault has different effects on the program outcome depending on whether it changes a situation in the application. In order to analyse the behaviour of applications under faulty conditions, we have designed a custom verification system. The verification system has two parts: a Field Programmable Gate Array (FPGA) and a personal computer (PC). We have modified the Natalius open source microprocessor in order to inject stuck-at faults into it. We have implemented a fault injection method and leveraged it to increase randomness. On the FPGA, we have implemented the modified Natalius microprocessor, the fault injection method, and the communication protocol. Then the “Most Significant Bit First Multiplication Algorithm” has been implemented on the microprocessor as an application. We have prepared an environment on the PC side which sends inputs to and gets outputs from the Natalius microprocessor. Finally, we have analysed our application by injecting faults at specific and random locations in the register file to make some classifications of the effects of the injected faults.
Keywords: embedded systems; fault location; field programmable gate arrays; microprocessor chips; FPGA; Natalius open source microprocessor; PC; application behaviour analysis; communication protocol; custom verification environment design; custom verification environment implementation; embedded microprocessor; fault analysis; faulty conditions; field programmable gate array; most-significant bit first-multiplication algorithm; personal computer; random location; register file; safety-critical digital system applications; soft errors; specific location; stuck-at-fault injection; Algorithm design and analysis; Circuit faults; Fault location; Hardware; Microprocessors; Random access memory; Registers; Analysis; Design; Fault Injection; Microprocessor (ID#: 15-6595)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113636&isnumber=7113589
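
A software analogue of stuck-at injection can be sketched in a few lines of Python (our illustration; the real system forces the fault inside FPGA hardware for the duration of the run): force one register bit to a fixed value at a specific or random location.

import random

def inject_stuck_at(register_file, index=None, bit=None, value=None, width=32):
    # Force one register bit to a stuck-at value; pick randomly when unspecified.
    index = random.randrange(len(register_file)) if index is None else index
    bit = random.randrange(width) if bit is None else bit
    value = random.randint(0, 1) if value is None else value
    if value:
        register_file[index] |= (1 << bit)        # stuck-at-1
    else:
        register_file[index] &= ~(1 << bit)       # stuck-at-0
    return index, bit, value

regs = [0x0000FFFF] * 8
print(inject_stuck_at(regs), [hex(r) for r in regs])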

 

Kyoungha Kim; Yanggon Kim, “Comparative Analysis on the Signature Algorithms to Validate AS Paths in BGPsec,” in Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, vol., no., pp. 53–58, June 28 2015–July 1 2015. doi:10.1109/ICIS.2015.7166569
Abstract: Because of the lack of security in the Border Gateway Protocol (BGP) and its world-wide coverage, BGP is categorized as one of the most vulnerable network protocols. As the Internet grows all around, BGP, which lays the groundwork for all network protocols by connecting them together, is being updated by protocol designers to improve security. The most noticeable topic in securing BGP is validating paths in BGP. At this point, the most plausible solution to protect BGP paths is BGPsec. However, validating paths in BGPsec puts much more pressure on BGP routing performance than validating the origin of a BGP message. In order to maximize path-validating performance, BGPsec currently uses the Elliptic Curve Digital Signature Algorithm (ECDSA), which is well known as one of the best-performing asymmetric cryptographic algorithms. However, is ECDSA really better than the signature algorithms (i.e., DSA or RSA) originally used in BGP? In this paper, we found that RSA is better than ECDSA in BGPsec due to its outstanding verification speed. Among the signature algorithms (i.e., DSA, RSA, and ECDSA) that are utilized for RPKI and BGPsec, we argue that RSA is the best one in performance for validating paths in BGP Update messages.
Keywords: cryptographic protocols; digital signatures; internetworking; network servers; public key cryptography; AS paths; BGP update messages; BGPsec; DSA; ECDSA; RSA; asymmetric cryptographic algorithms; border gateway protocol; elliptic curve digital signature algorithm; network protocols; path-validating performance; protocol designers; verification speed; Delays; IP networks; Internet; Routing; Routing protocols; Security (ID#: 15-6596)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166569&isnumber=7166553
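
The verification-speed gap the authors exploit is easy to observe with the third-party Python "cryptography" package (an assumed dependency; the message and parameter choices below are ours): RSA verification is typically much faster than ECDSA verification at comparable security levels, even though RSA signing is slower.

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

message = b"BGP UPDATE path attributes"

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
rsa_sig = rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
ec_key = ec.generate_private_key(ec.SECP256R1())
ec_sig = ec_key.sign(message, ec.ECDSA(hashes.SHA256()))

def bench(verify, n=500):
    start = time.perf_counter()
    for _ in range(n):
        verify()                       # raises InvalidSignature on failure
    return (time.perf_counter() - start) / n

rsa_pub, ec_pub = rsa_key.public_key(), ec_key.public_key()
print("RSA-2048 verify  :",
      bench(lambda: rsa_pub.verify(rsa_sig, message, padding.PKCS1v15(), hashes.SHA256())))
print("ECDSA-P256 verify:",
      bench(lambda: ec_pub.verify(ec_sig, message, ec.ECDSA(hashes.SHA256()))))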

 

Kar, J.; Alghazzawi, D.M., “On Construction of Signcryption Scheme for Smart Card Security,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 109–113, 27–29 May 2015. doi:10.1109/ISI.2015.7165948
Abstract: The article proposes a novel construction of a signcryption scheme with provable security which is most suited to be implemented on smart cards. It is secure in the random oracle model, and the security relies on the Decisional Bilinear Diffie-Hellman Problem. The proposed scheme is secure against adaptive chosen ciphertext attack (indistinguishability) and adaptive chosen message attack (unforgeability). The scheme has the security properties of anonymity and forward security. Also, it is inspired by zero-knowledge proofs and is publicly verifiable. The scheme is applied for mutual authentication, to authenticate the identities of the smart card's user and reader via Application Protocol Data Units. This can be achieved by the verification of the signature of the proposed scheme. Also, the sensitive information is stored in the form of ciphertext in the Read Only Memory of smart cards. These functions are performed in one logical step at a low computational cost.
Keywords: authorisation; public key cryptography; smart cards; adaptive chosen ciphertext attack; adaptive chosen message attack; anonymity property; application protocol data units; decisional bilinear Diffie-Hellman problem; forward security property; indistinguishability attack; mutual authentication; provable security; random oracle model; read only memory; signcryption scheme; smart card security; unforgeability attack; user authentication; zero-knowledge proof; Computational efficiency; Computational modeling; Elliptic curve cryptography; Receivers; Smart cards; Provable security; Random oracle; Unforgeability (ID#: 15-6597)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165948&isnumber=7165923

 

Alshinina, R.; Elleithy, K.; Aljanobi, F., “A Highly Efficient and Secure Shared Key for Direct Communications Based on Quantum Channel,” in Wireless Telecommunications Symposium (WTS), 2015, vol., no., pp. 1–6, 15–17 April 2015. doi:10.1109/WTS.2015.7117250
Abstract: The research reported in the literature for message transformation by a third party does not provide the necessary efficiency and security against different attacks. Data transmitted through a computer network must be confidential and authenticated in advance. In this paper, we develop and improve the security of braided single-stage quantum cryptography. This improvement is based on a novel authentication algorithm using signature verification, without using the three-stage protocol to share the secret key between the sender and receiver. This approach works against attacks such as replay and man-in-the-middle by increasing security as well as overall efficiency, reducing the overhead of using three stages, and increasing the speed of communication between the two parties.
Keywords: computer network security; data communication; digital signatures; handwriting recognition; private key cryptography; quantum cryptography; authentication algorithm; braided single stage quantum cryptography security; computer network security; data transmission; direct communication; overhead reduction; quantum channel; secret key sharing; signature verification; Photonics; Protocols; Public key; Receivers; Transforms; Braided Single Stage Protocol (BSSP); Signature Verification; Three Stages Protocol (TSP); authentication; quantum cryptography (QC); quantum key distribution protocol (QKD) (ID#: 15-6598)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7117250&isnumber=7117237

 

Boyuan He; Rastogi, V.; Yinzhi Cao; Yan Chen; Venkatakrishnan, V.N.; Runqing Yang; Zhenrui Zhang, “Vetting SSL Usage in Applications with SSLINT,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 519–534, 17–21 May 2015. doi:10.1109/SP.2015.38
Abstract: Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols have become the security backbone of the Web and Internet today. Many systems including mobile and desktop applications are protected by SSL/TLS protocols against network attacks. However, many vulnerabilities caused by incorrect use of SSL/TLS APIs have been uncovered in recent years. Such vulnerabilities, many of which are caused due to poor API design and inexperience of application developers, often lead to confidential data leakage or man-in-the-middle attacks. In this paper, to guarantee code quality and logic correctness of SSL/TLS applications, we design and implement SSLINT, a scalable, automated, static analysis system for detecting incorrect use of SSL/TLS APIs. SSLINT is capable of performing automatic logic verification with high efficiency and good accuracy. To demonstrate it, we apply SSLINT to one of the most popular Linux distributions -- Ubuntu. We find 27 previously unknown SSL/TLS vulnerabilities in Ubuntu applications, most of which are also distributed with other Linux distributions.
Keywords: Linux; application program interfaces; formal verification; program diagnostics; protocols; security of data; API design; Linux distributions; SSL usage vetting; SSL-TLS protocols; SSLINT; Ubuntu; application program interfaces; automatic logic verification; code quality; logic correctness; network attacks; secure sockets layer; static analysis system; transport layer security; Accuracy; Libraries; Protocols; Security; Servers; Software; Testing (ID#: 15-6599)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163045&isnumber=7163005
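
The API-misuse class SSLINT detects can be illustrated in Python (our sketch; the paper itself targets applications written against C libraries): the vulnerable pattern silently disables certificate and hostname checking, while the correct pattern keeps both.

import socket, ssl

# Vulnerable pattern of the kind SSLINT hunts for: verification disabled,
# so any certificate (including an attacker's) is accepted.
bad = ssl.create_default_context()
bad.check_hostname = False
bad.verify_mode = ssl.CERT_NONE        # never do this in production code

# Correct usage: validate the certificate chain and the hostname.
ctx = ssl.create_default_context()     # CERT_REQUIRED with hostname checking
with socket.create_connection(("example.org", 443)) as raw:   # needs network access
    with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
        print(tls.version())           # e.g. 'TLSv1.3' on a current server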

 

Costello, C.; Fournet, C.; Howell, J.; Kohlweiss, M.; Kreuter, B.; Naehrig, M.; Parno, B.; Zahur, S., “Geppetto: Versatile Verifiable Computation,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 253–270, 17–21 May 2015. doi:10.1109/SP.2015.23
Abstract: Cloud computing sparked interest in Verifiable Computation protocols, which allow a weak client to securely outsource computations to remote parties. Recent work has dramatically reduced the client's cost to verify the correctness of their results, but the overhead to produce proofs remains largely impractical. Geppetto introduces complementary techniques for reducing prover overhead and increasing prover flexibility. With MultiQAPs, Geppetto reduces the cost of sharing state between computations (e.g., for MapReduce) or within a single computation by up to two orders of magnitude. Via a careful choice of cryptographic primitives, Geppetto's instantiation of bounded proof bootstrapping improves on prior bootstrapped systems by up to five orders of magnitude, albeit at some cost in universality. Geppetto also efficiently verifies the correct execution of proprietary (i.e., secret) algorithms. Finally, Geppetto's use of energy-saving circuits brings the prover's costs more in line with the program's actual (rather than worst-case) execution time. Geppetto is implemented in a full-fledged, scalable compiler and runtime that consume LLVM code generated from a variety of source C programs and cryptographic libraries.
Keywords: cloud computing; computer bootstrapping; cryptographic protocols; program compilers; program verification; Geppetto; LLVM code generation; QAPs; bootstrapped systems; bounded proof bootstrapping; cloud computing; compiler; correctness verification; cryptographic libraries; cryptographic primitives; energy-saving circuits; outsource computation security; prover flexibility; prover overhead reduction; source C programs; verifiable computation protocols; Cryptography; Generators; Libraries; Logic gates; Protocols; Random access memory; Schedules (ID#: 15-6600)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163030&isnumber=7163005

 

Chin, T.; Mountrouidou, X.; Xiangyang Li; Kaiqi Xiong, “Selective Packet Inspection to Detect DoS Flooding Using Software Defined Networking (SDN),” in Distributed Computing Systems Workshops (ICDCSW), 2015 IEEE 35th International Conference on, vol., no., pp. 95–99, June 29 2015–July 2 2015. doi:10.1109/ICDCSW.2015.27
Abstract: Software-defined networking (SDN) and OpenFlow have been driving new security applications and services. However, even if some of the existing studies provide interesting visions of what can be achieved, they stop short of presenting realistic application scenarios and experimental results. In this paper, we discuss a novel attack detection approach that coordinates monitors distributed over a network and controllers centralized on an SDN Open Virtual Switch (OVS), selectively inspecting network packets on demand. With different scales of network views and information availability, these two elements collaboratively detect signature constituents of an attack. Therefore, this approach is able to quickly issue an alert against potential threats, followed by careful verification for high accuracy, while balancing the workload on the OVS. We have applied this method for detection and mitigation of TCP SYN flood attacks on the Global Environment for Network Innovations (GENI). This realistic experimentation has provided us with insightful findings helpful toward a systematic methodology of SDN-supported attack detection and containment.
Keywords: computer network security; software defined networking; DoS flooding; GENI; OVS; OpenFlow; SDN open virtual switch; TCP SYN flood attacks; global environment for network innovations; novel attack detection approach; selective packet inspection; software defined networking; Collaboration; Correlation; Correlators; IP networks; Monitoring; Protocols; Security; DoS; Intrusion Detection; SDN; Selective Packet Inspection (ID#: 15-6601)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165090&isnumber=7165001
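
A simplified version of the signature constituent such monitors look for, half-open handshakes piling up per source, can be sketched in a few lines of Python (ours; the thresholds are illustrative, and the paper's monitors run distributed over GENI):

from collections import defaultdict

syn_count, ack_count = defaultdict(int), defaultdict(int)

def on_packet(src, flags):
    if "SYN" in flags and "ACK" not in flags:
        syn_count[src] += 1            # new handshake attempt
    elif "ACK" in flags:
        ack_count[src] += 1            # handshake progressing or completing

def suspicious(src, ratio=3, floor=50):
    # Many SYNs with few ACKs from one source suggests SYN flooding;
    # a hit would trigger deeper, selective packet inspection at the OVS.
    return syn_count[src] > floor and syn_count[src] > ratio * max(1, ack_count[src])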

 

Ben Henda, N.; Norrman, K.; Pfeffer, K., “Formal Verification of the Security for Dual Connectivity in LTE,” in Formal Methods in Software Engineering (FormaliSE), 2015 IEEE/ACM 3rd FME Workshop on , vol., no., vol., no., pp.13–19, 18–18 May 2015. doi:10.1109/FormaliSE.2015.10
Abstract: We describe our experiences from using formal verification tools during the standardization process of Dual Connectivity, a new feature in LTE developed by 3Gvol., no., pp. To the best of our knowledge, this is the first report of its kind in the telecom industry. We present a model for key establishment of this feature and provide a detailed account on its formal analysis using three popular academic tools in order to automatically prove the security properties of secrecy, agreement and key freshness. The main purpose of using the tools during standardization is to evaluate their suitability for modeling a rapidly changing system as it is developed and in the same time raising the assurance level before the system is deployed.
Keywords: Long Term Evolution; formal verification; telecommunication computing; telecommunication security; 3GPP; LTE; Long Term Evolution; agreement property; dual connectivity security; formal analysis; formal verification; key freshness property; secrecy property; security property; standardization; telecom industry; Encryption; Long Term Evolution; Peer-to-peer computing; Protocols; LTE; formal verification; model checking; security protocols; telecom (ID#: 15-6602)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166552&isnumber=7166542

 

Singh, K.J.; De, T., “DDOS Attack Detection and Mitigation Technique Based on Http Count and Verification Using CAPTCHA,” in Computational Intelligence and Networks (CINE), 2015 International Conference on, vol., no., pp. 196–197, 12–13 Jan. 2015. doi:10.1109/CINE.2015.47
Abstract: With the rapid development of the internet, the number of people who are online has also increased tremendously. But nowadays we find not only growing positive use of the internet but also negative use. The misuse and abuse of the internet is growing at an alarming rate. There are many cases of viruses and worms infecting systems that have software vulnerabilities, and such systems can even become clients for bot herders. These infected systems aid in launching DDoS attacks on a target server. In this paper we introduce the concept of IP blacklisting, which blocks all blacklisted IP addresses; an HTTP count filter, which enables us to distinguish normal from suspected IP addresses; and the CAPTCHA technique, to cross-check whether these suspected IP addresses are controlled by a human or a botnet.
Keywords: Internet; client-server systems; computer network security; computer viruses; transport protocols; CAPTCHA; DDOS attack detection; DDOS attack mitigation technique; HTTP count filter; HTTP verification; IP address; IP blacklisting; Internet; botnet; software vulnerability; target server; virus; worms; CAPTCHAs; Computer crime; IP networks; Internet; Radiation detectors; Servers; bot; botnets; captcha; filter; http; mitigation (ID#: 15-6603)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053830&isnumber=7053782
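
The detection pipeline described above reduces to a per-IP counter with two thresholds; the sketch below (ours, with arbitrary threshold values and no time-window expiry) shows the decision logic:

from collections import Counter

SUSPECT, BLACKLIST = 100, 1000        # requests per window; illustrative values
counts, blacklist = Counter(), set()

def on_http_request(ip):
    if ip in blacklist:
        return "drop"                  # blacklisted IPs are blocked outright
    counts[ip] += 1
    if counts[ip] > BLACKLIST:
        blacklist.add(ip)
        return "drop"
    if counts[ip] > SUSPECT:
        return "challenge"             # serve a CAPTCHA: humans pass, bots fail
    return "serve"

print(on_http_request("198.51.100.7"))   # "serve" for a first-time client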

 

Kieseberg, P.; Fruhwirt, P.; Schrittwieser, S.; Weippl, E., “Security Tests for Mobile Applications — Why Using TLS/SSL is Not Enough,” in Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, vol., no., pp. 1–2, 13–17 April 2015. doi:10.1109/ICSTW.2015.7107416
Abstract: Security testing is a fundamental aspect in many common practices in the field of software testing. Still, the used standard security protocols are typically not questioned and not further analyzed in the testing scenarios. In this work we show that due to this practice, essential potential threats are not detected throughout the testing phase and the quality assurance process. We put our focus mainly on two fundamental problems in the area of security: The definition of the correct attacker model, as well as trusting the client when applying cryptographic algorithms.
Keywords: cryptographic protocols; mobile computing; program testing; quality assurance; software quality; TLS-SSL; correct attacker model; cryptographic algorithms; mobile applications; quality assurance process; security testing; software testing; standard security protocols; Encryption; Mobile communication; Protocols; Servers; Software; Testing; Security; TLS/SSL; Testing (ID#: 15-6604)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107416&isnumber=7107396

 

Carbone, R.; Compagna, L.; Panichella, A.; Ponta, S.E., “Security Threat Identification and Testing,” in Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, vol., no., pp. 1–8, 13–17 April 2015. doi:10.1109/ICST.2015.7102630
Abstract: Business applications are more and more collaborative (cross-domains, cross-devices, service composition). Security shall focus on the overall application scenario including the interplay between its entities/devices/services, not only on the isolated systems within it. In this paper we propose the Security Threat Identification And TEsting (STIATE) toolkit to support development teams toward security assessment of their under-development applications focusing on subtle security logic flaws that may go undetected by using current industrial technology. At design-time, STIATE supports the development teams toward threat modeling and analysis by identifying automatically potential threats (via model checking and mutation techniques) on top of sequence diagrams enriched with security annotations (including WHAT-IF conditions). At run-time, STIATE supports the development teams toward testing by exploiting the identified threats to automatically generate and execute test cases on the up and running application. We demonstrate the usage of the STIATE toolkit on an application scenario employing the SAML Single Sign-On multi-party protocol, a well-known industrial security standard largely studied in previous literature.
Keywords: computer crime; program testing; program verification; SAML; STIATE toolkit; WHAT-IF conditions; business applications; design-time; development teams; industrial security standard; industrial technology; model checking; mutation techniques; security annotations; security assessment; security logic flaws; security threat identification and testing; sequence diagrams; single sign-on multiparty protocol; test cases; threat analysis; threat modeling; under-development applications; Authentication; Business; Engines; Protocols; Testing; Unified modeling language (ID#: 15-6605)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102630&isnumber=7102573

 

Afzal, Z.; Lindskog, S., “Automated Testing of IDS Rules,” in Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, vol., no., pp. 1–2, 13–17 April 2015. doi:10.1109/ICSTW.2015.7107461
Abstract: As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent the vulnerabilities. The rule sets have become so large that it seems infeasible to verify their precision or identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.
Keywords: program testing; security of data; IDS rules; automated testing; intrusion detection systems; Conferences; Generators; Intrusion detection; Payloads; Protocols; Servers; Testing (ID#: 15-6606)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107461&isnumber=7107396 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Steganography 2015

 

 
SoS Logo

Steganography

2015


Digital steganography is one of the primary areas of science of security research. Detection and countermeasures are the topics pursued. The articles cited here were presented in 2015. They cover a range of topics, including Least Significant Bit (LSB), LDPC codes, combinations with DES encryption, and Hamming code. The related hard problems are privacy, metrics, and composability.  



Habib, M.; Bakhache, B.; Battikh, D.; El Assad, S., “Enhancement Using Chaos of a Steganography Method in DCT Domain,” Digital Information and Communication Technology and Its Applications (DICTAP), 2015 Fifth International Conference on, vol., no., pp. 204–209, April 29–May 1, 2015. doi:10.1109/DICTAP.2015.7113200
Abstract: Recently, steganography has been widely used for communicating data secretly. It can be divided into two domains: spatial and frequency. One of the most used frequency transformations is the Discrete Cosine Transform (DCT), and there are many techniques based on it; the most common is DCT steganography based on the Least Significant Bit (LSB). Many proposed methods rely on it, such as the LSB-DCT randomized bit embedding based on a threshold, which is simple and provides some security. In this paper, a secure DCT steganography method is proposed. It allows hiding a secret image in another image randomly using chaos. The chaotic generator Piecewise Linear Chaotic Map (PWLCM) with perturbation was selected; it has good chaotic properties and an easy implementation. It was used to obtain the pseudo-random series of pixels in whose DCT coefficients the secret image is embedded, enhancing the LSB-DCT technique with threshold. Several metrics have been evaluated, such as Peak Signal to Noise Ratio (PSNR), the Structural Similarity (SSIM) index, and the capacity. A supervised universal approach based on a Fisher Linear Discriminator (FLD) was used to evaluate the security against steganalysts. The experimental results demonstrate that the proposed algorithm achieves higher quality and security.
Keywords: discrete cosine transforms; image enhancement; steganography; Fisher linear discriminator; LSB-DCT randomized bit embedding; chaotic generator piecewise linear chaotic map; discrete cosine transform; least significant bit; pseudo-random series; secret image; secure DCT steganography method; supervised universal approach; Chaotic communication; Discrete cosine transforms; Frequency-domain analysis; Generators; PSNR; Security; Chaos; DCT; FLD; LSB-DCT; PSNR; PWLCM; Steganography; threshold (ID#: 15-6609)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113200&isnumber=7113160
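
As a rough illustration of the chaotic pixel-selection step described above, the following Python sketch iterates a piecewise linear chaotic map (PWLCM) to derive pseudo-random embedding positions. The seed, control parameter, and collision handling are illustrative assumptions, not the authors' exact scheme, and the perturbation they add is omitted.

def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map; x in (0,1), p in (0,0.5)."""
    if x >= 0.5:                      # the map is symmetric about x = 0.5
        x = 1.0 - x
    return x / p if x < p else (x - p) / (0.5 - p)

def embedding_positions(n_pixels, n_bits, x0=0.271828, p=0.31):
    """Derive n_bits distinct pixel indices from the chaotic orbit."""
    positions, seen, x = [], set(), x0
    while len(positions) < n_bits:
        x = pwlcm(x, p)
        idx = int(x * n_pixels) % n_pixels
        if idx not in seen:           # skip collisions so each pixel is used once
            seen.add(idx)
            positions.append(idx)
    return positions

print(embedding_positions(256 * 256, 10))

A receiver holding the same (x0, p) pair regenerates the identical sequence, which is what lets the chaotic selection act as a shared secret.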

 

Pund-Dange, S.; Desai, C.G., “Secured Data Communication System Using RSA with Mersenne Primes and Steganography,” Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1306–1310, 11–13 March 2015. doi: (not provided)
Abstract: To add multiple layers of security, our present work proposes a method for integrating cryptography and steganography for secure communication using an image file. We use a combination of cryptography and steganography that can hide text in an image, after applying an RSA cipher, in such a way as to prevent any suspicion of a hidden text. It offers privacy and high security through the communication channel.
Keywords: data communication; image coding; public key cryptography; steganography; Mersenne primes; RSA cipher; communication channel; cryptography; hidden text; image file; secured data communication system;  Arrays; Ciphers; Encryption; Image color analysis; Public key; Cryptography; Mersenne; Prime RSA; Steganography; factorization (ID#: 15-6610)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100461&isnumber=7100186
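
To make the RSA-with-Mersenne-primes idea concrete, here is a toy Python sketch that builds a textbook RSA key pair from two small Mersenne primes, 2^13 - 1 and 2^17 - 1. The prime and message choices are illustrative assumptions; real keys need primes hundreds of digits long, and because so few Mersenne primes exist, advertising their use would itself weaken a key. The steganographic embedding stage is not shown.

from math import gcd

p, q = 2**13 - 1, 2**17 - 1       # Mersenne primes 8191 and 131071
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
assert gcd(e, phi) == 1           # e must be coprime to phi(n)
d = pow(e, -1, phi)               # modular inverse (Python 3.8+)

m = 42                            # toy plaintext block, m < n
c = pow(m, e, n)                  # encrypt
assert pow(c, d, n) == m          # decrypt round-trip
print(f"n = {n}, ciphertext = {c}")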

 

Manisha, M.; Malvika, S.S.; Karthikeyan, B.; Vaithiyanathan, V.; Srinivasan, B., “Devanagari Text Embedding in a Gray Image: An Offbeat Approach,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 1284–1288, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124791
Abstract: Steganography is a tool which helps in hiding information, and it plays a crucial role in many ways and in many lives. With the advent of the Internet, information exchange is possible in many languages other than English. This technology, however, carries with it a disadvantage: the loss of security and privacy of information. Steganography, an inconspicuous medium, is one such way to ensure privacy, and it plays a vital role in securing secret data. In this paper, a different approach is chosen for encoding Devanagari (Hindi) text in the cover image. This approach of hiding Devanagari (Hindi) and English text in an alternating manner is very efficient and simple to use. The paper describes a duplet algorithm, one part for encoding and another for decoding. The image parameters calculated by the proposed methodology show that this process is efficient and innovative.
Keywords: Internet; data encapsulation; image coding; image colour analysis; natural language processing; steganography; text analysis; Devanagari text embedding; English text; Hindi text; Internet; decoding; duplet algorithm; gray image; hiding information; image parameter; information exchange; information privacy; secret data; steganography; Decoding; Histograms; Image coding; Image segmentation; Internet; PSNR; Security; (Devanagari) Hindi Text Steganography; (Devanagari) Hindi Unicode; Image quality measures; Linguistic Steganography; Steganography; alternative encoding (ID#: 15-6611)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124791&isnumber=7124722

 

Kaur, R.; Kaur, J., “Cloud Computing Security Issues and Its Solution: A Review,” Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1198–1200, 11–13 March 2015. doi: (not provided)
Abstract: Cloud computing is a way to increase capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. As information exchange plays an important role in today’s life, information security becomes more important. This paper focuses on the security issues of cloud computing and on techniques to overcome the data privacy issue. Before analyzing the security issues, the definition of cloud computing and a brief discussion to understand it are presented; the paper then explores the cloud security issues and the problems faced by cloud service providers. Finally, it describes the pixel key pattern and image steganography techniques that will be used to overcome the problem of data security.
Keywords: cloud computing; data privacy; image coding; security of data; steganography; cloud computing security; cloud service provider; image steganography technique; information exchange; information security; pixel key pattern; Cloud computing; Clouds; Computational modeling; Computers; Image edge detection; Security; Servers; Cloud Computing; Cloud Security; Image steganography; Pixel key pattern; Security issues (ID#: 15-6612)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100438&isnumber=7100186

 

Vegh, Laura; Miclea, Liviu, “A Simple Scheme for Security and Access Control in Cyber-Physical Systems,” Control Systems and Computer Science (CSCS), 2015 20th International Conference on, vol., no., pp. 294–299, 27–29 May 2015. doi:10.1109/CSCS.2015.13
Abstract: In a time when technology changes continuously, and things you need today to run a certain system might not be needed tomorrow, security is a constant requirement. No matter what systems we have, how we structure them, or what means of digital communication we use, we are always interested in aspects like security, safety, and privacy. Cyber-physical systems are an example of this ever-advancing technology. We propose a complex security architecture that integrates several established methods such as cryptography, steganography and digital signatures. This architecture is designed not only to ensure the security of communication by transforming data into secret code; it is also designed to control access to the system and to detect and prevent cyber attacks.
Keywords: Computer architecture; Digital signatures; Encryption; Public key; access control; cryptography; cyber attacks; cyber-physical systems; digital signatures; multi-agent systems; steganography (ID#: 15-6613)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168445&isnumber=7168393

 

Dabrowski, A.; Echizen, I.; Weippl, E.R., “Error-Correcting Codes as Source for Decoding Ambiguity,” Security and Privacy Workshops (SPW), 2015 IEEE, vol., no., pp. 99–105, 21–22 May 2015. doi:10.1109/SPW.2015.28
Abstract: Data decoding, format, or language ambiguities have long been known for amusement purposes. Only recently has it come to attention that they also pose a security risk. In this paper, we present decoder manipulations based on deliberately caused ambiguities that exploit the error correction mechanisms used in several popular applications. This can be used to encode data in multiple formats, or even in the same format with different content. Implementation details of the decoder or environmental differences decide which data the decoder locks onto. This leads to different users receiving different content based on a language decoding ambiguity. In general, ambiguity is not desired; however, in special cases it can be particularly harmful. Format dissectors can make wrong decisions: e.g., a firewall scans based on one format, but the user decodes different, harmful content. We demonstrate this behavior with popular barcodes and argue that it can be used to deliver exploits based on the software installed, or to use probabilistic effects to divert a small percentage of users to fraudulent sites.
Keywords: bar codes; decoding; encoding; error correction codes; fraud; security of data; barcodes; data decoding; data encoding; decoder manipulations; error correction mechanisms; error-correcting codes; format dissectors; fraudulent sites; language decoding ambiguity; security risk; Decoding; Error correction codes; Security; Software; Standards; Synchronization; Visualization; Barcode; Error Correcting Codes; LangSec; Language Security; Packet-in-Packet; Protocol decoding ambiguity; QR; Steganography (ID#: 15-6614)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163213&isnumber=7163193
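
The decoding ambiguity the abstract exploits can be seen even in a toy error-correcting code. The Python sketch below assumes the standard Hamming(7,4) generator and parity-check matrices (nothing from the paper itself) and shows a decoder confidently “correcting” a word with two bit errors toward the wrong codeword, so the recovered payload depends on the channel conditions.

import numpy as np

G = np.array([[1,1,0,1],[1,0,1,1],[1,0,0,0],[0,1,1,1],
              [0,1,0,0],[0,0,1,0],[0,0,0,1]])                    # 7x4 generator
H = np.array([[1,0,1,0,1,0,1],[0,1,1,0,0,1,1],[0,0,0,1,1,1,1]])  # 3x7 parity check

def encode(nibble):
    return G.dot(nibble) % 2

def decode(word):
    syndrome = H.dot(word) % 2
    pos = syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2]  # 1-based error position
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1                                 # "correct" that bit
    return word[[2, 4, 5, 6]]                              # extract the data bits

cw = encode(np.array([1, 0, 1, 0]))
noisy = cw.copy()
noisy[0] ^= 1
noisy[3] ^= 1                       # two errors exceed the code's correction capability
print(decode(cw), decode(noisy))    # prints [1 0 1 0] versus [1 1 1 0]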

 

Ji Young Chun; Hye Lim Lee; Ji Won Yoon, “Passing Go with DNA Sequencing: Delivering Messages in a Covert Transgenic Channel,” Security and Privacy Workshops (SPW), 2015 IEEE, vol., no., pp. 17–26, 21–22 May 2015. doi:10.1109/SPW.2015.10
Abstract: DNA, which carries genetic information in living organisms, has become a new steganographic carrier of secret information. Various researchers have used this technique to try to develop watermarks to protect proprietary products. However, as recent advances in genetic engineering have made it possible to use DNA as a carrier of information, we have realized that DNA steganography in a living organism also facilitates a new, stealthy cyber-attack that could be used nefariously to bypass entrance control systems that monitor and screen for files and electronic devices. In this paper, we explain how “DNA-courier” attacks could easily be carried out to defeat existing monitoring and screening techniques. Using our proposed method, we found that DNA as a steganographic carrier of secret information poses a realistic cyber-attack threat by enabling secret messages to be sent to an intended recipient without being noticed by third parties.
Keywords: DNA; genetic engineering; security of data; steganography; DNA sequencing; DNA steganography; DNA-courier attacks; covert transgenic channel; genetic information; living organism; message delivery; monitoring technique; proprietary product protection; screening technique; secret information; secret message; stealthy cyber-attack; steganographic carrier; watermark; DNA; Encoding; Encryption; Genomics; Microorganisms (ID#: 15-6615)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163204&isnumber=7163193

 

Bobade, S.; Goudar, R., “Secure Data Communication Using Protocol Steganography in IPv6,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 275–279, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.59
Abstract: In secure data communication, network security is important. In cryptography, encryption is the basic tool used for data security, but an attacker can still be drawn to encrypted data because of its distinctive form; steganography, the technique of information hiding, overcomes this limitation. In steganography, different carriers can be used for hiding information, such as images, audio, video, and network protocols. Network steganography is a new approach to data hiding in which network-layer protocols of the TCP/IP suite are used, with covert channels in the network layer carrying the hidden data. Covert channels violate the security policies of a system and are used either to steal information or to communicate secret information over a network. Covert channels in TCP and IPv4 have previously been implemented and studied. IPv6 is a new-generation protocol that will gradually replace IPv4 because the IPv4 address space is rapidly running out, so there is a need to examine security issues related to the IPv6 protocol. Covert channels are present in IPv6: the 20-bit flow label field of the IPv6 header can be used as a covert channel. The RSA algorithm is used for data encryption, and a chaotic method is used for data encoding. Secret data communication is thus possible in IPv6.
Keywords: IP networks; computer network security; cryptographic protocols; data communication; steganography; transport protocols; IPv6 protocol; RSA algorithm; TCP/IP suite; chaotic method; cryptography encryption; data encoding; data encryption; data hiding; flow label field; information hiding; network layer covert channels; network security; network steganography network layer protocol; protocol steganography; secure data communication; security policy; Chaotic communication; Encoding; Logistics; Protocols; Security; Chaos Theory; Covert channel; Network Security; Steganography; TCP/IP (ID#: 15-6616)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155850&isnumber=7155781
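
For illustration, the sketch below packs a 20-bit covert payload into the flow label of the first 32-bit word of an IPv6 header (version 4 bits, traffic class 8 bits, flow label 20 bits, per RFC 2460). This only demonstrates the channel itself; the paper’s RSA encryption, chaotic encoding, and actual raw-socket transmission are not reproduced.

import struct

def pack_first_word(secret, traffic_class=0):
    """Pack version=6, a traffic class, and a 20-bit secret as the flow label."""
    assert 0 <= secret < (1 << 20), "flow label holds at most 20 bits"
    word = (6 << 28) | (traffic_class << 20) | secret
    return struct.pack("!I", word)

def unpack_secret(first_four_bytes):
    word, = struct.unpack("!I", first_four_bytes)
    return word & 0xFFFFF             # the low 20 bits carry the covert payload

hdr = pack_first_word(0xABCDE)
assert unpack_secret(hdr) == 0xABCDE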

 

Mishra, R.; Bhanodiya, P., “A Review on Steganography and Cryptography,” Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 119–122, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164679
Abstract: Today’s information world is a digital world, and data transmission over an insecure channel is becoming a major issue of concern, with intruders spreading over the internet and being very active. To protect secret data from theft, security measures need to be taken, and various techniques have been implemented to encrypt and decrypt secret data. Cryptography and steganography are the two most prominent of these techniques, but neither alone works as efficiently as they do together. Steganography is a Greek word made up of two parts, stegano, meaning hidden, and graphy, meaning writing; i.e., steganography means hidden writing. Steganography is a way to hide the fact that data communication is taking place. Cryptography converts the secret message into an other-than-human-readable form, but it has the limitation that the encrypted message is visible to everyone, so intruders on the internet may apply hit-and-trial methods to recover the secret message. Steganography overcomes this limitation of cryptography by hiding the fact that any transmission is taking place at all: the secret message is hidden inside another medium, such as text, image, video, or audio. These two techniques are different, and each has its own significance. In this paper we discuss various cryptographic and steganographic techniques used to keep messages secret.
Keywords: cryptography; data communication; steganography; Internet; cryptographic techniques; data transmission; digital world; hidden writing; secret data decryption; secret data encryption; secret data protection; security measures; steganographic techniques; Computers; Encryption; Image color analysis; Image edge detection; Media; Cipher Text; Cryptanalysis; Cryptograph; LSB; Steganalysis; Steganography (ID#: 15-6617)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164679&isnumber=7164643

 

Das, P.; Kushwaha, S.C.; Chakraborty, M., “Multiple Embedding Secret Key Image Steganography Using LSB Substitution and Arnold Transform,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 845–849, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7125033
Abstract: Cryptography and steganography are the two major fields available for data security. While cryptography is a technique in which the information is scrambled into unintelligible gibberish during transmission, steganography focuses on concealing the existence of the information. Combining both domains gives a higher level of security: even if the use of the covert channel is revealed, the true information will not be exposed. This paper focuses on concealing multiple secret images in a single 24-bit cover image using LSB substitution based image steganography. Each secret image is encrypted with the Arnold transform before being hidden in the cover image. Results reveal that the proposed method successfully secures high-capacity data while keeping the visual quality of the transmitted image satisfactory.
Keywords: image coding; security of data; steganography; transforms; Arnold transform; LSB substitution; covert channel; cryptography; data security; secret image encryption; secret key image steganography; Digital images; Histograms; Public key; Transforms; Visualization; Arnold Transform; Digital Image Steganography; Spatial domain (ID#: 15-6618)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125033&isnumber=7124722
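
As a minimal sketch of the scrambling step named in the abstract, the following Python/NumPy code applies the Arnold cat map (x, y) -> (x + y mod N, x + 2y mod N) to a square image; the iteration count is an illustrative assumption. Because the map is periodic on an NxN grid, the receiver can recover the secret image by continuing the iteration up to the full period.

import numpy as np

def arnold(img, iterations=1):
    """Scramble a square image with repeated applications of the cat map."""
    n = img.shape[0]                  # assumes an n x n image
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

secret = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(arnold(secret, iterations=2))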

 

Madhuravani, B.; Murthy, D.S.R.; Reddy, P.B.; Rama Rao, K.V.S.N., “Strong Authentication Using Dynamic Hashing and Steganography,” Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 735–738, 15–16 May 2015. doi:10.1109/CCAA.2015.7148490
Abstract: Nowadays, online services that communicate digitally have become part of our lives. This digital communication needs confidentiality and data integrity to protect it from unauthorized use. Security can be provided by two popular methods, cryptography and steganography. Cryptography scrambles the message so that it cannot be understood, whereas steganography hides the message in another medium so that it cannot be detected by the normal human eye. This paper introduces a technique which provides high security for digital communication by using dynamic hashing for integrity and by embedding the data in an image file using steganography to misguide the attacker, thereby providing high security for the communication between the two parties.
Keywords: Internet; cryptography; data integrity; image coding; steganography; confidentiality; digital communication; dynamic hashing; image file; online services; strong authentication; Automation; Cryptography; Distortion; Histograms; Message authentication; Receivers; Cryptography; Human Visual System (HVS); Least Significant Bit (LSB); Steganography (ID#: 15-6619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148490&isnumber=7148334

 

Mani, M.R.; Lalithya, V.; Rekha, P.S., “An Innovative Approach for Pattern Based Image Steganography,” Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, vol., no., pp. 1–4, 19–21 Feb. 2015. doi:10.1109/SPICES.2015.7091380
Abstract: Image steganography methods are currently becoming popular in the fields of data authentication and image processing. They provide an efficient way of communicating between sender and receiver without any loss in the originality of the cover image. The present paper proposes a novel method called pattern based image steganography. The proposed method allows the sender to embed the secret message into hierarchically divided subsections. In the first stage, the input cover image is divided into 25×25 non-overlapping windows, and each window is further divided into 5×5 subsections; among these, subsections are selected based on the pattern ‘Z’. In the second stage, within each selected 5×5 subsection, the pixels are selected based on the pattern ‘a’. Finally, in the selected pixels, the 1-bit Least Significant Bit (LSB) method is used for embedding the message. The proposed method is tested with nine cover images, and its performance is measured with several metrics, viz., Mean Square Error (MSE), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR), and Signal to Noise Ratio (SNR). The results show the efficiency of the proposed method.
Keywords: image processing; mean square error methods; steganography; LSB; MAE; PSNR; RMSE; cover image; data authentication; hierarchical divided sub sections; image steganography methods; least significant method; mean absolute error; nonoverlapped windows; pattern based image steganography; peak signal to noise ratio; root mean square error; secret message; Computed tomography; Cryptography; Magnetic resonance imaging; Receivers; Robustness; Signal to noise ratio; Cover image; Pattern; Performance Measure; Stego image; window (ID#: 15-6620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091380&isnumber=7091354
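
The quality measures listed in the abstract are straightforward to compute. The sketch below evaluates MSE, MAE, RMSE, and PSNR between a cover image and its stego version, assuming 8-bit grayscale arrays; the SNR and SSIM figures reported by some of the surrounding papers are omitted for brevity.

import numpy as np

def quality_metrics(cover, stego):
    """Compare two 8-bit images of identical shape."""
    err = cover.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "PSNR_dB": psnr}

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = cover ^ np.random.randint(0, 2, (64, 64), dtype=np.uint8)  # flip some LSBs
print(quality_metrics(cover, stego))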

 

Pehlivanoglu, M.K.; Savas, B.K.; Duru, N., “LSB Based Steganography over Video Files Using Koblitz’s Method,” Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 1034–1037, 16–19 May 2015. doi:10.1109/SIU.2015.7130009
Abstract: In this work we aim at Least Significant Bit (LSB) based information hiding in video files using Koblitz’s method over elliptic curve cryptography. The message to be sent to the recipient is first converted into ASCII characters; single-index characters are encrypted by Koblitz’s method, (x,y) pairs are obtained, and these pairs are treated as coordinate points. The pixel value at each single-index point on the relevant frame is then replaced with the binary value of the next double-index ASCII character using the LSB method. After repeating the same process for all characters in the message, the message is hidden in the video frames.
Keywords: steganography; video coding; ASCII characters; Koblitz method; LSB based steganography; information hiding; least significant bit; message hiding; video files; Art; Conferences; Elliptic curve cryptography; Indexes; PSNR; Watermarking; Cryptography; Elliptic Curve Cryptography; Image Processing; Koblitz’s Method; LSB; Steganography; Video Processing; Video Steganography (ID#: 15-6621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130009&isnumber=7129794

 

Vegh, L.; Miclea, L., “Access Control in Cyber-Physical Systems Using Steganography and Digital Signatures,” Industrial Technology (ICIT), 2015 IEEE International Conference on, vol., no., pp. 1504–1509, 17–19 March 2015. doi:10.1109/ICIT.2015.7125309
Abstract: In a world in which technology has an essential role, the security of the systems we use is a crucial aspect. Most of the time this means ensuring the security of communications and protecting data, which automatically makes us think of cryptography: changing the form of the data so that no one can view it without authorization. Cyber-physical systems are more and more present in critical applications in which security is of the utmost importance. In the present paper, we approach security not by encrypting data but by controlling access to the system. For this we combine digital signatures with an encryption algorithm with a divided private key in order to control access to the system and to define roles for each user. We also add steganography to increase the level of security of the system.
Keywords: authorisation; data protection; digital signatures; private key cryptography; steganography; access control; authorization; communication security; cryptography; cyber-physical systems; data protection; divided private key; encryption algorithm; Access control; Digital signatures; Encryption; Multi-agent systems; Public key; digital signature; hierarchical access; multi-agent systems (ID#: 15-6622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125309&isnumber=7125066

 

Karakis, R.; Capraz, I.; Bilir, E.; Güler, I., “A New Method of Fuzzy Logic-Based Steganography for the Security of Medical Images,” Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 272–275, 16–19 May 2015. doi:10.1109/SIU.2015.7129812
Abstract: DICOM (Digital Imaging and Communications in Medicine) files store the personal data of patients in file headers. These personal data can be obtained illegally while DICOM files are archived and transmitted; the personal rights of patients can thus be invaded, and even the treatment of the disease can be altered. This study proposes a new fuzzy logic-based steganography method for the security of medical images. It randomly selects the least significant bits (LSB) of image pixels, and the message, which combines the patient’s personal data and the doctor’s comments, is compressed and encrypted to prevent attacks.
Keywords: cryptography; data compression; fuzzy logic; image coding; medical image processing; steganography; disease treatment; encryption; fuzzy logic-based steganography; image compression; image pixel; least significant bits; medical image security; patient personal data; Cryptography; DICOM; Histograms; Internet; Watermarking; Medical data security; image steganography; least significant bit (ID#: 15-6623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129812&isnumber=7129794

 

Kulkarni, S.A.; Patil, S.B., “A Robust Encryption Method for Speech Data Hiding in Digital Images for Optimized Security,” Pervasive Computing (ICPC), 2015 International Conference on, vol., no., pp. 1–5, 8–10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7087134
Abstract: Steganography is the art of hiding information in a host signal. It is very important to hide the secret data efficiently, as many attacks are made on data communication. The host signal can be a still image, speech, or video, and the message signal hidden in the host signal can be text, an image, or an audio signal. The cryptography concept is used for locking the secret message in the cover file; cryptography makes the secret message unintelligible unless the decryption key is available. It is concerned with constructing and analyzing various methods that overcome the influence of third parties, and modern cryptography draws on disciplines like mathematics, computer science, and electrical engineering. In this paper a symmetric key is developed which consists of a reshuffling and secret arrangement of secret-signal data bits within the cover-signal data bits. The authors perform the encryption process at the bit level of the secret speech signal, which is hidden inside the cover image, to achieve greater encryption strength. The encryption algorithm applied together with the embedding method is a robust, secure method for data hiding.
Keywords: cryptography; image coding; speech coding; cover image; cryptography concept; data communication; decryption key; digital images; embedding method; host signal; optimized security; robust encryption method; secret signal data bit reshuffling; secret signal data bit secret arrangement; speech data hiding; steganography; symmetric key; Encryption; Noise; Receivers; Robustness; Speech; Transmitters; Cover signal; Cryptography; Secret key; Secret signal (ID#: 15-6624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087134&isnumber=7086957

 

Kumar, A.A.; Santhosha; Jagan, A., “Two Layer Security for Data Storage in Cloud,” Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, vol., no., pp. 471–474, 25–27 Feb. 2015. doi:10.1109/ABLAZE.2015.7155041
Abstract: Cloud data security is one of the critical factors of business conviction, and increasing internet bugs and intrusions necessitate efficient security mechanisms. The work presented here proposes a two-layer mechanism providing an efficient and computationally light security procedure. At the first layer, public key cryptography is used, whereas the second layer is based entirely on steganography. The RSA method is used for key exchange and AES for encryption and decryption, to make the method computationally efficient. Since the second layer shuffles the encrypted messages into stego images, security is much higher than with either approach individually and with other existing approaches. The cloud is computationally very efficient and these processes are computationally very light, so the availability of data is unaffected.
Keywords: cloud computing; public key cryptography; security of data; steganography; Internet bugs; RSA method; cloud data security; computationally light security procedure; data storage; first layer public key cryptography; intrusion necessitate efficient security mechanism; stegad images; two layer security; Cloud computing; Computational efficiency; Encryption; Public key cryptography; Secure storage; Cloud storage; data availability; data security (ID#: 15-6625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155041&isnumber=7154914

 

SaiKrishna, A.; Parimi, S.; Manikandan, G.; Sairam, N., “A Clustering Based Steganographic Approach for Secure Data Communication,” Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159515
Abstract: A major challenge in data communication is to provide security for the message during transmission, and various cryptographic and steganographic algorithms are used to achieve this goal. Steganography provides a promising means of hiding the existence of private data. In this paper two new approaches are proposed for embedding data using a clustering algorithm; the objective of clustering is to group the pixels for the embedding process.
Keywords: cryptography; data communication; pattern clustering; steganography; clustering based steganographic approach; cryptographic algorithm; data communication security; embedding process; message security; Algorithm design and analysis; Clustering algorithms; Computers; Cryptography; Histograms; Robustness; Data Hiding; Image Steganography; K-means Clustering; LSB Technique; Security (ID#: 15-6626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159515&isnumber=7159156

 

Yamamoto, Hirotsugu, “Unconscious Imaging (UcI) and Its Applications for Digital Signage,” Information Optics (WIO), 2015 14th Workshop on, vol., no., pp. 1–3, 1–5 June 2015. doi:10.1109/WIO.2015.7206897
Abstract: This paper proposes a technique to make watching digital signage an enjoyable experience. The proposed technique is based on unconscious imaging (UcI). UcI is composed of two conversions: the first is a conscious-to-unconscious conversion, in which apparently visual information is embedded, encrypted, or modulated into an unconscious image; the second is an unconscious-to-conscious conversion, in which imperceptible information becomes apparent through decoding, detection, or demodulation. The second conversion gives viewers an enjoyable sensation. Examples of UcI include secure display by use of visual cryptography, waving-hand steganography by use of temporal modulation, and aerial LED signage that hides the optical hardware and presents only an aerial screen.
Keywords: Decoding; Encryption; Imaging; Light emitting diodes; Three-dimensional displays; Visualization; aerial imaging; secure display; unconscious imaging; waving-hand steganography (ID#: 15-6628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7206897&isnumber=7206883

 

Praveenkumar, Padmapriya; Priyanga, GU; Rajalakshmi, P; Thenmozhi, K; Rayappan, J.B.B; Amirtharajan, Rengarajan, “2π Rotated Key2 Shuffling and Scrambling — A Cryptic Track,” Computer Communication and Informatics (ICCCI), 2015 International Conference on, vol., no., pp. 1–4, 8–10 Jan. 2015. doi:10.1109/ICCCI.2015.7218069
Abstract: As science grows, threats are cultivated along with it; hence information security is emerging as an integral part of growing technology. To achieve it we turn to the fields of cryptography, steganography and watermarking. There are many ways to encrypt and decrypt a message to be communicated, and enabling multiple encryptions will prevent an intruder from recovering the message. In this paper, the image is first circularly shifted, scrambled twice, and rotated, and finally Cipher Block Chaining (CBC) mode is applied twice to produce the final encrypted image. These multiple steps strengthen the security of the information. Image metrics such as vertical, horizontal, and diagonal correlation were computed for grayscale and DICOM images and compared with the available literature.
Keywords: CBC; Circular Shifting; DICOM images; Image Encryption; NPCR; correlation values (ID#: 15-6629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218069&isnumber=7218046

 

Sukumar, T.; Santha, K.R., “An Approach for Secret Communication Using Adaptive Key Technique for Gray Scale Images,” Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159290
Abstract: This work describes an adaptive key technique to hide data in an image. In Stage I, an encryption key is generated to encrypt an envelope image; the length of the key is adjusted according to the size of the image and the number of bits required to represent each pixel. In Stage II, an encoding key is generated according to the size of the message; its length is calculated from the number of characters and the equivalent bits needed to represent them, and the encoding key encodes the message and its length. Stage III superimposes the encoded data on the encrypted image using a data hider. On the receiver side, the corresponding process is carried out to decrypt the image and decode the data using the same methodology adopted on the transmitter side. This work is carried out for grayscale images, achieves a good embedding capacity of 3 bpp, and also withstands steganalysis well.
Keywords: cryptography; image processing; adaptive key technique; encoding key; encrypted image; encryption key; envelope image; equivalent bits; gray scale images; hide data; pixel representation; secret communication; steganalysis; transmitter side; Arrays; Computers; Encoding; Encryption; Indexes; Payloads; Steganography; decryption; encoding; encryption (ID#: 15-6630)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159290&isnumber=7159156

 

Venkata Keerthy S; Rhishi Kishore T K C; Karthikeyan B; Vaithiyanathan V; Anishin Raj M M, “A Hybrid Technique for Quadrant Based Data Hiding Using Huffman Coding,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp. 1–6, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193011
Abstract: The paper proposes a robust steganography technique to hide data in an image. The proposed method uses Huffman coding to minimize the number of bits to be embedded and to improve the security of the information. The security aspect is also improved by using a cryptographic substitution cipher and quadrant based embedding of the data. The quadrant based embedding helps distribute the data bits uniformly over the entire image rather than concentrating them in a particular region. The quality of the stego image and the embedding capacity are also improved by the use of Huffman coding. The LSB embedding technique is used in the algorithm for concealing the data in the image.
Keywords: Airplanes; Cryptography; Entropy; Huffman coding; MATLAB; Robustness; LSB; data hiding; quadrant based embedding; substitution cipher (ID#: 15-6631)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193011&isnumber=7192777
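
To show the compression step the abstract leans on, here is a standard heap-based Huffman code construction in Python. Nothing about it is specific to the paper; the substitution cipher and the quadrant/LSB embedding stages are left out.

import heapq
from collections import Counter

def huffman_codes(text):
    """Return a symbol -> bitstring table for the given text (>= 2 distinct symbols)."""
    heap = [[weight, [symbol, ""]] for symbol, weight in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)      # two least frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # descend left
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # descend right
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {symbol: code for symbol, code in heap[0][1:]}

message = "secret message"
codes = huffman_codes(message)
bits = "".join(codes[ch] for ch in message)
print(f"{len(message) * 8} raw bits -> {len(bits)} Huffman bits")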

 

Jitha, R.T.; Sivadasan, E.T., “A Survey Paper on Various Reversible Data Hiding Techniques in Encrypted Images,” Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 1139–1143, 12–13 June 2015. doi:10.1109/IADCC.2015.7154881
Abstract: Data hiding is the process of hiding information, and it can be done in audio, video, images, and text. This method is steganography, i.e., embedding data in other data. Usually images, especially digital images, are used for data hiding, and many techniques exist for embedding data in them. Some techniques cause distortion to the image when embedding, some can embed only a small amount of data, and some cause distortion during the extraction of the data. The various methods used for embedding and extracting data are described in this paper.
Keywords: cryptography; image coding; image watermarking; steganography; data embedding; data extraction; digital images; encrypted images; information hiding; reversible data hiding techniques; steganography; Data mining; Distortion; Encryption; Image coding; Receivers; Watermarking; Data hiding; data Recovery; data encryption; encryption Key (ID#: 15-6632)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154881&isnumber=7154658
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Swarm Intelligence Security 2015

 

 
SoS Logo

Swarm Intelligence Security

2015


Swarm Intelligence is a concept using the metaphor of insect colonies to describe decentralized, self-organized systems. The method is often used in artificial intelligence, and there are about a dozen variants ranging from ant colony optimization to stochastic diffusion. For cybersecurity, these systems have significant value both offensively and defensively. For the Science of Security, swarm intelligence relates to composability and compositionality. The research cited here focuses on drones, botnets and malware, intrusion detection, cryptanalysis, and security risk analysis. The works cited below were published in 2015.



Jongho Won, Seung-Hyun Seo, Elisa Bertino; “A Secure Communication Protocol for Drones and Smart Objects,” ASIA CCS '15, Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 249–260. doi:10.1145/2714576.2714616
Abstract: In many envisioned drone-based applications, drones will communicate with many different smart objects, such as sensors and embedded devices. Securing such communications requires an effective and efficient encryption key establishment protocol. However, the design of such a protocol must take into account constrained resources of smart objects and the mobility of drones. In this paper, a secure communication protocol between drones and smart objects is presented. To support the required security functions, such as authenticated key agreement, non-repudiation, and user revocation, we propose an efficient Certificateless Signcryption Tag Key Encapsulation Mechanism (eCLSC-TKEM). eCLSC-TKEM reduces the time required to establish a shared key between a drone and a smart object by minimizing the computational overhead at the smart object. Also, our protocol improves drone’s efficiency by utilizing dual channels which allows many smart objects to concurrently execute eCLSC-TKEM. We evaluate our protocol on commercially available devices, namely AR.Drone2.0 and TelosB, by using a parking management testbed. Our experimental results show that our protocol is much more efficient than other protocols.
Keywords: certificateless signcryption, drone communications (ID#: 15-7041)
URL: http://doi.acm.org/10.1145/2714576.2714616

 

Sam Palmer, Denise Gorse, Ema Muk-Pavic; “Neural Networks and Particle Swarm Optimization for Function Approximation in Tri-SWACH Hull Design,” EANN '15, Proceedings of the 16th International Conference on Engineering Applications of Neural Networks (INNS), September 2015, Article No. 8. doi:10.1145/2797143.2797168
Abstract: Tri-SWACH is a novel multihull ship design that is well suited to a wide range of industrial, commercial, and military applications, but which, because of its novelty, has few experimental studies on which to base further development work. Using a new form of particle swarm optimization that incorporates a strong element of stochastic search, Breeding PSO, it is shown that it is possible to use multilayer nets to predict resistance functions for Tri-SWACH hullforms, including one function, the Residual Resistance Coefficient, which was found intractable with previously explored neural network training methods.
Keywords: Particle swarm optimization, Tri-SWACH, function approximation, hullform design, multihull resistance (ID#: 15-7042)
URL: http://doi.acm.org/10.1145/2797143.2797168
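
For readers unfamiliar with the underlying optimizer, the sketch below is a canonical particle swarm optimizer showing the velocity and position updates that Breeding PSO builds on; the breeding/stochastic-search extension and the hullform resistance objective from the paper are not reproduced, and the sphere function stands in as a toy objective.

import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda p: np.sum(p ** 2))             # toy sphere objective
print(best, val)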

 

George Eleftherakis, Milos Kostic, Konstantinos Rousis, Anca Vasilescu; “Stigmergy Inspired Approach to Enable Agent Communication in Emergency Scenarios,” BCI '15, Proceedings of the 7th Balkan Conference on Informatics Conference, September 2015, Article No. 22.  doi:10.1145/2801081.2801119
Abstract: Coordination is one of the main challenges in emergency management. Recent disasters demonstrated that there is a need for communication mechanisms which do not rely on centralized systems and infrastructure. This paper investigates alternative communication models in emergency scenarios and provides an implementation that enables communication between different actors (machine and human) through the environment. First, it analyses the dynamics of emergency scenarios with a special focus on coordination and communication challenges. Multi-agent systems are a promising solution for this type of situation, and in this work they are used in a theoretical framework for developing a bio-inspired communication model. Following this approach, a proof of concept solution has been implemented, named the Alternative Communication Framework. This framework utilises a wide range of alternative media in order to facilitate indirect, stigmergic communication. Finally, the real-life applicability of this model is evaluated with the use of a realistic scenario which was designed in order to demonstrate the core concepts involved in this work.
Keywords: decentralized intelligence, emergence, emergency scenarios, multi-agent systems, stigmergy (ID#: 15-7043)
URL: http://doi.acm.org/10.1145/2801081.2801119

 

Yu Liu, Wei-Neng Chen, Xiao-min Hu, Jun Zhang; “An Ant Colony Optimizing Algorithm Based on Scheduling Preference for Maximizing Working Time of WSN,” GECCO '15, Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, July 2015, Pages 41–48. doi:10.1145/2739480.2754671
Abstract: With the proliferation of wireless sensor networks (WSNs), the issue of how to schedule all the sensors so as to maximize the system’s working time has been in the spotlight. Inspired by the promising performance of ant colony optimization (ACO) in solving combinatorial optimization problems, we attempt to apply it to prolonging the lifetime of a WSN. In this paper, we propose an improved version of the ACO algorithm that selects exactly which sensors accomplish the coverage task in a way that preserves more energy and so maintains a longer active time. The methodology is based on maximizing the number of disjoint subsets of sensors; in other words, in every time interval, the choice of which sensors to keep active must be rational to a certain extent. With the aid of pheromone and heuristic information, a better solution can be constructed, in which the pheromone denotes previous scheduling experience while the heuristic information reflects the desirable device assignment. Orderly sensor selection is designed to construct an advisable subset for the coverage task. The proposed method has been successfully applied to the limited-energy assignment problem in both homogeneous and heterogeneous WSNs, and simulation experiments show that it performs well in addressing the relevant issues.
Keywords: ant colony optimization algorithm, maximize working time, schedule, wireless sensor network (WSN) (ID#: 15-7044)
URL: http://doi.acm.org/10.1145/2739480.2754671
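
The two ACO ingredients the abstract relies on, probabilistic selection weighted by pheromone and heuristic desirability followed by evaporation and reinforcement, look roughly like the sketch below. The WSN coverage model itself is omitted; coverage_gain is a hypothetical stand-in for the heuristic information, and reinforcing every subset (rather than only good ones) is a simplification.

import numpy as np

rng = np.random.default_rng(1)
n_sensors = 10
pheromone = np.ones(n_sensors)                       # accumulated experience
coverage_gain = rng.uniform(0.1, 1.0, n_sensors)     # heuristic desirability

def build_subset(k=4, alpha=1.0, beta=2.0):
    """Pick k sensors with probability ~ pheromone^alpha * heuristic^beta."""
    weights = pheromone ** alpha * coverage_gain ** beta
    return rng.choice(n_sensors, size=k, replace=False, p=weights / weights.sum())

def update(subset, rho=0.1, q=1.0):
    global pheromone
    pheromone *= 1 - rho                             # evaporation
    pheromone[subset] += q                           # reinforcement

for _ in range(50):
    update(build_subset())
print(pheromone.round(2))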

 

J. Amudhavel, S. Kumarakrishnan, H. Gomathy, A. Jayabharathi, M. Malarvizhi, K. Prem Kumar; “An Scalable Bandwidth Reduction and Optimization in Smart Phone Ad hoc Network (SPAN) Using Krill Herd Algorithm,” ICARCSET '15, Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015), March 2015, Article No. 26. doi:10.1145/2743065.2743091
Abstract: In this paper a krill herd algorithm is applied in a Smart Phone Ad Hoc Network (SPAN) to solve the challenges present in SPANs, the main ones being synchronization, bandwidth, and power conservation. Smart Phone Ad Hoc Networks [24] (SPANs) leverage the existing hardware (primarily Bluetooth and Wi-Fi) in commercially available smart phones to create peer-to-peer networks without depending on cellular carrier networks, wireless access points, or traditional network infrastructure, and they differ from traditional hub-and-spoke networks in that they support multi-hop relays. The issues in smart phone ad hoc networks are resolved using a biologically inspired algorithm, namely krill herd, designed for solving optimization tasks. The best solution is produced by the intensification process of the krill herd algorithm, which reduces the bandwidth used in the network. Power consumption, a major issue affecting smart phone efficiency, is likewise reduced by the intensification process. Thus both the bandwidth and the power consumption of the smart phone ad hoc network are reduced.
Keywords: Krill herd, issues in smart phone ad hoc network, optimization of smart phone (ID#: 15-7045)
URL: http://doi.acm.org/10.1145/2743065.2743091

 

D. Jude Hemanth, J. Anitha, Valentina Emilia Balas; “Performance Improved Hybrid Intelligent System for Medical Image Classification,” BCI '15, Proceedings of the 7th Balkan Conference on Informatics Conference, September 2015, Article No. 8. doi:10.1145/2801081.2801095
Abstract: Kohonen neural networks are among the Artificial Neural Networks (ANN) commonly used for medical imaging applications. In spite of their numerous advantages, there are some demerits associated with the Kohonen neural network which are mostly unexplored. Being an unsupervised neural network, it is heavily dependent on iterations, which ultimately affects the accuracy of the overall system, and any iteration-dependent ANN may also face local minima problems. In this work, this specific problem is solved by proposing a hybrid swarm intelligence-Kohonen approach. The inclusion of Particle Swarm Optimization (PSO) in the training algorithm of the Kohonen network provides a convergence condition which eliminates the iteration-dependent nature of the Kohonen network. The proposed methodology is tested on Magnetic Resonance (MR) brain tumor image classification. A comparative analysis with the conventional Kohonen network shows the superior nature of the proposed technique in terms of the performance measures.
Keywords: Image segmentation and Classification Accuracy, Kohonen Neural network, Particle Swarm Optimization (ID#: 15-7046)
URL: http://doi.acm.org/10.1145/2801081.2801095

 

Matthias Galster; “Software Reference Architectures: Related Architectural Concepts and Challenges,” CobRA '15, Proceedings of the 1st International Workshop on Exploring Component-based Techniques for Constructing Reference Architectures, May 2015, Pages 5–8. doi:10.1145/2755567.2755570
Abstract: Software reference architectures provide guidance when designing systems for particular application or technology domains. In this paper we contribute a better understanding of developing and using reference architectures: First, we relate the concept of software reference architecture to other architectural concepts to help engineers better understand the relationships between software development artifacts. Second, we discuss several high-level (and mostly non-technical) challenges related to the design and use of software reference architectures. These challenges can be used a) to formulate research problems for future work, and b) to define software product and development scenarios in which reference architectures may be difficult to apply. Finally, we explore application domains that may benefit from established reference architectures, including concrete challenges related to reference architectures in these domains.
Keywords: architectural concepts, challenges, frameworks, software reference architecture (ID#: 15-7047)
URL: http://doi.acm.org/10.1145/2755567.2755570

 

Ahmad-Reza Sadeghi, Christian Wachsmann, Michael Waidner; “Security and Privacy Challenges in Industrial Internet of Things,” DAC '15, Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 54. doi:10.1145/2744769.2747942
Abstract: Today, embedded, mobile, and cyberphysical systems are ubiquitous and used in many applications, from industrial control systems and modern vehicles to critical infrastructure. Current trends and initiatives, such as “Industrie 4.0” and the Internet of Things (IoT), promise innovative business models and novel user experiences through strong connectivity and effective use of the next generation of embedded devices. These systems generate, process, and exchange vast amounts of security-critical and privacy-sensitive data, which makes them attractive targets of attacks. Cyberattacks on IoT systems are very critical since they may cause physical damage and even threaten human lives. The complexity of these systems and the potential impact of cyberattacks bring about new threats. This paper gives an introduction to Industrial IoT systems, the related security and privacy challenges, and an outlook on possible solutions towards a holistic security framework for Industrial IoT systems.
Keywords: (not provided) (ID#: 15-7048)
URL: http://doi.acm.org/10.1145/2744769.2747942

 

Jia-bin Wang, Wei-Neng Chen, Jun Zhang, Ying Lin; “A Dimension-Decreasing Particle Swarm Optimization Method for Portfolio Optimization,” GECCO Companion '15, Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, July 2015, Pages 1515–1516. doi:10.1145/2739482.2764652
Abstract: Portfolio optimization problems are challenging as they contain different kinds of constraints, and their complexity becomes very high when the number of assets grows. In this paper, we develop a dimension-decreasing particle swarm optimization (DDPSO) for solving multi-constrained portfolio optimization problems. DDPSO improves the efficiency of PSO for portfolio optimization problems with many assets, and it can easily handle the cardinality constraint in portfolio optimization. To improve search diversity, the dimension-decreasing method is coupled with the comprehensive learning particle swarm optimization (CLPSO) algorithm. The proposed method is tested on benchmark problems from the OR library. Experimental results show that the proposed algorithm performs well.
Keywords: cardinality constraint, dimension-decreasing, particle swarm optimization, portfolio optimization (ID#: 15-7049)
URL: http://doi.acm.org/10.1145/2739482.2764652

 

William F. Bond, Ahmed Awad E.A.; “Touch-based Static Authentication Using a Virtual Grid,” IH&MMSec '15, Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 129–134. doi:10.1145/2756601.2756602
Abstract: Keystroke dynamics is a subfield of computer security in which the cadence of the typist’s keystrokes is used to determine authenticity. The static variety of keystroke dynamics uses typing patterns observed during the typing of a password or passphrase. This paper presents a technique for static authentication on mobile tablet devices using neural networks for analysis of keystroke metrics. Metrics used in the analysis of typing are monographs, digraphs, and trigraphs. Monographs as we define them consist of the time between the press and release of a single key, coupled with the discretized x–y location of the keystroke on the tablet. A digraph is the duration between the presses of two consecutively pressed keys, and a trigraph is the duration between the press of a key and the press of a key two keys later. Our technique combines the analysis of monographs, digraphs, and trigraphs to produce a confidence measure. Our best equal error rate for distinguishing users from impostors is 9.3% for text typing, and 9.0% for a custom experiment setup that is discussed in detail in the paper.
Keywords: Bayesian fusion, back-propagation neural networks, digraphs, discretization, keystroke dynamics, mobile authentication, monographs, receiver operating characteristic curve, static authentication, trigraphs (ID#: 15-7050)
URL: http://doi.acm.org/10.1145/2756601.2756602
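
The timing features the abstract defines are easy to extract once keystroke events are logged. In the sketch below, each event is an assumed (key, press_time, release_time, x, y) tuple; a monograph is the press-to-release time of one key plus its discretized grid cell, a digraph the gap between consecutive presses, and a trigraph the gap between presses two apart. The neural-network scoring stage is not shown.

def features(events, grid=50):
    """Compute monographs, digraphs, and trigraphs from keystroke events."""
    monographs = [(key, release - press, int(x // grid), int(y // grid))
                  for key, press, release, x, y in events]
    presses = [press for _, press, _, _, _ in events]
    digraphs = [b - a for a, b in zip(presses, presses[1:])]
    trigraphs = [b - a for a, b in zip(presses, presses[2:])]
    return monographs, digraphs, trigraphs

demo = [("p", 0.00, 0.09, 120, 300),
        ("a", 0.21, 0.30, 80, 310),
        ("s", 0.45, 0.52, 95, 305)]
print(features(demo))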

 

Marlena R. Fraune, Steven Sherrin, Selma Sabanović, Eliot R. Smith; “Rabble of Robots Effects: Number and Type of Robots Modulates Attitudes, Emotions, and Stereotypes,” HRI '15, Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, March 2015, Pages 109–116. doi:10.1145/2696454.2696483
Abstract: Robots are expected to become present in society in increasing numbers, yet few studies in human-robot interaction (HRI) go beyond one-to-one interaction to examine how emotions, attitudes, and stereotypes expressed toward groups of robots differ from those expressed toward individuals. Research from social psychology indicates that people interact differently with individuals than with groups. We therefore hypothesize that group effects might similarly occur when people face multiple robots. Further, group effects might vary for robots of different types. In this exploratory study, we used videos to expose participants in a between-subjects experiment to robots varying in Number (Single or Group) and Type (anthropomorphic, zoomorphic, or mechanomorphic). We then measured participants’ general attitudes, emotions, and stereotypes toward robots with a combination of measures from HRI (e.g., Godspeed Questionnaire, NARS) and social psychology (e.g., Big Five, Social Threat, Emotions). Results suggest that Number and Type of observed robots had an interaction effect on responses toward robots in general, leading to more positive responses for groups for some robot types, but more negative responses for others.
Keywords: attitudes, emotion, group effects, human-robot interaction, inter-group interactions, robot type, stereotypes (ID#: 15-7051)
URL: http://doi.acm.org/10.1145/2696454.2696483

 

Ayumi Sugiyama, Toshiharu Sugawara; “Meta-Strategy for Cooperative Tasks with Learning of Environments in Multi-Agent Continuous Tasks,” SAC '15, Proceedings of the 30th Annual ACM Symposium on Applied Computing, April 2015, Pages 494–500. doi:10.1145/2695664.2695878
Abstract: With the development of robot technology, we can expect self-propelled robots to work in large areas where cooperative and coordinated behaviors by multiple (hardware and software) robots are necessary. However, it is not trivial for agents, the control programs running on robots, to determine the actions for their cooperative behaviors, because such strategies depend on the characteristics of the environment and the capabilities of the individual agents. Therefore, using the example of continuous cleaning tasks by multiple agents, we propose a meta-strategy method that decides the appropriate planning strategies for cooperation and coordination by learning the performance of individual strategies and the environmental data in a multi-agent systems context, but without complex reasoning for deep coordination, owing to limited CPU capability and battery capacity. We experimentally evaluated our method by comparing it with a conventional method that assumes agents know which places agents visit frequently (since those places easily become dirty). We found that agents with the proposed method could operate as effectively as, and in complex areas outperformed, those with the conventional method. Finally, we describe how this counterintuitive result arises from autonomous agents splitting up the work based on local observations, and we discuss the limitations of the current method.
Keywords: continuous cleaning, cooperation, coordination, division of labor, multi-agent systems (ID#: 15-7052)
URL: http://doi.acm.org/10.1145/2695664.2695878

 

Afshin Shahriari, Hamid Parvin, Alireza Monajati; “Exploring Weights of Hierarchical and Equivalency Relationship in General Persian Texts,” EANN '15, Proceedings of the 16th International Conference on Engineering Applications of Neural Networks (INNS), September 2015, Article No. 7. doi:10.1145/2797143.2797167
Abstract: A thesaurus is a reference work that lists words grouped together according to similarity of meaning (containing synonyms and sometimes antonyms), in contrast to a dictionary, which contains definitions and pronunciations. Three kinds of relationships are used in a thesaurus: (1) equivalency, (2) hierarchy, and (3) association. This paper proposes a novel method for a classification task on general Persian text that employs a thesaurus. Two kinds of word relationships are used in the thesaurus we employ: (1) equivalency and (2) hierarchy. Each kind has a weight that can be tuned, and the paper explores the space of possible weights to find the proper ones. A feature selection mechanism is then applied, and a host of machine learning algorithms are employed as classifiers over the frequency-based features. Experimental results indicate that using the best weights for these relationships leads to good results.
Keywords: Equivalency, General Persian Text, Hierarchy (ID#: 15-7053)
URL: http://doi.acm.org/10.1145/2797143.2797167

 

Jean Michel Rouly, Huzefa Rangwala, Aditya Johri; “What Are We Teaching?: Automated Evaluation of CS Curricula Content Using Topic Modeling,” ICER '15, Proceedings of the Eleventh Annual International Conference on International Computing Education Research, July 2015, Pages 189–197. doi:10.1145/2787622.2787723
Abstract: Identifying the concepts covered in a university course based on a high level description is a necessary step in the evaluation of a university’s program of study. To this end, data describing university courses is readily available on the Internet in vast quantities. However, understanding natural language course descriptions requires manual inspection and, often, implicit knowledge of the subject area. Additionally, a holistic approach to curricular evaluation involves analysis of the prerequisite structure within a department, specifically the conceptual overlap between courses in a prerequisite chain. In this work we apply existing topic modeling techniques to sets of course descriptions extracted from publicly available university course catalogs. The inferred topic models correspond to concepts taught in the described courses. The inference process is unsupervised and generates topics without the need for manual inspection. We present an application framework for data ingestion and processing, along with a user-facing web-based application for inferred topic presentation. The software provides tools to view the inferred topics for a university’s courses, quickly compare departments by their topic composition, and visually analyze conceptual overlap in departmental prerequisite structures. The tool is available online at http://edmine.cs.gmu.edu/.
Keywords: course descriptions, prerequisite chain, topic modeling, web visualization (ID#: 15-7054)
URL: http://doi.acm.org/10.1145/2787622.2787723
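
The unsupervised inference step the abstract describes can be illustrated with a short sketch: topic inference over course descriptions via latent Dirichlet allocation in scikit-learn. The toy corpus, vectorizer settings, and topic count below are illustrative assumptions, not the configuration used by the authors.

```python
# Minimal sketch: unsupervised topic inference over course descriptions.
# The corpus, vectorizer settings, and topic count are illustrative
# assumptions, not the pipeline used by Rouly et al.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

course_descriptions = [
    "Introduction to algorithms, data structures, and complexity analysis.",
    "Operating systems: processes, scheduling, memory management, file systems.",
    "Machine learning: regression, classification, clustering, neural networks.",
]

# Bag-of-words features over the catalog text.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(course_descriptions)

# Infer topics without manual labeling; each topic is a distribution over words.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```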

 

Xiao-Fang Liu, Zhi-Hui Zhan, Jun Zhang; “Dichotomy Guided Based Parameter Adaptation for Differential Evolution,” GECCO '15, Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, July 2015,
Pages 289–296. doi:10.1145/2739480.2754646
Abstract: Differential evolution (DE) is an efficient and powerful population-based stochastic evolutionary algorithm, which evolves according to the differential between individuals. The success of DE in obtaining the optima of a specific problem depends greatly on the choice of mutation strategies and control parameter values. Good parameters lead the individuals towards optima successfully, and increasing the population’s success rate (the fraction of individuals that enter the next generation successfully) can speed up the search. Adaptive DE incorporates success-history or population-state based parameter adaptation. However, poor parameters sometimes improve an individual with small probability and are then recorded as successful parameters; such poor parameters may mislead the parameter control. In this paper, we therefore propose a novel approach to distinguish between good and poor parameters among the successful parameters. To speed up the convergence of the algorithm and find more “good” parameters, we propose a dichotomy adaptive DE (DADE), in which the successful parameters are divided into two parts and only the part with the higher success rate is used for parameter adaptation control. Simulation results show that DADE is competitive with other classic and adaptive DE algorithms on a set of benchmark problems and the IEEE CEC 2014 test suite.
Keywords: adaptive parameter control, dichotomy-guided, differential evolution, evolutionary optimization (ID#: 15-7055)
URL: http://doi.acm.org/10.1145/2739480.2754646
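
The dichotomy idea lends itself to a compact sketch. Below, successful (F, CR) pairs are split by the fitness improvement they produced, and only the better half drives the adaptation of the sampling means. The median split and the smoothed-mean update are assumptions for illustration, not the exact rule from the paper.

```python
import random

# Sketch of dichotomy-guided parameter adaptation (DADE-style). The median
# split on fitness improvement and the smoothed-mean update are illustrative
# assumptions, not the exact rule used in the paper.
def adapt_parameters(successes, mean_F, mean_CR, lr=0.1):
    """successes: list of (F, CR, improvement) for trial vectors that survived."""
    if len(successes) < 2:
        return mean_F, mean_CR
    # Dichotomy: keep only the half of the successful parameters that
    # produced the larger fitness improvements.
    ordered = sorted(successes, key=lambda s: s[2], reverse=True)
    good = ordered[: len(ordered) // 2]
    new_F = sum(s[0] for s in good) / len(good)
    new_CR = sum(s[1] for s in good) / len(good)
    # Smoothly move the sampling means toward the "good" half.
    return (1 - lr) * mean_F + lr * new_F, (1 - lr) * mean_CR + lr * new_CR

def sample_parameters(mean_F, mean_CR):
    # Draw per-trial F and CR around the adapted means, clipped to [0, 1].
    F = min(max(random.gauss(mean_F, 0.1), 0.0), 1.0)
    CR = min(max(random.gauss(mean_CR, 0.1), 0.0), 1.0)
    return F, CR

# Example: parameters recorded over one generation.
gen = [(0.5, 0.9, 3.2), (0.7, 0.8, 0.1), (0.6, 0.95, 2.5), (0.9, 0.2, 0.05)]
print(adapt_parameters(gen, mean_F=0.5, mean_CR=0.5))
```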

 

Aleksandr Farseev, Liqiang Nie, Mohammad Akbari, Tat-Seng Chua; “Harvesting Multiple Sources for User Profile Learning: a Big Data Study,” ICMR '15, Proceedings of the 5th ACM International Conference on Multimedia Retrieval, June 2015,
Pages 235–242. doi:10.1145/2671188.2749381
Abstract: User profile learning, such as mobility and demographic profile learning, is of great importance to various applications. Meanwhile, the rapid growth of multiple social platforms makes it possible to perform comprehensive user profile learning from different views. However, research efforts on user profile learning from multiple data sources are still relatively sparse, and no large-scale dataset has been released for user profile learning. In our study, we contribute such a benchmark and perform an initial study on user mobility and demographic profile learning. First, we constructed and released a large-scale multi-source multi-modal dataset from three geographical areas. We then applied our proposed ensemble model on this dataset to learn user profiles. Our experimental results show that multiple data sources mutually complement each other and that their appropriate fusion boosts the user profiling performance.
Keywords: demographic profile, mobility profile, multiple source integration, user profile learning (ID#: 15-7056)
URL: http://doi.acm.org/10.1145/2671188.2749381

 

J. Amudhavel, D. Rajaguru, S. Sampath Kumar, Sonali H. Lakhani, T. Vengattaraman, K. Prem Kumar; “A Chaotic Krill Herd Optimization Approach in VANET for Congestion Free Effective Multi Hop Communication,” ICARCSET '15, Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015), March 2015, Article No. 27. doi:10.1145/2743065.2743092
Abstract: VANET (Vehicular Ad-Hoc Network), a network in which vehicles act as nodes, has many applications in urban areas, where congestion has become a drastic problem. The Krill Herd algorithm, recently designed by Gandomi and Alavi, is one of the best optimization techniques. Although the Krill Herd algorithm handles many optimization problems well, it falls short on three counts: local optima avoidance, convergence speed, and freedom from congestion. In this paper, chaos theory is therefore introduced into the Krill Herd algorithm to address these three issues, forming the Chaotic Krill Herd algorithm (CKH). CKH employs three chaotic maps, namely Circle, Sine, and Sinusoidal, to provide chaotic behaviors and to give the krill population chaos-induced movements. With the help of these three chaotic maps, congestion can also be reduced to a great extent.
Keywords: Congestion, Convergence speed, Krill Herd Algorithm, Local optima, Route Discovery, VANET (ID#: 15-7057)
URL: http://doi.acm.org/10.1145/2743065.2743092
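
For readers unfamiliar with the three maps named in the abstract, the sketch below gives their common textbook forms. The constants, and how CKH actually injects the chaotic values into krill movements, are assumptions that may differ from the paper.

```python
import math

# Common textbook forms of the three chaotic maps named in the abstract;
# the constants used in the CKH paper may differ.
def circle_map(x, a=0.5, b=0.2):
    return (x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0

def sine_map(x):
    return math.sin(math.pi * x)

def sinusoidal_map(x, a=2.3):
    return a * x * x * math.sin(math.pi * x)

def chaotic_sequence(step, x0=0.7, n=10):
    """Generate n chaotic values, e.g. to perturb krill movements."""
    seq, x = [], x0
    for _ in range(n):
        x = step(x)
        seq.append(x)
    return seq

print(chaotic_sequence(circle_map))
print(chaotic_sequence(sine_map))
print(chaotic_sequence(sinusoidal_map))
```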

 

Yixiao Lin, Sayan Mitra; “StarL: Towards a Unified Framework for Programming, Simulating and Verifying Distributed Robotic Systems,” LCTES '15, Proceedings of the 16th ACM SIGPLAN/SIGBED Conference on Languages, Compilers and Tools for Embedded Systems 2015, June 2015, Article No. 9. doi:10.1145/2670529.2754966
Abstract: We developed StarL as a framework for programming, simulating, and verifying distributed systems that interact with physical processes. The StarL framework has (a) a collection of distributed primitives for coordination, such as mutual exclusion, registration, and geocast, that can be used to build sophisticated applications, (b) theory libraries for verifying StarL applications in the PVS theorem prover, and (c) an execution environment that can be used to deploy the applications on hardware or to execute them in a discrete event simulator. The primitives have (i) abstract, nondeterministic specifications in terms of invariants and assume-guarantee style progress properties, and (ii) implementations in Java/Android that always satisfy the invariants and attempt progress using best-effort strategies. The PVS theories specify the invariant and progress properties of the primitives, and have to be appropriately instantiated and composed with the application’s state machine to prove properties about the application. We have built two execution environments: one for deploying applications on the Android/iRobot Create platform and a second for simulating large instantiations of the applications in a discrete event simulator. The capabilities are illustrated with a StarL application for vehicle-to-vehicle coordination in an automatic intersection that uses primitives for point-to-point motion, mutual exclusion, and registration.
Keywords: Programming models, distributed systems, mechanical theorem proving (ID#: 15-7058)
URL: http://doi.acm.org/10.1145/2670529.2754966

 

Thomas Holleczek, Dang The Anh, Shanyang Yin, Yunye Jin, Spiros Antonatos, Han Leong Goh, Samantha Low, Amy Shi-Nash; “Traffic Measurement and Route Recommendation System for Mass Rapid Transit (MRT),” KDD '15, Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2015,
Pages 1859–1868. doi:10.1145/2783258.2788590
Abstract: Understanding how people use public transport is important for the operation and future planning of the underlying transport networks. We have therefore developed and deployed a traffic measurement system for a key player in the transportation industry to gain insights into crowd behavior for planning purposes. The system has been in operation for several months and reports, at hourly intervals, (1) the crowdedness of subway stations, (2) the flows of people inside interchange stations, and (3) the expected travel time for each possible route in the subway network of Singapore. The core of our system is an efficient algorithm which detects individual subway trips from anonymized real-time data generated by the location based system of Singtel, the country's largest telecommunications company. To assess the accuracy of our system, we engaged an independent market research company to conduct a field study: a manual count of the number of passengers boarding and disembarking at a selected station on three separate days. A strong correlation between the calculations of our algorithm and the manual counts was found. One of our key findings is that travelers do not always choose the route with the shortest travel time in the subway network of Singapore. We have therefore also been developing a mobile app which allows users to plan their trips based on the average travel time between stations.
Keywords: call detail records (cdrs), cellular networks, monitoring system, public transport (ID#: 15-7059)
URL: http://doi.acm.org/10.1145/2783258.2788590

 

Anirban Sengupta, Saumya Bhadauria; “Untrusted Third Party Digital IP Cores: Power-Delay Trade-off Driven Exploration of Hardware Trojan Secured Datapath During High Level Synthesis,” GLSVLSI '15, Proceedings of the 25th edition on Great Lakes Symposium on VLSI, May 2015, Pages 167–172. doi:10.1145/2742060.2742061
Abstract: An evolutionary algorithm (EA) driven novel design space exploration (DSE) of an optimized hardware Trojan secured datapath based on user power-delay constraints during high level synthesis (HLS) is presented. Very little attention has been paid to hardware Trojan secured datapath generation during HLS, and no effort so far has addressed design space exploration of a hardware Trojan secured datapath optimized for user multi-objective (MO) constraints. This problem mandates attention, as producing a Trojan secured datapath is not inconsequential: detecting a Trojan is not as straightforward as concurrent error detection (CED) of transient faults, because it involves multiple third party intellectual property (3PIP) vendors to facilitate detection, let alone the exploration of a user-optimized Trojan secured datapath based on MO constraints. The proposed DSE for hardware Trojan detection includes a novel problem encoding technique that enables exploration of efficient distinct vendor allocations as well as an optimized Trojan secured datapath structure. The exploration backbone of the proposed approach is the bacterial foraging optimization algorithm (BFOA), which is known for its adaptive features (tumbling/swimming) and simplified model. Comparison with a recent approach indicated an average improvement in quality of results (QoR) of >14.1%.
Keywords: 3PIP, BFOA, delay, DSE, hardware trojan, HLS, power (ID#: 15-7060)
URL: http://doi.acm.org/10.1145/2742060.2742061
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Web Caching 2015

 

 
SoS Logo

Web Caching

2015


Web caches offer a potential for mischief. With the expanded need for caching capability with the cloud and mobile communications, the need for more and better security has also grown. The articles cited here address cache security issues including geo-inference attacks, scriptless timing attacks, and a proposed incognito tab. Other research on caching generally is cited. These articles appeared in 2015.



Panja, B.; Gennarelli, T.; Meharia, P., “Handling Cross Site Scripting Attacks Using Cache Check to Reduce Webpage Rendering Time with Elimination of Sanitization and Filtering in Light Weight Mobile Web Browser,” in Mobile and Secure Services (MOBISECSERV), 2015 First Conference on, vol., no., pp. 1–7, 20–21 Feb. 2015. doi:10.1109/MOBISECSERV.2015.7072878
Abstract: In this paper we propose a new approach to prevent and detect potential cross-site scripting attacks. Our method, called Buffer Based Cache Check, utilizes both the server side and the client side to detect and prevent XSS attacks, and requires modification of both in order to function correctly. With Cache Check, instead of the server supplying a complete whitelist of all the known trusted scripts to the mobile browser every time a page is requested, the server stores a cache that contains a validated “trusted” instance from the last time the page was rendered, which can be checked against the requested page for inconsistencies. We believe that with our proposed method, rendering times in mobile browsers will be significantly reduced, as part of the checking is done by the server, leaving less checking to the mobile browser, which is slower than the server. With our method the entire checking process is not dumped onto the mobile browser, so the browser should render pages faster, as it checks only for “untrusted” content. With other approaches, every single line of code is checked by the mobile browser, which increases rendering times.
Keywords: cache storage; client-server systems; mobile computing; online front-ends; security of data; trusted computing; Web page rendering time; XSS attacks; buffer based cache check; client-side; cross-site scripting attacks; filtering; light weight mobile Web browser; sanitization; server-side; trusted instance; untrusted content; Browsers; Filtering; Mobile communication; Radio access networks; Rendering (computer graphics); Security; Servers; Cross site scripting; cache check; mobile browser; webpage rendering (ID#: 15-7179)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072878&isnumber=7072857
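
A toy sketch of the server-side idea: keep digests of the script blocks from the last trusted rendering of a page, and forward only the blocks that do not match for the browser to inspect. Everything here (the regex-based script extraction, SHA-256 digests, the in-memory cache) is an illustrative assumption, not the authors' implementation.

```python
import hashlib
import re

# Illustrative server-side cache check: hash each script block from the last
# trusted rendering; on the next request, report only the blocks that differ.
SCRIPT_RE = re.compile(r"<script[^>]*>.*?</script>", re.DOTALL | re.IGNORECASE)

def script_digests(html):
    return {hashlib.sha256(s.encode()).hexdigest() for s in SCRIPT_RE.findall(html)}

trusted_cache = {}  # url -> digests of scripts in the last trusted rendering

def untrusted_scripts(url, html):
    """Return the script blocks that differ from the cached trusted instance."""
    known = trusted_cache.get(url, set())
    return [s for s in SCRIPT_RE.findall(html)
            if hashlib.sha256(s.encode()).hexdigest() not in known]

# Usage: cache a trusted rendering, then check a (possibly injected) page.
trusted_cache["http://example.com/"] = script_digests(
    "<html><script>var ok = 1;</script></html>")
page = "<html><script>var ok = 1;</script><script>evil()</script></html>"
print(untrusted_scripts("http://example.com/", page))  # only the injected block
```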

 

Basile, C.; Lioy, A., “Analysis of Application-Layer Filtering Policies with Application to HTTP,” in Networking, IEEE/ACM Transactions on, vol. 23, no.1, pp. 28–41, Feb. 2015. doi:10.1109/TNET.2013.2293625
Abstract: Application firewalls are increasingly used to inspect upper-layer protocols (such as HTTP) that are the target or vehicle of several attacks and are not properly addressed by network firewalls. Like other security controls, application firewalls need to be carefully configured, as errors have a significant impact on service security and availability. However, currently no technique is available to analyze their configuration for correctness and consistency. This paper extends a previous model for analysis of packet filters to the policy anomaly analysis in application firewalls. Both rule-pair and multirule anomalies are detected, hence reducing the likelihood of conflicting and suboptimal configurations. The expressiveness of this model has been successfully tested against the features of Squid, a popular Web caching proxy offering various access control capabilities. The tool implementing this model has been tested on various scenarios and exhibits good performance.
Keywords: Internet; authorisation; firewalls; transport protocols; HTTP; Squid Web caching proxy; access control capabilities; application firewalls; application-layer filtering policies; multirule anomalies; packet filters; policy anomaly analysis; rule-pair anomalies; service security; upper-layer protocols; Access control; Analytical models; IEEE transactions; IP networks; Logic gates; Protocols; Application gateway; firewall; policy anomalies; policy conflicts; proxy; regular expressions (ID#: 15-7180)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690252&isnumber=7041254

 

Gerbet, Thomas; Kumar, Amrit; Lauradoux, Cedric, “The Power of Evil Choices in Bloom Filters,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 101–112, 22–25 June 2015. doi:10.1109/DSN.2015.21
Abstract: A Bloom filter is a probabilistic hash-based data structure extensively used in software including online security applications. This paper raises the following important question: Are Bloom filters correctly designed in a security context? The answer is no and the reasons are multiple: bad choices of parameters, lack of adversary models and misused hash functions. Indeed, developers truncate cryptographic digests without a second thought on the security implications. This work constructs adversary models for Bloom filters and illustrates attacks on three applications, namely SCRAPY web spider, BITLY DABLOOMS spam filter and SQUID cache proxy. As a general impact, filters are forced to systematically exhibit worst-case behavior. One of the reasons being that Bloom filter parameters are always computed in the average case. We compute the worst-case parameters in adversarial settings, show how to securely and efficiently use cryptographic hash functions and propose several other countermeasures to mitigate our attacks.
Keywords: Complexity theory; Cryptography; Data structures; Electronic mail; Indexes; Software; Bloom filters; Denial-of-Service; Digest truncation; Hash functions; Pre-image attack (ID#: 15-7181)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266842&isnumber=7266818
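
In the spirit of the countermeasures the paper proposes, the sketch below derives a Bloom filter's k indexes from a keyed cryptographic hash (HMAC-SHA256) rather than from truncated digests, and sizes the filter with the standard average-case formulas m = -n ln p / (ln 2)^2 and k = (m/n) ln 2. The construction is illustrative, not the authors' code.

```python
import hashlib
import hmac
import math

# Bloom filter whose k indexes come from a keyed cryptographic hash, so an
# adversary without the key cannot precompute colliding items. Illustrative
# sketch in the spirit of the paper's countermeasures.
class Bloom:
    def __init__(self, n, p, key=b"secret-key"):
        # Standard average-case sizing: m = -n ln p / (ln 2)^2, k = (m/n) ln 2.
        self.m = max(1, int(-n * math.log(p) / (math.log(2) ** 2)))
        self.k = max(1, round(self.m / n * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)
        self.key = key

    def _indexes(self, item):
        # Derive k independent indexes from HMAC-SHA256 over item || counter.
        for i in range(self.k):
            d = hmac.new(self.key, item + bytes([i]), hashlib.sha256).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))

bf = Bloom(n=1000, p=0.01)
bf.add(b"example.com/path")
print(b"example.com/path" in bf)   # True
print(b"example.com/other" in bf)  # almost certainly False
```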

 

Yaoqi Jia; Xinshu Dong; Zhenkai Liang; Saxena, P., “I Know Where You’ve Been: Geo-Inference Attacks via the Browser Cache,” in Internet Computing, IEEE, vol. 19, no.1, pp. 44–53, Jan–Feb. 2015. doi:10.1109/MIC.2014.103
Abstract: To provide more relevant content and better responsiveness, many websites customize their services according to users’ geolocations. However, if geo-oriented websites leave location-sensitive content in the browser cache, other sites can sniff that content via side channels. The authors’ case studies demonstrate the reliability and power of geo-inference attacks, which can measure the timing of browser cache queries and track a victim’s country, city, and neighborhood. Existing defenses cannot effectively prevent such attacks, and additional support is required for a better defense deployment.
Keywords: Web sites; cache storage; geography; online front-ends; browser cache; geo-inference attacks; geo-oriented Websites; side channels; Browsers; Cache memory; Content management; Geography; Google; Internet; Mobile radio management; Privacy; Web browsers; Web technologies; security and privacy protection (ID#: 15-7182)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879050&isnumber=7031813
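
The side channel reduces to a timing threshold: a cached load returns quickly, an uncached one does not. The toy sketch below simulates that distinction; the in-process cache stand-in, the 200 ms "network" delay, and the 50 ms threshold are arbitrary assumptions, not measurements from the paper.

```python
import time

# Toy illustration of the timing side channel behind geo-inference: a cached
# load returns quickly, an uncached one does not. The cache stand-in, delay,
# and threshold are arbitrary assumptions for the sketch.
cache = {}

def load(url):
    if url in cache:
        return cache[url]          # fast path: "browser cache hit"
    time.sleep(0.2)                # slow path: simulated network fetch
    cache[url] = b"<resource bytes>"
    return cache[url]

def probably_visited(url, threshold_s=0.05):
    start = time.perf_counter()
    load(url)
    return (time.perf_counter() - start) < threshold_s

# A location-specific resource (e.g., a country-specific map tile) that loads
# fast reveals the victim has been there before.
print(probably_visited("https://maps.example/sg-tile.png"))  # False: cold
print(probably_visited("https://maps.example/sg-tile.png"))  # True: cached
```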

 

Qiao, Xiuquan; Chen, Jun-Liang; Tan, Wei; Dustdar, Schahram, “Service Provisioning in Content-Centric Networking: Challenges, Opportunities, and Promising Directions,” in Internet Computing, IEEE, vol., no. 99, pp. 1–1. doi:10.1109/MIC.2015.116
Abstract: With the evolution of Internet applications, the contemporary IP-based Internet architecture increasingly finds itself incapable of meeting the demands of current network usage patterns. Content-Centric Networking (CCN), a clean-slate future network architecture, differs from existing IP networks and has salient features such as in-network caching, name-based routing, friendly mobility, and built-in security. This new architecture has a profound impact on how Internet applications are provisioned. Here, from the perspective of upper-layer applications, we discuss four challenges and three opportunities regarding service provisioning in CCN. We describe an approach called Service Innovation Environment for Future Internet (SIEFI) that addresses the challenges while exploiting the opportunities for the future of CCN.
Keywords: Computer architecture; IP networks; Routing; Technological innovation; Web servers (ID#: 15-7183)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7239513&isnumber=5226613

 

Lee, R.B., “Rethinking Computers for Cybersecurity,” in Computer, vol. 48, no.4, pp.16–25, Apr. 2015. doi:10.1109/MC.2015.118
Abstract: Cyberattacks are growing at an alarming rate, even as our dependence on cyberspace transactions increases. Our software security solutions may no longer be sufficient. It is time to rethink computer design from the foundations. Can hardware security be enlisted to improve cybersecurity? The author discusses two classes of hardware security: hardware-enhanced security architectures for improving software and system security, and secure hardware. The Web extra at https://youtu.be/z-c9ACviGNo is a video of a 2006 invited seminar at the Naval Postgraduate School, in which author Ruby B. Lee presents the Secret-Protected (SP) architecture, which is a minimalist set of hardware features that can be added to any microprocessor or embedded processor that protects the “master secrets” that in turn protect other keys and encrypted information, programs and data.
Keywords: security of data; computer design; cybersecurity; cyberspace transactions; hardware security; hardware-enhanced security architectures; software security improvement; system security improvement; Access control; Computer architecture; Computer crime; Computer security; Cryptography; Cloud; SaaS; computer architecture; cryptography; data access control; hackers; secure caches; security; self-protecting data; trusted software (ID#: 15-7184)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085648&isnumber=7085638

 

Aghaei-Foroushani, V.; Zincir-Heywood, A.N., “A Proxy Identifier Based on Patterns in Traffic Flows,” in High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, vol., no., pp. 118–125, 8–10 Jan. 2015. doi:10.1109/HASE.2015.26
Abstract: Proxies are used commonly on today’s Internet. On one hand, end users can choose to use proxies for hiding their identities for privacy reasons. On the other hand, ubiquitous systems can use it for intercepting the traffic for purposes such as caching. In addition, attackers can use such technologies to anonymize their malicious behaviours and hide their identities. Identification of such behaviours is important for defense applications since it can facilitate the assessment of security threats. The objective of this paper is to identify proxy traffic as seen in a traffic log file without any access to the proxy server or the clients behind it. To achieve this: (i) we employ a mixture of log files to represent real-life proxy behavior, and (ii) we design and develop a data driven machine learning based approach to provide recommendations for the automatic identification of such behaviours. Our results show that we are able to achieve our objective with a promising performance even though the problem is very challenging.
Keywords: Internet; data privacy; pattern recognition; telecommunication traffic; ubiquitous computing; Internet; log files; malicious behaviours; patterns; privacy reasons; proxy identifier; real-life proxy behavior; security threats; traffic flows; ubiquitous systems; Cryptography; Delays; IP networks; Probes; Web servers; Behavior Analysis; Network Security; Proxy; Traffic Flow (ID#: 15-7185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027422&isnumber=7027398
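
The general recipe, summarizing each flow into numeric features and training a supervised classifier on flows labeled proxy versus direct, can be sketched as follows. The synthetic data, feature set, and random-forest model are assumptions for illustration; the paper's actual features and learner may differ.

```python
import random
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def flow_features(sizes, duration):
    # Simple per-flow summary: packet count, bytes, mean size, packet rate.
    return [len(sizes), sum(sizes), sum(sizes) / len(sizes), len(sizes) / duration]

# Synthetic stand-in data: proxied flows aggregate many clients, so here we
# give them more packets per flow than direct ones. Purely illustrative.
random.seed(0)
X, y = [], []
for _ in range(200):
    proxied = random.random() < 0.5
    n = random.randint(50, 200) if proxied else random.randint(5, 40)
    sizes = [random.randint(60, 1500) for _ in range(n)]
    X.append(flow_features(sizes, duration=random.uniform(1, 30)))
    y.append(int(proxied))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```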

 

Gillman, D.; Yin Lin; Maggs, B.; Sitaraman, R.K., “Protecting Websites from Attack with Secure Delivery Networks,” in Computer, vol. 48, no.4, pp. 26–34, Apr. 2015. doi:10.1109/MC.2015.116
Abstract: Secure delivery networks can help prevent or mitigate the most common attacks against mission-critical websites. A case study from a leading provider of content delivery services illustrates one such network’s operation and effectiveness. The Web extra at https://youtu.be/4FRRI0aJLQM is an overview of the evolving threat landscape with Akamai Director of Web Security Solutions Product Marketing, Dan Shugrue. Dan also shares how Akamai’s Kona Site Defender service handles the increasing frequency, volume and sophistication of Web attacks with a unique architecture that is always on and doesn’t degrade performance.
Keywords: Web sites; security of data; Web attacks; Website protection; content delivery services; mission-critical Websites; secure delivery networks; Computer crime; Computer security; Firewalls (computing); IP networks; Internet; Protocols; Akamai Technologies; DDoS attacks; DNS; Domain Name System; Internet/Web technologies; Operation Ababil; SQL injection; WAF; Web Application Firewall; XSS; cache busting; cross-site scripting; cybercrime; distributed denial-of-service attacks; distributed systems; floods; hackers; security (ID#: 15-7186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085639&isnumber=7085638

 

Jin, Yong; Fujikawa, Kenji; Harai, Hiroaki; Ohta, Masataka, “Secure Glue: A Cache and Zone Transfer Considering Automatic Renumbering,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol.2, no., pp. 393–398, 1-5 July 2015. doi:10.1109/COMPSAC.2015.38
Abstract: The Domain Name System (DNS) is the most widely used name resolution system for computers and services in the Internet. The number of domain name registrations is reaching 276 million across all top level domains (TLDs) today, and the DNS query count is increasing year over year. The main reason for the high DNS query count is the increase in out-of-bailiwick domain name delegation, since such a delegation (an NS record without a glue A record) makes the client send extra DNS queries for the glue A record. On the other hand, the master/slave model is not compatible with address renumbering in DNS, since the master is indicated by its IP address in the slave. It is therefore necessary to redesign the current DNS protocol for lower name resolution latency and better automatic convergence after address renumbering, in order to provide an effective and sustained name resolution service. In this paper, we propose two mechanisms: a secure glue A cache and update, which reduces name resolution latency by cutting the DNS query count with low security risk, and an automatic zone transfer, which automatically recovers the DNS based on FQDNs (Fully Qualified Domain Names) after address renumbering. We successfully implemented a prototype in Linux as an extended form of BIND (Berkeley Internet Name Domain). The evaluation results confirmed a reduction of approximately 25% in the DNS query count and successful automatic DNS recovery after address renumbering.
Keywords: IP networks; Protocols; Prototypes; Semiconductor optical amplifiers; Servers; Web and internet services; Automatic address renumbering; DNS; Glue A; Out-of-bailiwick; Zone transfer (ID#: 15-7187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273645&isnumber=7273573

 

Nakano, Yuusuke; Kamiyama, Noriaki; Shiomoto, Kohei; Hasegawa, Go; Murata, Masayuki; Miyahara, Hideo, “Web Performance Acceleration by Caching Rendering Results,” in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, vol., no., pp. 244–249, 19–21 Aug. 2015. doi:10.1109/APNOMS.2015.7275434
Abstract: Web performance, the time from clicking a link on a web page to finishing displaying the web page of the link, is becoming increasingly important. Low web performance tends to result in the loss of customers. In our research, we measured the time for downloading files on popular web pages by running web browsers on four hosts worldwide using PlanetLab, and we found the longest portion of download time to be Blocked time, the time a web browser waits before it can start downloading. In this paper, we propose a method for accelerating web performance by reducing such Blocked time with a cache of rendering results. The proposed method uses an in-network rendering function which renders web pages instead of web browsers. The in-network rendering function also stores the rendering results in its cache and reuses them for other web browsers to reduce the Blocked time. To evaluate the proposed method, we calculated the web performance of web pages whose rendering results are cached by analyzing the measured download time of actual web pages. We found that the proposed method accelerates the web performance of long round trip time (RTT) web pages or long RTT clients if the web pages’ dynamic file percentages are at most 80%.
Keywords: Acceleration; Browsers; Rendering (computer graphics);Time measurement; Web pages; Web servers (ID#: 15-7188)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275434&isnumber=7275336

 

Chuanfei Xu; Bo Tang; Man Lung Yiu, “Diversified Caching for Replicated Web Search Engines,” in Data Engineering (ICDE), 2015 IEEE 31st International Conference on, vol., no., pp. 207–218, 13–17 April 2015. doi:10.1109/ICDE.2015.7113285
Abstract: Commercial web search engines adopt parallel and replicated architecture in order to support high query throughput. In this paper, we investigate the effect of caching on the throughput in such a setting. A simple scheme, called uniform caching, would replicate the cache content to all servers. Unfortunately, it does not exploit the variations among queries, thus wasting memory space on caching the same cache content redundantly on multiple servers. To tackle this limitation, we propose a diversified caching problem, which aims to diversify the types of queries served by different servers, and maximize the sharing of terms among queries assigned to the same server. We show that it is NP-hard to find the optimal diversified caching scheme, and identify intuitive properties to seek good solutions. Then we present a framework with a suite of techniques and heuristics for diversified caching. Finally, we evaluate the proposed solution with competitors by using a real dataset and a real query log.
Keywords: cache storage; query processing; search engines; NP-hard; optimal diversified caching scheme; parallel architecture; real query log; replicated Web search engines; replicated architecture; Computer architecture; Indexes; Search engines; Servers; Silicon; Throughput; Training (ID#: 15-7189)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113285&isnumber=7113253
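
A greedy heuristic in the spirit of diversified caching can be sketched briefly: route each query to the server whose cached term set overlaps it most, so servers specialize rather than caching the same terms redundantly. This illustrates the objective only; it is not the paper's algorithm, which the authors show is NP-hard to solve optimally.

```python
# Greedy sketch in the spirit of diversified caching: assign each query to
# the server whose cached term set overlaps it most, so servers specialize
# and terms are not cached redundantly everywhere. A heuristic illustration,
# not the paper's algorithm.
def assign_queries(queries, n_servers):
    caches = [set() for _ in range(n_servers)]
    assignment = []
    for q in queries:
        terms = set(q.split())
        # Prefer overlap; break ties toward the emptier cache to balance load.
        best = max(range(n_servers),
                   key=lambda s: (len(terms & caches[s]), -len(caches[s])))
        caches[best] |= terms
        assignment.append(best)
    return assignment, caches

queries = ["cheap flights paris", "paris hotels", "rust web framework",
           "web framework benchmarks"]
assignment, caches = assign_queries(queries, n_servers=2)
print(assignment)  # [0, 0, 1, 1]: travel queries vs. developer queries
```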

 

Bangar, P.; Singh, K.N., “Investigation and Performance Improvement of Web Cache Recommender System,” in Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, vol., no., pp. 585–589, 25–27 Feb. 2015. doi:10.1109/ABLAZE.2015.7154930
Abstract: Many large- and small-scale applications are now being developed to fulfill users’ needs, and in recent years Web-based applications in particular have been growing rapidly. This growth affects network performance and slows the browsing experience, so improvements to traditional browsing and prefetching techniques are required in order to optimize application speed and deliver high performance Web pages. In this paper, pre-fetching techniques are investigated, and a recommendation system for cache replacement is developed. A promising data model for the recommendation engine is found in [6]. The given system uses the proxy access log for data analysis; the main advantage of the proxy access log is that it contains the entire Web page navigation history of a targeted user. This data model offers high performance outcomes, but its computational complexity is hard to accommodate. The traditional data model is therefore modified with a new scheme: the K-means algorithm is applied for user data personalization, the ID3 algorithm is used to learn user navigation patterns, and KNN together with probability theory is used to predict upcoming Web URLs for pre-fetching. The proposed data model is implemented in the Visual Studio framework, and the performance of the system is evaluated and compared in terms of memory used, time consumption, accuracy, and error rate. According to the obtained results, the proposed predictive system offers higher performance than the traditional data model.
Keywords: cache storage; data models; learning (artificial intelligence); probability; recommender systems; ID3 algorithm; K-mean algorithm; KNN; Web URL prediction; Web based applications; Web cache recommender system; Web pages; accuracy analysis; browsing experience; browsing technique; cache replacement; computational complexity; data analysis; data model; error rate; memory consumption; network performance; performance evaluation; performance improvement; predictive system; prefetching technique; probability theory; proxy access log; recommendation engine design; time consumption; user data personalization; user navigation pattern learning; visual studio framework; Accuracy; Algorithm design and analysis; Data mining; Data models; Error analysis; Memory management; Prediction algorithms; ID3; K-means; caching; pre-fetching (ID#: 15-7191)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154930&isnumber=7154914

 

Johnson, T.A.; Seeling, P., “Browsing the Mobile Web: Device, Small Cell, and Distributed Mobile Caches,” in Communication Workshop (ICCW), 2015 IEEE International Conference on, vol., no., pp.1025–1029, 8–12 June 2015. doi:10.1109/ICCW.2015.7247311
Abstract: The increasing amounts of data requested by mobile client devices have given rise to broad research endeavors to determine how network providers can cope with this challenge. Based on real world data used to derive upper limits of web page complexity, we provide an evaluation of web browsing and localized caching approaches. In this paper, we employ two different user-browsing models for (i) individual mobile clients, (ii) mobile clients sharing one centralized small cell cache, and (iii) mobile clients operating in an energy-optimized co-located fashion. We find that for a given content popularity distribution, average group savings due to caching depend highly on the user model. Furthermore, we find that for the purpose of overall savings determinations, an aggregated virtual cache falls within less than ten percent of a more elaborate energy-conscious approach to caching.
Keywords: Internet; cellular radio; mobile computing; Web page complexity; aggregated virtual cache; centralized small cell cache; content popularity distribution; distributed mobile caches; energy-conscious approach; energy-optimized colocated fashion; group savings; localized caching approaches; mobile Web; mobile client devices; mobile clients sharing; network providers; real world data; user-browsing models; Conferences; Data models; Joints; Mobile communication; Mobile computing; Mobile handsets; Web pages; Cooperative communications; Green mobile communications; Mobile communications; Mobile cooperative applications (ID#: 15-7192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247311&isnumber=7247062

 

Matsushita, Kazuki; Nishimine, Masashi; Ueda, Kazunori, “Cooperative Cache Distribution System for Virtual P2P Web Proxy,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, no., pp. 646–647, 1–5 July 2015. doi:10.1109/COMPSAC.2015.147
Abstract: In recent years, data transfer via the WWW has become one of the most popular applications, and web traffic on the Internet consumes considerable network resources. We have previously proposed a peer-to-peer cache distribution system to reduce the consumption of network resources. Systems based on our proposal enable peers to receive part of the data from other peers while downloading data from a server. In this paper, we report further extensions for implementation in web browsers as plug-in software.
Keywords: Computers; Conferences; Multimedia communication; Peer-to-peer computing; Protocols; Servers; Software (ID#: 15-7193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273447&isnumber=7273299

 

Khandekar, A.A.; Mane, S.B., “Analyzing Different Cache Replacement Policies on Cloud,” in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, vol., no., pp. 709–712, 28–30 May 2015. doi:10.1109/IIC.2015.7150834
Abstract: Today, caching is considered the key technology for bridging the performance gap between levels of the memory hierarchy through spatial or temporal locality; in disk storage systems in particular, it has a prominent effect. To achieve higher performance in operating systems, databases, and the World Wide Web, caching is considered one of the major steps in system design. In cloud systems, different applications generate heavy I/O activity, which degrades performance; these applications would benefit the most if caching were implemented. Various cache replacement policies have been proposed and implemented to enhance system performance, and these algorithms determine the enhancement factor and play a major role in the efficiency of the system. Different caching policies have different effects on system performance, and the traditional cache replacement algorithms are not easily applicable to web applications. As the demand for web services increases, there is a need to reduce download time and Internet traffic. To avoid cache saturation and make caching effective, an informed decision has to be made as to which documents to evict from the cache. This paper compares different cache replacement policies in traditional systems as well as in web applications, and proposes a system that implements the LRU and CERA caching algorithms and gives its performance evaluation.
Keywords: Web services; cache storage; cloud computing; operating systems (computers); storage management; telecommunication traffic; I/O; Internet traffic; Web services; World Wide Web; cache replacement policies; cloud; databases; disk storage system; memory hierarchies; operating systems; spatial localities; temporal localities; Algorithm design and analysis; Cloud computing; Computers; Performance evaluation; Servers; System performance (ID#: 15-7194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150834&isnumber=7150576
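
Of the two policies the paper implements, LRU is simple enough to sketch in a few lines; CERA's eviction rule is not reproduced here.

```python
from collections import OrderedDict

# Minimal LRU cache: on overflow, evict the entry that was used least recently.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("/index.html", b"...")
cache.put("/app.js", b"...")
cache.get("/index.html")                    # touch: /app.js is now the LRU entry
cache.put("/style.css", b"...")             # evicts /app.js
print(cache.get("/app.js"))                 # None
```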

 

Ahmed, S.T.; Loguinov, D., “Modeling Randomized Data Streams in Caching, Data Processing, and Crawling Applications,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1625–1633, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218542
Abstract: Many BigData applications (e.g., MapReduce, web caching, search in large graphs) process streams of random key-value records that follow highly skewed frequency distributions. In this work, we first develop stochastic models for the probability to encounter unique keys during exploration of such streams and their growth rate over time. We then apply these models to the analysis of LRU caching, MapReduce overhead, and various crawl properties (e.g., node-degree bias, frontier size) in random graphs.
Keywords: Big Data; cache storage; information retrieval; parallel processing; stochastic processes; Big Data applications; LRU caching; MapReduce overhead; caching application; crawl properties; crawling application; data processing; frequency distribution; probability; random graphs; randomized data streams; stochastic model; Analytical models; Computational modeling; Computers; Conferences; Random variables; Stochastic processes; Yttrium (ID#: 15-7195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218542&isnumber=7218353
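
The quantity the paper models analytically, the growth of unique keys as a skewed stream is consumed, is easy to simulate. The Zipf skew parameter and stream length below are arbitrary choices for the sketch.

```python
import numpy as np

# Simulate the quantity the paper models analytically: how the count of
# unique keys grows while consuming a skewed (Zipf-distributed) stream.
# Skew parameter and stream length are arbitrary choices for this sketch.
rng = np.random.default_rng(0)
stream = rng.zipf(a=1.5, size=100_000)

seen = set()
growth = []
for i, key in enumerate(stream, 1):
    seen.add(int(key))
    if i % 10_000 == 0:
        growth.append((i, len(seen)))

# Sublinear growth: most records repeat a small set of hot keys.
for n, u in growth:
    print(f"after {n:6d} records: {u:6d} unique keys")
```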

 

Ahammad, P.; Gaunker, R.; Kennedy, B.; Reshadi, M.; Kumar, K.; Pathan, A.K.; Kolam, H., “A Flexible Platform for QoE-Driven Delivery of Image-Rich Web Applications,” in Multimedia and Expo (ICME), 2015 IEEE International Conference on, vol., no.,
pp. 1–6,  June 29 2015–July 3 2015. doi:10.1109/ICME.2015.7177516
Abstract: The advent of content-rich modern web applications, unreliable network connectivity and device heterogeneity demands flexible web content delivery platforms that can handle the high variability along many dimensions — especially for the mobile web. Images account for more than 60% of the content delivered by present-day webpages and have a strong influence on the perceived webpage latency and end-user experience. We present a flexible web delivery platform with a client-cloud architecture and content-aware optimizations to address the problem of delivering image-rich web applications. Our solution makes use of quantitative measures of image perceptual quality, machine learning algorithms, partial caching and opportunistic client-side choices to efficiently deliver images on the web. Using data from the WWW, we experimentally demonstrate that our approach shows significant improvement on various web performance criteria that are critical for maintaining a desirable end-user quality-of-experience (QoE) for image-rich web applications.
Keywords: Internet; cloud computing; image processing; learning (artificial intelligence); mobile computing; quality of experience; QoE-driven delivery; Web performance criteria; client-cloud architecture; content-aware optimizations; content-rich modern Web applications; end-user experience; end-user quality-of-experience; flexible Web content delivery platforms; image perceptual quality; image-rich Web applications; machine learning algorithms; mobile Web; opportunistic client-side choices; partial caching; perceived Web page latency; Browsers; Image coding; Mobile communication; Optimization; Servers; Streaming media; Transcoding; Content-aware performance optimization; Multimedia web applications; Quality of Experience; Web delivery service (ID#: 15-7196)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177516&isnumber=7177375

 

Herrero Agustin, J.L., “Model-Driven Web Applications,” in Science and Information Conference (SAI), 2015, vol., no.,
pp. 954–964, 28–30 July 2015. doi:10.1109/SAI.2015.7237258
Abstract: With the evolution of Web 2.0 and the appearance of AJAX technology, a new breed of applications for the Web has emerged. However, the low degree of reusability achieved and high development costs are the main problems identified in this domain. Another important issue that must be taken into consideration is that the performance of this type of application is drastically affected by latency, since applications must be downloaded before they can be used. It therefore becomes essential to adopt a software development approach that attenuates these problems, which is why this paper proposes a model-driven architecture for developing web applications. Towards this end, the following tasks have been developed: first, a new profile extends UML and introduces web concepts at the design level; then, a new framework supports web application development according to the component-based methodology; and finally, a transformation model is proposed to generate the final code semi-automatically. Another contribution of this work is the definition of a cache and a prefetching protocol to reduce latency and provide high performance web applications.
Keywords: Internet; object-oriented programming; software engineering; storage management; AJAX technology; UML; cache protocol; component-based methodology; high performance Web applications; model-driven Web applications; prefetching protocol; software development approach; Browsers; Cities and towns; Computational modeling; Data models; Proposals; Unified modeling language; Web services; AJAX; component-based software engineering; model-driven architecture; rich internet applications; web applications (ID#: 15-7197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237258&isnumber=7237120

 

Horiuchi, A.; Saisho, K., “Development of Scaling Mechanism for Distributed Web System,” in Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015 16th IEEE/ACIS International Conference on, vol., no., pp. 1–6, 1–3 June 2015. doi:10.1109/SNPD.2015.7176214
Abstract: Progress in virtualization technology in recent years has made it easy to build cache servers in the Cloud, making it possible to increase Web service capacity with virtual cache servers. However, the expected responsiveness cannot be achieved when there are too few cache servers for the load; conversely, too many cache servers waste resources and increase costs. We have therefore been developing a distributed Web system suitable for the Cloud that adjusts the number of Web servers according to their load in order to reduce running costs. This research aims to implement the scaling mechanism for the distributed Web system. It has three functions: a load monitoring function, a cache server management function, and a destination setting function. This paper describes these functions and an evaluation of a prototype of the scaling mechanism.
Keywords: cache storage; cloud computing; distributed processing; virtualisation; Web service capacity; cache server management function; destination setting function; distributed Web system; load monitoring function; scaling mechanism; scaling mechanism development; virtual cache servers; virtualization technology; Load management; Mirrors; Monitoring; Time factors; Time measurement; Web servers; Auto Scaling; Cache Server; Cloud; Load Balancing (ID#: 15-7198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176214&isnumber=7176160

 

Polonia, P.V.; Bier Melgarejo, L.F.; Hering de Queiroz, M., “A Resource Oriented Architecture for Web-Integrated SCADA Applications,” in Factory Communication Systems (WFCS), 2015 IEEE World Conference on, vol., no., pp. 1–8, 27–29 May 2015. doi:10.1109/WFCS.2015.7160563
Abstract: Supervisory Control and Data Acquisition (SCADA) systems are widely used on industry and public utility services to gather information from field devices and to control and monitor processes. The adoption of Internet technologies in automation have brought new opportunities and challenges for industries, establishing the need to integrate information from various sources on the Web. This paper exposes the design and implementation of a Resource Oriented Architecture for typical SCADA applications based on the architectural principles of the Representational State Transfer (REST) architectural style. The application to a didactic Flexible Manufacturing Cell illustrates how SCADA can take advantage of the interoperability afforded by open Web technologies, interact with a wide range of systems and leverage from the existing Web infrastructure, such as proxies and caches.
Keywords: Internet; SCADA systems; cellular manufacturing; control engineering computing; flexible manufacturing systems; open systems; process control; production engineering computing; software architecture; Internet technologies; REST; Web-integrated SCADA applications; caches; didactic flexible manufacturing cell; field devices; industry services; information gathering; information integration; interoperability; open Web technologies; process control; process monitoring; proxies; public utility services; representational state transfer architectural style; resource oriented architecture; supervisory control-and-data acquisition systems; Computer architecture; Protocols; SCADA systems; Scalability; Servers; Service-oriented architecture; Industry 4.0; M2M; REST; ROA; SCADA; WEB (ID#: 15-7199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160563&isnumber=7160536
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.