Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research, presented or published within the past year, on the listed topics. Some entries update work presented in previous years; others address new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#: 15-7670)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modifications to specific citations via email to news@scienceofsecurity.net. Please include the ID# of the specific citation in your correspondence.
Actuator Security 2015
Cyber-physical system security requires secure sensors and actuators. The research works cited here address the hard problems of human behavior, resiliency, metrics, and composability for actuator security and were presented or published in 2015.
Unger, S.; Timmermann, D., “DPWSec: Devices Profile for Web Services Security,” in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1–6, 7–9 April 2015. doi:10.1109/ISSNIP.2015.7106961
Abstract: As cyber-physical systems (CPS) build a foundation for visions such as the Internet of Things (IoT) or Ambient Assisted Living (AAL), their communication security is crucial so they cannot be abused for invading our privacy and endangering our safety. In the past years many communication technologies have been introduced for critically resource-constrained devices such as simple sensors and actuators as found in CPS. However, many do not consider security at all or in a way that is not suitable for CPS. Also, the proposed solutions are not interoperable although this is considered a key factor for market acceptance. Instead of proposing yet another security scheme, we looked for an existing, time-proven solution that is widely accepted in a closely related domain as an interoperable security framework for resource-constrained devices. The candidate of our choice is the Web Services Security specification suite. We analysed its core concepts and isolated the parts suitable and necessary for embedded systems. In this paper we describe the methodology we developed and applied to derive the Devices Profile for Web Services Security (DPWSec). We discuss our findings by presenting the resulting architecture for message level security, authentication and authorization and the profile we developed as a subset of the original specifications. We demonstrate the feasibility of our results by discussing the proof-of-concept implementation of the developed profile and the security architecture.
Keywords: Internet; Internet of Things; Web services; ambient intelligence; assisted living; security of data; AAL; CPS; DPWSec; IoT; ambient assisted living; communication security; cyber-physical system; devices profile for Web services security; interoperable security framework; message level security; resource-constrained devices; Authentication; Authorization; Cryptography; Interoperability; Applied Cryptography; Cyber-Physical Systems (CPS); DPWS; Intelligent Environments; Internet of Things (IoT); Usability (ID#: 15-7608)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106961&isnumber=7106892
Kerlin, S.D.; Straub, J.; Huhn, J.; Lewis, A., “Small Satellite Communications Security and Student Learning in the Development of Ground Station Software,” in Aerospace Conference, 2015 IEEE, vol., no., pp. 1–11, 7–14 March 2015. doi:10.1109/AERO.2015.7119177
Abstract: Communications security is gaining importance as small spacecraft include actuator capabilities (i.e., propulsion), payloads which could be misappropriated (i.e., high resolution cameras), and research missions with high value/cost. However, security is limited by capability, interoperability and regulation. Additionally, as the small satellite community becomes more mainstream and diverse, the lack of cheap, limited-to-no configuration, pluggable security modules for small satellites also presents a limit for user adoption of security. This paper discusses a prospective approach for incorporating robust security into a student-developed ground station created at the University of North Dakota as part of a Computer Science Department senior design project. This paper includes: A discussion of hardware and software security standards applicable to small spacecraft (including those historically used in the space domain and standards and practices from nonspace activities that can be applied). Analysis directed at how those existing standards can be modified or implemented to best serve the emerging small satellite user-base. A discussion of the impact of Federal Communications Commission (FCC) regulations (for amateur, experimental and commercial licensees) on the security approaches that can be utilized. This will include identification of key roadblocks and how they may be bridged by clever development. Consideration of the impact of export control on security standards and the ability to have distributed (beyond U.S. border) data collection and command transmission. This reflects the reality that an open and universal standard must be used, resulting in a related discussion of how that affects performance and complexity. A review of the student work on ground station development, including its pedagogical goals, results and an overview of what students learned in the process. A discussion of the broader impact of student generated research and the benefits to the research community at large is also included. An overview of the ground station design produced by the student team. This includes an analysis and explanation of the design choices as they relate to the aforementioned topics. A strategy for incorporating security best practices into this ground station design in a manner that is largely transparent to the user and can be enabled/disabled as needed, based on mission characteristics. The potential for pluggable modules and interfaces that can be utilized easily by non-technical users who are implementing small satellite missions is also discussed.
Keywords: aerospace engineering; computer aided instruction; open systems; satellite communication; security of data; space vehicles; student experiments; Computer Science Department; FCC; Federal Communications Commission; University of North Dakota; ground station software; interoperability; pluggable modules; regulation; small satellite communications security; spacecraft; student learning; Biographies; Cryptography; Databases; Robustness; Satellites; Standards (ID#: 15-7609)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119177&isnumber=7118873
Reddy, Y.B., “Security and Design Challenges in Cyber-Physical Systems,” in Information Technology - New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 200–205, 13–15 April 2015. doi:10.1109/ITNG.2015.38
Abstract: Cyber-Physical Systems require the development of security models at the cloud interacting with physical systems. The current research discusses security requirements for future engineering systems, including the state of security in cloud cyber-physical systems and a security model for sensor networks in relation to cyber-physical systems. In addition, we develop a model to transfer packets in a secure environment and provide simulations to detect the malicious node in the network.
Keywords: cloud computing; security of data; cloud cyber-physical systems; design challenges; future engineering systems; malicious node detection; security model; security models; security requirements; sensor networks; Mathematical model; Medical services; Real-time systems; Reliability; Robot sensing systems; Security; Vehicles; Cyber-physical system; actuators; environment; neighbor node; sensors; trust-based systems (ID#: 15-7610)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113473&isnumber=7113432
Hale, M.L.; Ellis, D.; Gamble, R.; Waler, C.; Lin, J., “SecuWear: An Open Source, Multi-Component Hardware/Software Platform for Exploring Wearable Security,” in Mobile Services (MS), 2015 IEEE International Conference on, vol., no., pp. 97–104, June 27 2015–July 2 2015. doi:10.1109/MobServ.2015.23
Abstract: Wearables are the next big development in the mobile internet of things. Operating in a body area network around a smartphone user they serve a variety of commercial, medical, and personal uses. Whether used for fitness tracking, mobile health monitoring, or as remote controllers, wearable devices can include sensors that collect a variety of data and actuators that provide haptic feedback and unique user interfaces for controlling software and hardware. Wearables are typically wireless and use Bluetooth LE (low energy) to transmit data to a waiting smartphone app. Frequently, apps forward this data onward to online web servers for tracking. Security and privacy concerns abound when wearables capture sensitive data or provide critical functionality. This paper develops a platform, called SecuWear, for conducting wearable security research, collecting data, and identifying vulnerabilities in hardware and software. SecuWear combines open source technologies to enable researchers to rapidly prototype security vulnerability test cases, evaluate them on actual hardware, and analyze the results to understand how best to mitigate problems. The paper includes two types of evaluation in the form of a comparative analysis and empirical study. The results reveal how several passive observation attacks present themselves in wearable applications and how the SecuWear platform can capture the necessary information needed to identify and combat such attacks.
Keywords: Bluetooth; Internet of Things; body area networks; mobile computing; security of data; Bluetooth LE; SecuWear platform; body area network; mobile Internet of Things; online Web servers; open source multicomponent hardware-software platform; security vulnerability test cases; smartphone user; wearable security; Biomedical monitoring; Bluetooth; Hardware; Mobile communication; Security; Sensors; Trade agreements; Bluetooth low energy; internet of things; man-in-the-middle; security; vulnerability discovery; wearables (ID#: 15-7611)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226677&isnumber=7226653
Perešíni, Ondrej; Krajčovič, Tibor, “Internet Controlled Embedded System for Intelligent Sensors and Actuators Operation,” in Applied Electronics (AE), 2015 International Conference on, vol., no., pp. 185–188, 8–9 Sept. 2015. doi: (not provided)
Abstract: Devices compliant with the Internet of Things concept are currently attracting increased interest among users and numerous manufacturers. Our idea is to introduce an intelligent household control system respecting this trend. The primary focus of this work is to propose a new solution for realizing intelligent house actuators which is less expensive, more robust and more secure against intrusion. The heart of the system consists of intelligent modules which are modular, autonomous, decentralized, cheap and easily extensible, with support for encrypted network communication. The proposed solution is open and therefore ready for future improvements and application in the field of the Internet of Things.
Keywords: Actuators; Hardware; Protocols; Security; Sensors; Standards; User interfaces; Internet of Things; actuators; decentralized network; embedded hardware; intelligent household (ID#: 15-7612)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301084&isnumber=7301036
Ariş, A.; Oktuğ, S.F.; Yalçin, S.B.Ö., “Internet-of-Things Security: Denial of Service Attacks,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 903–906, 16–19 May 2015. doi:10.1109/SIU.2015.7129976
Abstract: Internet of Things (IoT) is a network of sensors, actuators, mobile and wearable devices, simply things that have processing and communication modules and can connect to the Internet. In a few years' time, billions of such things will start serving in many fields within the concept of IoT. The self-configuration, autonomous device addition, Internet connection and resource limitation features of IoT make it highly prone to attacks. Denial of Service (DoS) attacks, which have been targeting communication networks for years, will be among the most dangerous threats to IoT networks. This study aims to analyze and classify the DoS attacks that may target IoT environments. In addition, systems that try to detect and mitigate DoS attacks against the IoT are evaluated.
Keywords: Internet; Internet of Things; actuators; computer network security; mobile computing; sensors; wearable computers; DoS attacks; Internet connection; Internet-of-things security; IoT; actuator; autonomous device addition; communication modules; denial of service attack; mobile device; processing modules; resource limitation; self-configuration; sensor; wearable device; Ad hoc networks; Computer crime; IEEE 802.15 Standards; Internet of things; Wireless communication; Wireless sensor networks; DDoS; DoS; network security (ID#: 15-7613)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129976&isnumber=7129794
Jacobsen, Rune Hylsberg; Mikkelsen, Soren Aagaard; Rasmussen, Niels Holm, “Towards the Use of Pairing-Based Cryptography for Resource-Constrained Home Area Networks,” in Digital System Design (DSD), 2015 Euromicro Conference on, vol., no., pp. 233–240, 26–28 Aug. 2015. doi:10.1109/DSD.2015.73
Abstract: In the prevailing smart grid, the Home Area Network (HAN) will become a critical infrastructure component at the consumer premises. The HAN provides the electricity infrastructure with a bi-directional communication infrastructure that allows monitoring and control of electrical appliances. HANs are typically equipped with wireless sensors and actuators, built from resource-constrained hardware devices, that communicate by using open standard protocols. This raises concerns on the security of these networked systems. Because of this, securing a HAN to a proper degree becomes an increasingly important task. In this paper, a security model, where an adversary may exploit the system both during HAN setup as well as during operations of the network, is considered. We propose a scheme for secure bootstrapping of wireless HAN devices based on Identity-Based Cryptography (IBC). The scheme minimizes the number of exchanged messages needed to establish a session key between HAN devices. The feasibility of the approach is demonstrated from a series of prototype experiments.
Keywords: Authentication; Elliptic curve cryptography; Logic gates; Prototypes; constrained devices; home area network; identity-based cryptography; network bootstrap; pairing-based cryptography; security (ID#: 15-7614)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302275&isnumber=7302233
Rom, Werner; Priller, Peter; Koivusaari, Jani; Komi, Maarjana; Robles, Ramiro; Dominguez, Luis; Rivilla, Javier; Driel, Willem van, “DEWI -- Wirelessly into the Future,” in Digital System Design (DSD), 2015 Euromicro Conference on, vol., no., pp. 730–739, 26–28 Aug. 2015. doi:10.1109/DSD.2015.114
Abstract: The ARTEMIS project DEWI (“Dependable Embedded Wireless Infrastructure”) focuses on the area of wireless sensor/actuator networks and wireless communication. With its four industrial domains (Aeronautics, Automotive, Rail, and Building) and 21 clearly industry-driven use cases/applications, DEWI will provide and demonstrate key solutions for wireless seamless connectivity and interoperability in smart cities and infrastructures, by considering the everyday physical environments of citizens in buildings, cars, trains and airplanes. It will add clear cross-domain benefits in terms of re-usability of technological building bricks and architecture, processes and methods. DEWI is currently one of the largest funded European R&D projects, comprising 58 renowned industrial and research partners from 11 European countries.
(For further details see www.dewiproject.eu)
Keywords: Automotive engineering; Buildings; Communication system security; Standards; Vehicles; Wireless communication; Wireless sensor networks; Actuator Network; Aeronautics; Automotive; Building; Certification; Communication; Communication Bubble; Cross-Domain; Demonstrator; Dependability; Interoperability; Rail; Safety; Security; Sensor Network; Standardization; Wireless (ID#: 15-7615)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302350&isnumber=7302233
Srivastava, P.; Garg, N., “Secure and Optimized Data Storage for IoT Through Cloud Framework,” in Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 720–723, 15–16 May 2015. doi:10.1109/CCAA.2015.7148470
Abstract: Internet of Things (IoT) is the future. With the increasing popularity of the internet, connectivity in routine devices will soon be common practice. We are therefore writing this paper to encourage IoT adoption using cloud computing features. A basic setback of IoT is the management of the huge quantity of data it produces. In this paper, we suggest a framework with several data compression techniques to store this large amount of data on the cloud while occupying less space, and using AES encryption techniques we also improve the security of this data. The framework also shows the interaction of data with reporting and analytic tools through the cloud. We conclude the paper with some future scopes and possible enhancements of our ideas.
Keywords: Internet of Things; cloud computing; cryptography; data compression; optimisation; storage management; AES encryption technique; IoT; cloud computing feature; data compression technique; data storage optimization; data storage security; Cloud computing; Encryption; Image coding; Internet of things; Sensors; AES; actuators; compression; encryption; sensors; trigger (ID#: 15-7616)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148470&isnumber=7148334
Zhibo Pang; Yuxin Cheng; Johansson, Morgan E.; Bag, Gargi, “Work-in-Progress: Industry-Friendly and Native-IP Wireless Communications for Building Automation,” in Industrial Networks and Intelligent Systems (INISCom), 2015 1st International Conference on, vol., no., pp. 163–167, 2–4 March 2015. doi:10.4108/icst.iniscom.2015.258563
Abstract: Wireless communication technologies for building automation (BA) systems are evolving towards native IP connectivity. More Industry Friendly and Native-IP Wireless Building Automation (IF-NIP WiBA) is needed to address the concerns of the entire value chain of the BA industry, including security, reliability, latency, power consumption, engineering process, and independency. In this paper, a hybrid architecture which can seamlessly support both Cloud-Based Mode and Stand-Alone Mode is introduced based on 6LoWPAN WSAN (wireless sensor and actuator network) technology and verified by a prototype minimal system. The preliminary experimental results suggest that: 1) both the WSAN and Cloud communications can meet the requirements of non-real-time BA applications; 2) the reliability and latency of the WSAN communications are not sufficient for soft real-time applications, but such requirements could be met in the near future with sufficient optimization; 3) the reliability of the Cloud is sufficient, but its latency is far from the requirement of soft real-time applications. Future work should focus on optimizing latency and power consumption in the WSAN, designing an industry-friendly engineering process, and investigating security mechanisms.
Keywords: IP networks; building management systems; optimisation; telecommunication network reliability; wireless sensor networks; work in progress; 6LoWPAN WSAN; BA systems; IF-NIP WiBA; building automation; cloud-based mode; industry-friendly wireless communications; native IP connectivity; native-IP wireless communications; optimization; reliability; stand-alone mode; wireless sensor and actuator networks; work-in-progress; Actuators; Communication system security; Logic gates; Optimization; Reliability; Wireless communication; Wireless sensor networks; 6LoWPAN; Native IP Connectivity (NIP); Wireless Building Automation (WiBA); Wireless Sensor and Actuator Networks (WSAN) (ID#: 15-7617)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157839&isnumber=7157808
Papadopoulos, G., “Challenges in the Design and Implementation of Wireless Sensor Networks: A Holistic Approach-Development and Planning Tools, Middleware, Power Efficiency, Interoperability,” in Embedded Computing (MECO), 2015 4th Mediterranean Conference on, pp. 1–3, 14–18 June 2015. doi:10.1109/MECO.2015.7181857
Abstract: Wireless Sensor Networks (WSNs) constitute a networking area with promising impact in the environment, health, security, industrial applications and more. Each of these presents different requirements regarding system performance and QoS, and involves a variety of mechanisms such as routing and MAC protocols, algorithms, scheduling policies, security and OS, all of which reside over the HW: the sensors, actuators and the radio Tx/Rx. Furthermore, they encompass special characteristics, such as constrained energy, CPU and memory resources, and multi-hop communication, which raise the level of specialized knowledge required. Although the status of WSNs is nearing the stage of maturity and wide-spread use, the issue of their sustainability hinges upon the implementation of some features of paramount importance: low power consumption to achieve long operational life-time for battery-powered unattended WSN nodes; joint optimization of connectivity and energy efficiency leading to best-effort utilization of constrained radios and minimum energy cost; self-calibration and self-healing to recover from failures and errors to which WSNs are prone; efficient data aggregation lessening the traffic load in constrained WSNs; programmable and reconfigurable stations allowing for long life-cycle development; system security enabling protection of data and system operation; short development time making the time-to-market process more efficient; and simple installation and maintenance procedures for wider acceptance. Despite the considerable research and important advances in WSNs, large scale application of the technology is still hindered by technical, complexity and cost impediments. Ongoing R&D is addressing these shortcomings by focusing on energy harvesting, middleware, network intelligence, standardization, network reliability, adaptability and scalability.
However, for efficient WSN development, deployment, testing, and maintenance, a holistic unified approach is necessary which will address the above WSN challenges by developing an integrated platform for smart environments with built-in user friendliness, practicality and efficiency. This platform will enable the user to evaluate his design by identifying critical features and application requirements, to verify by adopting design indicators and to ensure ease of development and long life cycle by incorporating flexibility, expandability and reusability. These design requirements can be accomplished to a significant extent via an integration tool that provides a multiple level framework of functionality composition and adaptation for a complex WSN environment consisting of heterogeneous platform technologies, establishing a software infrastructure which couples the different views and engineering disciplines involved in the development of such a complex system, by means of the accurate definition of all necessary rules and the design of the ‘glue-logic’ which will guarantee the correctness of composition of the various building blocks. Furthermore, to attain an enhanced efficiency, the design/development tool must facilitate consistency control as well as evaluate the selections made by the user and, based on specific criteria, provide feedback on errors concerning consistency and compatibility as well as warnings on potentially less optimal user selections. Finally, the WSN planning tool will provide answers to fundamental issues such as the number of nodes needed to meet overall system objectives, the deployment of these nodes to optimize network performance and the adjustment of network topology and sensor node placement in case of changes in data sources and network malfunctioning.
Keywords: computer network reliability; computer network security; data protection; energy conservation; energy harvesting; middleware; open systems; optimisation; quality of service; sensor placement; telecommunication network planning; telecommunication network topology; telecommunication power management; telecommunication traffic; time to market; wireless sensor networks; QoS; WSN reliability; constrained radio best-effort utilization; data aggregation; data security enabling protection; design-development tool; energy efficiency; failure recovery; heterogeneous platform technology; holistic unified approach; interoperability; network intelligence; network topology adjustment; power consumption; power efficiency; sensor node placement; time-to-market process; traffic load; wireless sensor network planning tools; Electrical engineering; Embedded computing; Europe; Security; Wireless sensor networks (ID#: 15-7618)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181857&isnumber=7181853
Youngchoon Park, “Connected Smart Buildings, a New Way to Interact with Buildings,” in Cloud Engineering (IC2E), 2015 IEEE International Conference on, vol., no., pp. 5–5, 9–13 March 2015. doi:10.1109/IC2E.2015.57
Abstract: Summary form only given. Devices, people, information and software applications rarely live in isolation in modern building management. For example, networked sensors that monitor the performance of a chiller are common, and collected data are delivered to building automation systems to optimize energy use. Detected possible failures are also handed to facility management staff for repairs. Physical and cyber security services have to be incorporated to prevent improper access not only to HVAC (Heating, Ventilation, Air Conditioning) equipment but also to control devices. Harmonizing these connected sensors, control devices, equipment and people is key to providing more comfortable, safe and sustainable buildings. Nowadays, devices with embedded intelligence and communication capabilities can interact with people directly. Traditionally, a few selected people (e.g., facility managers in the building industry) have access and program the device with a fixed operating schedule, while a device has very limited connectivity to its operating environment and context. Modern connected devices will learn from and interact with users and other connected things. This would be a fundamental shift in communication from unidirectional to bi-directional. A manufacturer will learn how their products and features are being accessed and utilized. An end user, or a device on behalf of a user, can interact and communicate with a service provider or a manufacturer without going through a distributor, on an almost real-time basis. This will require different business strategies and product development behaviors to serve connected customers’ demands. Connected things produce an enormous amount of data, which raises many questions and technical challenges in data management, analysis and associated services. In this talk, we will brief some of the challenges that we have encountered in developing connected building solutions and services. More specifically, we present (1) semantic interoperability requirements among smart sensors, actuators, lighting, security and control and business applications, (2) engineering challenges in managing massively large time-sensitive multi-media data in a cloud at global scale, and (3) security and privacy concerns.
Keywords: HVAC; building management systems; intelligent sensors; actuators; building automation systems; building management; business strategy; chiller performance; connected smart buildings; control devices; cyber security services; data management; facility management staffs; heating-ventilation-air conditioning equipment; lighting; networked sensors; product development behaviors; service provider; smart sensors; time sensitive multimedia data; Building automation; Business; Conferences; Intelligent sensors; Security; Building Management; Cloud; Internet of Things (ID#: 15-7619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092892&isnumber=7092808
Chen Wen-lin; Cao Rui-min; Hao Li-na; Wang Qing, “Researches on Robot System Architecture in CPS,” in Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, vol., no., pp. 603–607, 8–12 June 2015. doi:10.1109/CYBER.2015.7288009
Abstract: After introducing the seven kinds of robot architectures commonly used at home and abroad and analyzing their strengths, weaknesses and applications, this paper presents in detail the CPS-R architecture, which is applied inside robots to solve the problems of unknown dynamic environments and deeply compliant operation that current robot architectures cannot meet. It describes the function and data flow of each component in CPS-R, such as the external real-world environment, sensors, actuators and device interfaces in the physical layer, and message generation and task allocation, network security authentication and prioritization, and information collection, analysis, decision-making and sharing in the information layer. Finally, the paper analyzes the challenges of the CPS-R architecture, such as feasibility, real-time performance, security, reliability, intelligence, and hardware/software standardization of information sharing.
Keywords: control engineering computing; reliability; robots; security of data; CPS-R architecture; actuators; component data flow; cyber-physical system; decision-making; deeply compliant operation; device interfaces; hardware-software standardization; information collection; information layer; intelligence; message generation; network security authentication; physical layer; prioritization; reliability; robot system architecture; sensors; task allocation; unknown dynamic environment; Actuators; Computer architecture; Robot kinematics; Robot sensing systems; Service robots; CPS-R; challenge; robot; system architecture (ID#: 15-7620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288009&isnumber=7287893
Januário, F.; Santos, A.; Palma, L.; Cardoso, A.; Gil, P., “A Distributed Multi-Agent Approach for Resilient Supervision over an IPv6 WSAN Infrastructure,” in Industrial Technology (ICIT), 2015 IEEE International Conference on, vol., no., pp. 1802–1807, 17–19 March 2015. doi:10.1109/ICIT.2015.7125358
Abstract: Wireless sensor and actuator networks have become an important area of research. They provide flexibility and low operational and maintenance costs, and they are inherently scalable. In the realm of the Internet of Things, the majority of devices are able to communicate with one another, and in some cases they can be deployed with an IP address. This feature is undoubtedly very beneficial in wireless sensor and actuator network applications, such as monitoring and control systems. However, this kind of communication infrastructure is rather challenging, as it can compromise the overall system performance due to several factors, namely outliers, intermittent communication breakdown, or security issues. In order to improve the overall resilience of the system, this work proposes a distributed hierarchical multi-agent architecture implemented over an IPv6 communication infrastructure. The Contiki operating system and the RPL routing protocol were used together to provide IPv6-based communication between nodes and an external network. Experimental results collected from a laboratory IPv6-based WSAN test-bed show the relevance and benefits of the proposed methodology for coping with communication loss between nodes and the server.
Keywords: Internet of Things multi-agent systems; routing protocols; wireless sensor networks; Contiki operating system; IP address; IPv6 WSAN infrastructure; IPv6 communication infrastructure; Internet of Things; RPL routing protocol; distributed hierarchical multiagent architecture; distributed multiagent approach; external network; intermittent communication; resilient supervision; wireless sensor and actuator networks; Actuators; Electric breakdown; Monitoring; Peer-to-peer computing; Routing protocols; Security (ID#: 15-7621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125358&isnumber=7125066
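As a rough illustration of one supervisory duty such an architecture must handle, the sketch below flags WSAN nodes whose heartbeats stop arriving; the node addresses, timeout value, and the heartbeat scheme itself are our assumptions for illustration, not details from the paper:

```python
class NodeMonitor:
    """Minimal sketch of a supervisory agent detecting intermittent
    communication loss: a node is declared unreachable when no heartbeat
    has arrived within `timeout` seconds."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}          # node address -> last heartbeat time

    def heartbeat(self, node, now):
        self.last_seen[node] = now   # record most recent contact

    def unreachable(self, now):
        # nodes whose last heartbeat is older than the timeout window
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

mon = NodeMonitor(timeout=5.0)
mon.heartbeat("aaaa::1", now=0.0)
mon.heartbeat("aaaa::2", now=7.0)
print(mon.unreachable(now=10.0))   # -> ['aaaa::1']
```

A real deployment would run this check in the server-side agent and trigger the resilience mechanisms the paper evaluates when a node drops out.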
Moga, D.; Stroia, N.; Petreus, D.; Moga, R.; Munteanu, R.A., “Embedded Platform for Web-Based Monitoring and Control of a Smart Home,” in Environment and Electrical Engineering (EEEIC), 2015 IEEE 15th International Conference on, vol., no., pp. 1256–1261, 10–13 June 2015. doi:10.1109/EEEIC.2015.7165349
Abstract: This paper presents the architecture of a low cost embedded platform for Web-based monitoring and control of a smart home. The platform consists of a distributed sensing and control network, devices for access control and a residential gateway with touch-screen display offering an easy to use interface to the user as well as providing remote, Web based access. The key issues related to the design of the proposed platform were addressed: the problem of security, the robustness of the distributed control network to faults and a low cost hardware implementation.
Keywords: Internet; authorisation; computerised monitoring; embedded systems; home automation; touch sensitive screens; user interfaces; Web based access; Web-based monitoring; distributed control network; distributed sensing; embedded platform; residential gateway; smart home; touch-screen display; user interface; Actuators; Logic gates; Monitoring; Protocols; Sensors; Wireless communication; Wireless sensor networks; fault tolerance; smart homes (ID#: 15-7622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165349&isnumber=7165173
Yumei Li; Voos, Holger; Pan, Lin; Darouach, Mohammed; Changchun Hua, “Stochastic Cyber-Attacks Estimation for Nonlinear Control Systems Based on Robust H∞ Filtering Technique,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 5590–5595, 23–25 May 2015. doi:10.1109/CCDC.2015.7161795
Abstract: Based on the robust H∞ filtering technique, this paper presents the cyber-attack estimation problem for nonlinear control systems under stochastic cyber-attacks and disturbances. A nonlinear H∞ filter is designed that maximizes sensitivity to the cyber-attacks and minimizes the effect of the disturbances. The nonlinear filter is required to be robust to the disturbances while the residual retains as much sensitivity to the attacks as possible. Applying linear matrix inequalities (LMIs), sufficient conditions guaranteeing the H∞ filtering performance are obtained. Simulation results demonstrate that the designed nonlinear filter efficiently solves the robust estimation problem for stochastic cyber-attacks.
Keywords: H∞ filters; estimation theory; linear matrix inequalities; nonlinear control systems; nonlinear filters; robust control; security of data; stochastic processes; LMI; linear matrix inequality; nonlinear control system; nonlinear filter design; robust H∞ filtering technique; stochastic cyber-attack estimation; Actuators; Estimation; Noise; Robustness; Sensitivity; Stochastic processes; H∞ filter; stochastic cyber-attacks; stochastic nonlinear system (ID#: 15-7623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161795&isnumber=7161655
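In generic terms (our notation, not necessarily the paper's), the residual-based design the abstract describes can be written as:

```latex
\text{Plant: } \dot{x} = f(x) + B_w w + B_a a, \qquad y = h(x) + D_w w + D_a a \\[4pt]
\text{Filter: } \dot{\hat{x}} = f(\hat{x}) + L\,\big(y - h(\hat{x})\big), \qquad r = y - h(\hat{x}) \\[4pt]
\text{Goal: } \|T_{rw}\|_\infty < \gamma \ \text{(reject disturbance } w\text{)}, \qquad \|T_{ra}\|_\infty \ \text{kept large (stay sensitive to attack } a\text{)}
```

Here $T_{rw}$ and $T_{ra}$ denote the maps from disturbance and attack to the residual $r$; the paper's contribution is sufficient LMI conditions under which a feasible filter gain $L$ exists.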
Yan Zhang; Larsson, Mats; Pal, Bikash; Thornhill, Nina F., “Simulation Approach to Reliability Analysis of WAMPAC System,” in Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, pp. 1–5, 18–20 Feb. 2015. doi:10.1109/ISGT.2015.7131814
Abstract: Wide area monitoring, protection and control (WAMPAC) plays a critical role in smart grid development. Since WAMPAC frequently has the task of executing control and protection actions necessary for secure operation of power systems, its reliability is essential. This paper proposes a novel approach to the reliability analysis of WAMPAC systems. WAMPAC system functions are first divided into four subsystems: the measured inputs, the communication, the actuator, and the analytic execution subsystems. The reliability indices of the subsystems are then computed using a Monte Carlo approach. A sensitivity analysis is also described to illustrate the influence of different components on the system reliability.
Keywords: Monte Carlo methods; power system control; power system measurement; power system protection; power system reliability; power system security; smart power grids; Monte Carlo approach; WAMPAC system; actuator; analytic execution subsystem; reliability analysis; sensitivity analysis; smart grid development; wide area monitoring protection and control; Actuators; Phasor measurement units; Power system reliability; Reliability; State estimation; Substations; Monte Carlo methods; wide area measurements; wide area networks (ID#: 15-7624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131814&isnumber=7131775
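The series structure the abstract implies (the WAMPAC function succeeds only if all four subsystems succeed) can be sketched with a toy Monte Carlo run; the subsystem success probabilities below are invented for illustration, not taken from the paper:

```python
import random

def wampac_availability(p_sub, trials=200_000, seed=1):
    """Monte Carlo estimate of the success probability of a series system:
    one trial succeeds only if every subsystem -- measurement, communication,
    analytic execution, actuation -- succeeds."""
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        if all(random.random() < p for p in p_sub):
            ok += 1
    return ok / trials

# series structure: overall reliability is roughly the product of the four
p = [0.999, 0.995, 0.998, 0.997]
print(round(wampac_availability(p), 3))   # close to 0.999*0.995*0.998*0.997 ≈ 0.989
```

The same simulation loop extends naturally to the paper's sensitivity analysis: perturb one subsystem's probability and observe the change in the estimate.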
Maia, M.E.F.; Andrade, R.M.d.C., “System Support for Self-Adaptive Cyber-Physical Systems,” in Distributed Computing in Sensor Systems (DCOSS), 2015 International Conference on, vol., no., pp. 214–215, 10–12 June 2015. doi:10.1109/DCOSS.2015.33
Abstract: As the number of interacting devices and the complexity of cyber-physical systems increase, self-adaptation is a natural solution to the challenges faced by software developers. To provide a systematic and unified solution to support the development and execution of cyber-physical systems, this doctoral thesis proposes the creation of an environment that offers mechanisms to facilitate technology-independent communication and uncoupled, interoperable coordination between the interacting entities of the system, as well as the flexible and adaptable execution of the functionalities specified for each application. The outcome is a set of modules that help developers face the challenges of cyber-physical systems.
Keywords: security of data; adaptable execution; doctoral thesis; flexible execution; interacting devices; self-adaptive cyber-physical systems; software developers; system support; technology-independent communication; uncoupled interoperable coordination; Actuators; Computer architecture; Context; Medical services; Middleware; Cyber-Physical Systems; Self-Adaptation (ID#: 15-7625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165045&isnumber=7164869
Kiss, István; Genge, Bela; Haller, Piroska; Sebestyen, Gheorghe, “A Framework for Testing Stealthy Attacks in Energy Grids,” in Intelligent Computer Communication and Processing (ICCP), 2015 IEEE International Conference on, vol., no., pp. 553–560, 3–5 Sept. 2015. doi:10.1109/ICCP.2015.7312718
Abstract: The progressive integration of traditional Information and Communication Technologies (ICT) hardware and software into the supervisory control of modern Power Grids (PG) has given birth to a unique technological ecosystem. Modern ICT handles a wide variety of advantageous services in PG, but in turn exposes PG to significant cyber threats. To ensure security, PG use various anomaly detection modules to detect the malicious effects of cyber attacks. In many reported cases the newly appeared targeted cyber-physical attacks can remain stealthy even in the presence of anomaly detection systems. In this paper we present a framework for elaborating stealthy attacks against the critical infrastructure of power grids. Using the proposed framework, experts can verify the effectiveness of the applied anomaly detection systems (ADS) in either real or simulated environments. The novelty of the technique lies in the fact that the developed “smart” power grid cyber attack (SPGCA) first reveals the devices which can be compromised while causing only a limited effect observed by ADS and PG operators. Compromising low-impact devices first drives the PG to a more sensitive and near-unstable state, which leads to high damage when the attacker finally compromises high-impact devices, e.g., breaking high-demand power lines to cause a blackout. The presented technique should be used to strengthen the deployment of ADS and to define various security zones to defend PG against such intelligent cyber attacks. Experimental results based on the IEEE 14-bus electricity grid model demonstrate the effectiveness of the framework.
Keywords: Actuators; Phasor measurement units; Power grids; Process control; Sensors; Voltage measurement; Yttrium; Anomaly Detection; Control Variable; Cyber Attack; Impact Assessment; Observed Variable; Power Grid (ID#: 15-7626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312718&isnumber=7312586
Ozvural, G.; Kurt, G.K., “Advanced Approaches for Wireless Sensor Network Applications and Cloud Analytics,” in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1–5, 7–9 April 2015. doi:10.1109/ISSNIP.2015.7106979
Abstract: Although wireless sensor network applications are still at an early stage of development in industry, it is clear that they will become pervasive and that billions of embedded microcomputers will come online for the purposes of remote sensing, actuation, and information sharing. According to estimates, there will be 50 billion connected sensors or things by the year 2020. As we develop first-to-market wireless sensor-actuator network devices, we have the chance to identify design parameters, define the technical infrastructure, and work to meet scalable system requirements. Accordingly, the required research and development activities must involve several research directions, such as massive scaling, creating information and big data, robustness, security, privacy, and human-in-the-loop operation. In this study, wireless sensor networks and Internet of Things concepts are not only investigated theoretically, but the proposed system is also designed and implemented end-to-end. Low-rate wireless personal area network sensor nodes with random network coding capability are used for remote sensing and actuation. A low-throughput embedded IP gateway node is developed utilizing both random network coding on the low-rate wireless personal area network side and the low-overhead WebSocket protocol on the cloud communications side. A service-oriented design pattern is proposed for wireless sensor network cloud data analytics.
Keywords: IP networks; Internet of Things; cloud computing; data analysis; microcomputers; network coding; personal area networks; protocols; random codes; remote sensing; service-oriented architecture; wireless sensor networks; Internet of things concept; actuation; cloud communications side; cloud data analytics; design parameter identification; embedded microcomputer; information sharing; low throughput embedded IP gateway; overhead websocket protocol; random network coding capability; service-oriented design pattern; wireless personal area network sensor node; wireless sensor-actuator network device; IP networks; Logic gates; Network coding; Protocols; Relays; Wireless sensor networks; Zigbee (ID#: 15-7627)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106979&isnumber=7106892
Pöhls, H.C., “JSON Sensor Signatures (JSS): End-to-End Integrity Protection from Constrained Device to IoT Application,” in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, vol., no., pp. 306–312, 8–10 July 2015. doi:10.1109/IMIS.2015.48
Abstract: Integrity of sensor readings or actuator commands is of paramount importance for secure operation in the Internet of Things (IoT). Data from sensors might be stored, forwarded, and processed by many different intermediate systems. In this paper we apply digital signatures to achieve end-to-end message-level integrity for data in JSON. JSON has become very popular for representing data in the upper layers of the IoT domain. By signing JSON on the constrained device, we extend end-to-end integrity protection from the constrained device to any entity in the IoT data-processing chain. Only the JSON message’s contents, including the enveloped signature and the data, must be preserved. We reached our design goal to keep the original data accessible by legacy parsers; hence, signing does not break parsing. We implemented an elliptic-curve-based signature algorithm on a class 1 (following RFC 7228) constrained device (Zolertia Z1: 16-bit MSP430). Furthermore, we describe the challenges of end-to-end integrity when crossing from the IoT to the Web and applications.
Keywords: Internet of Things; Java; data integrity; digital signatures; public key cryptography; IoT data-processing chain; JSON sensor signatures; actuator commands; elliptic curve based signature algorithm; end-to-end integrity protection; end-to-end message level integrity; enveloped signature; legacy parsers; sensor readings integrity; Data structures; Digital signatures; Elliptic curve cryptography; NIST; Payloads; XML; ECDSA; IoT; JSON; integrity (ID#: 15-7628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284966&isnumber=7284886
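The enveloped-signature idea can be sketched in a few lines: the signature travels inside the JSON message, so a legacy parser still sees ordinary JSON, while a verifier can strip the signature field and check the rest. To keep the sketch to the standard library we substitute an HMAC for the paper's ECDSA; the key and field names are our inventions:

```python
import hashlib
import hmac
import json

KEY = b"demo-shared-key"   # stand-in for the device's signing key

def sign(message: dict) -> dict:
    # canonicalize, sign, then envelope the signature inside the message
    payload = json.dumps(message, sort_keys=True, separators=(",", ":"))
    tag = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {**message, "sig": tag}

def verify(envelope: dict) -> bool:
    # re-canonicalize everything except the signature field and compare
    body = {k: v for k, v in envelope.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    tag = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, envelope.get("sig", ""))

msg = sign({"sensor": "temp-01", "value": 21.5})
print(verify(msg))        # True
msg["value"] = 99.9       # tampering anywhere in the body...
print(verify(msg))        # ...breaks verification: False
```

Note that the envelope remains plain JSON throughout, which is the property the paper highlights: signing does not break parsing.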
Bhattacharyya, S.; Asada, H.H.; Triantafyllou, M.S., “Design Analysis of a Self Stabilizing Underwater Sub-Surface Inspection Robot Using Hydrodynamic Ground Effect,” in OCEANS 2015 - Genova, vol., no., pp. 1–7, 18–21 May 2015. doi:10.1109/OCEANS-Genova.2015.7271752
Abstract: In this paper we discuss a micro submersible robot which can move across an underwater target surface in close proximity (~1 mm) using the stabilizing effects of the boundary layer interaction with the external surface. Underwater surface and subsurface inspection is of immense value, whether in infrastructure maintenance such as oil pipelines and ship bottoms, or in security and defense, for recognizing and identifying target threats. For subsurface inspection using ultrasound testing (UT), reliable contact is generally needed; but the same can be achieved by positioning the UT transceiver at an odd multiple of a quarter wavelength away from the target. However, depending on the frequency of the UT, this could be on the order of millimeters, which is a challenging distance at which to stabilize the inspection robot by sole use of actuators. In this paper, we present the concept of a self-stabilizing underwater robot that exploits ground effects, and analyze how variation of the underbody design affects this stability. We make simple transitions from an ellipsoidal base to a rectangular one and extend further with the inclusion of protrusions on the base. The simple design translation explicitly demonstrates how flow dynamics and stability change with minimal design variations and what parameters are important for achieving desired behaviors. The results in this paper are based mostly on simulations, with the goal of using them to decide on the correct experiments required to validate the observed phenomena.
Keywords: autonomous underwater vehicles; design engineering; hydrodynamics; inspection; microrobots; robot dynamics; stability; ultrasonic applications; UT; UT transceiver; actuators; boundary layer interaction; flow dynamics; ground effects; hydrodynamic ground effect; infrastructure maintenance; microsubmersible robot; oil pipelines; quarter wavelength distance; reliable contact; self stabilizing underwater sub-surface inspection robot design analysis; ship bottoms; stabilizing effects; subsurface inspection; ultrasound testing; underbody design variation; underwater target surface; Inspection; Nose; Optical surface waves; Robot sensing systems; Sea surface; Torque (ID#: 15-7629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7271752&isnumber=7271237
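The quarter-wavelength standoff the abstract mentions is simple to compute: the transceiver sits at an odd multiple of λ/4, where λ = c/f. The sketch below assumes sound in water at roughly 1480 m/s (our assumption, not a figure from the paper):

```python
def quarter_wave_standoffs_mm(freq_hz, c=1480.0, n_odd=3):
    """First few odd quarter-wavelength standoff distances, in millimetres,
    for a UT transceiver at frequency freq_hz in a medium with sound speed c."""
    wavelength = c / freq_hz
    return [(2 * k + 1) * wavelength / 4 * 1000 for k in range(n_odd)]

# A 2 MHz transducer gives a wavelength of 0.74 mm, so the first standoffs are
# sub-millimetre -- the scale the abstract calls challenging to stabilize.
print([round(d, 3) for d in quarter_wave_standoffs_mm(2e6)])   # [0.185, 0.555, 0.925]
```

This is why the paper turns to hydrodynamic ground effect rather than actuators alone: holding position to a fraction of a millimetre by active control is hard.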
Artificial Neural Networks and Security 2015 |
Artificial neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming. What has attracted much interest in neural networks is the possibility of adaptive learning. Tasks such as function approximation, classification, pattern and sequence recognition, anomaly detection, filtering, clustering, blind source separation and compression, and control all have security implications. Cyber physical systems, resiliency, policy-based governance, and metrics are the Science of Security interests. The works cited here were presented in 2015.
Turčanik, M., “Packet Filtering by Artificial Neural Network,” in Military Technologies (ICMT), 2015 International Conference on, vol., no., pp. 1–4, 19–21 May 2015. doi:10.1109/MILTECHS.2015.7153739
Abstract: Efficient network monitoring is very important for achieving security in today’s networks. The ever-growing speed of links and the complexity of monitoring applications’ requests have exposed the limits of the most commonly used monitoring methods. The process of packet classification should be sped up as much as possible. As a possible approach, an artificial neural network (ANN) can be used for packet filtering. The performance of the artificial neural network was validated by a software implementation of the ANN for a given network configuration. The principles of artificial neural networks and the possibility of using an artificial neural network in a computer network are presented in the article. The created training sets represent the information the firewall uses to allow or block packets. The number of neurons in the artificial neural network and the number of hidden layers are optimized based on simulation results.
Keywords: computer networks; filters; neural nets; artificial neural network; computer network; network security; packet filtering; Artificial neural networks; Biological neural networks; Firewalls (computing); Information filtering; Neurons; Ports (Computers); firewall; packet filter (ID#: 15-7774)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153739&isnumber=7153638
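As a minimal sketch of the idea, a single neuron can learn an allow/drop rule from labelled packets; the features, toy data, and single-neuron model below are our illustration, whereas the paper trains a multi-layer network and tunes its size by simulation:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train one neuron on (feature_vector, label) pairs with the classic
    perceptron update rule; returns weights and bias."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# features: [is_tcp, dst_port_is_80, payload_large]; label 1 = pass, 0 = drop
data = [([1, 1, 0], 1), ([1, 0, 1], 0), ([0, 0, 1], 0), ([1, 1, 1], 1)]
w, b = train_perceptron(data)
passes = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b > 0
print(passes([1, 1, 0]), passes([0, 0, 1]))   # True False
```

The toy rule here is linearly separable, so one neuron suffices; real traffic is not, which is why the paper optimizes the number of neurons and hidden layers.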
Zhukov, A.; Tomin, N.; Sidorov, D.; Panasetsky, D.; Spirayev, V., “A Hybrid Artificial Neural Network for Voltage Security Evaluation in a Power System,” in Energy (IYCE), 2015 5th International Youth Conference on, vol., no., pp. 1–8, 27–30 May 2015. doi:10.1109/IYCE.2015.7180828
Abstract: A majority of recent large-scale blackouts have been the consequence of instabilities characterized by sudden voltage collapse phenomena. This paper presents a method for voltage instability monitoring in a power system using a hybrid artificial neural network which consists of a multilayer perceptron and a Kohonen neural network. The proposed method has the following two functions: the Kohonen network is used to classify the system operating state, and the Kohonen output patterns are used as inputs to train a multilayer perceptron for identification of alarm states that are dangerous for system security. The approach targets a blackout prevention scheme, given that the blackout signal is captured before it can collapse the power system. The proposed method is implemented in R and demonstrated on the modified IEEE One Area RTS-96 power system.
Keywords: multilayer perceptrons; power engineering computing; power system dynamic stability; power system measurement; power system reliability; power system security; self-organising feature maps; Kohonen neural network; blackout prevention scheme; hybrid artificial neural network; multilayer perceptron; power system; voltage collapse; voltage instability monitoring; voltage security evaluation; Mathematical model; Neural networks; Power system stability; Reactive power; Security; Stability criteria; artificial neural network; emergency; power security; voltage instability (ID#: 15-7775)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7180828&isnumber=7180726
Fatima, H.; Al-Turki, S.M.; Pradhan, S.K.; Dash, G.N., “Information Security: Artificial Immune Detectors in Neural Networks,” in Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, vol., no., pp. 1–6, 21–23 March 2015. doi:10.1109/WSWAN.2015.7210300
Abstract: In today’s competitive world, computer security is in enormous demand due to the tremendous number of network attacks. These threats significantly affect network architectures by gaining unauthorized access to computer networks. Information security therefore necessitates reducing such attacks. In this paper, a proposal is laid down for establishing and analyzing an artificial immune neural network for securing the network architecture.
Keywords: artificial immune systems; authorisation; neural net architecture; artificial immune detectors; computer security; information security; network attacks; neural network architectures; unauthorized computer network access; Computer architecture; Computer science; Detectors; Immune system; Intrusion detection; Neural networks; Artificial Immune Networks; Information Security; Neural Networks (ID#: 15-7776)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210300&isnumber=7209078
Onotu, P.; Day, D.; Rodrigues, M.A., “Accurate Shellcode Recognition from Network Traffic Data Using Artificial Neural Nets,” in Electrical and Computer Engineering (CCECE), 2015 IEEE 28th Canadian Conference on, vol., no., pp. 355–360, 3–6 May 2015. doi:10.1109/CCECE.2015.7129302
Abstract: This paper presents an approach to shellcode recognition directly from network traffic data using a multi-layer perceptron with the back-propagation learning algorithm. Using raw network data composed of a mixture of shellcode, image files, and DLL (Dynamic Link Library) files, our proposed design was able to classify the three types of data with high accuracy and high precision, with neither false positives nor false negatives. The proposed method comprises simple and fast pre-processing of raw data of a fixed length for each network data package and yields perfect results with 100% accuracy for the three data types considered. The research is significant in the context of network security and intrusion detection systems. Work is under way on real-time recognition and on fine-tuning the differentiation between various shellcodes.
Keywords: backpropagation; multilayer perceptrons; real-time systems; security of data; ANN; DLL-dynamic link library files; artificial neural nets; backpropagation learning algorithm; fine-tuning; image files; intrusion detection systems; multilayer perceptron; network data package; network security; network traffic data; raw network data; real time recognition; shellcode recognition; Algorithm design and analysis; Computers; Intrusion detection; Neural networks; Training; Transfer functions; Neural net; false positive; intrusion detection system; pattern recognition; shellcode (ID#: 15-7777)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129302&isnumber=7129089
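The fixed-length pre-processing step could, for instance, reduce each capture to a normalised byte-frequency vector before feeding the classifier; this featureization is our illustration, since the abstract does not specify the paper's exact features:

```python
def byte_histogram(data: bytes, length=64):
    """Truncate/pad raw network bytes to a fixed length, then return a
    256-bin byte-frequency vector normalised to sum to 1."""
    chunk = data[:length].ljust(length, b"\x00")   # fixed-length input
    hist = [0] * 256
    for b in chunk:
        hist[b] += 1
    return [h / length for h in hist]

# A NOP-sled-like fragment concentrates mass on byte 0x90:
features = byte_histogram(b"\x90" * 16 + b"\x31\xc0\x50\x68")
print(round(features[0x90], 3))   # 0.25 of the padded 64-byte window
```

Byte-level statistics like these are a common way to give a perceptron a fixed-size input regardless of packet length.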
D’Lima, N.; Mittal, J., “Password Authentication Using Keystroke Biometrics,” in Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, vol., no., pp. 1–6, 15–17 Jan. 2015. doi:10.1109/ICCICT.2015.7045681
Abstract: The majority of applications use a prompt for a username and password. Passwords are recommended to be unique, long, complex, alphanumeric, and non-repetitive. The very properties that make passwords secure may prove to be their point of weakness: the complexity of a password presents a challenge to the user, who may choose to write it down. This compromises the security of the password and takes away its advantage. An alternative security method is keystroke biometrics, an approach that uses a user’s natural typing pattern for authentication. This paper proposes a new method for reducing error rates and creating a robust technique. The new method makes use of multiple sensors to obtain information about a user. An artificial neural network is used to model a user’s behavior as well as to retrain the system. An alternate user verification mechanism is used in case a user is unable to match their typing pattern.
Keywords: authorisation; biometrics (access control); neural nets; pattern matching; artificial neural network; error rates; keystroke biometrics; password authentication; password security; robust security technique; typing pattern matching; user behavior; user natural typing pattern; user verification mechanism; Classification algorithms; Error analysis; Europe; Hardware; Monitoring; Support vector machines; Text recognition; Artificial Neural Networks; Authentication; Keystroke Biometrics; Password; Security (ID#: 15-7778)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045681&isnumber=7045627
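The timing features behind keystroke biometrics are dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). A toy matcher, with invented timings and a simple deviation threshold in place of the paper's neural network, might look like:

```python
def timing_features(events):
    """events: list of (key, press_time, release_time), in typed order.
    Returns dwell times followed by flight times."""
    dwell = [r - p for _, p, r in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell + flight

def matches(template, attempt, tol=0.05):
    # accept when mean absolute deviation from the enrolled template is small
    dev = sum(abs(a - b) for a, b in zip(template, attempt)) / len(template)
    return dev < tol

enrolled = timing_features([("p", 0.00, 0.08), ("w", 0.15, 0.22), ("d", 0.30, 0.37)])
attempt  = timing_features([("p", 0.00, 0.09), ("w", 0.16, 0.22), ("d", 0.31, 0.39)])
print(matches(enrolled, attempt))   # similar rhythm -> True
```

The paper replaces the fixed threshold with a trained and periodically retrained neural model of the user, which is what keeps error rates down as typing habits drift.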
Saabni, Raid, “Facial Expression Recognition Using Multi Radial Bases Function Networks and 2-D Gabor Filters,” in Digital Information Processing and Communications (ICDIPC), 2015 Fifth International Conference on, vol., no., pp. 225–230, 7–9 Oct. 2015. doi:10.1109/ICDIPC.2015.7323033
Abstract: Facial expression analysis and recognition have been researched since the 17th century. The foundational studies on facial expressions, which form the basis of today’s research, can be traced back a few centuries: a detailed note on the various expressions and movements of head muscles was given in 1649 by John Bulwer. Another important milestone in the study of facial expressions and human emotions is the work done by the psychologist Paul Ekman and his colleagues in the 1970s, which has had a significant influence on the development of modern automatic facial expression recognizers and led to the development of the comprehensive Facial Action Coding System (FACS), since then the de facto standard for facial expression recognition. Over the last decades, automatic facial expression analysis has become an active research area with potential applications in fields such as human-computer interfaces (HCI), image retrieval, security, and human emotion analysis. Facial expressions are extremely important in any human interaction; in addition to emotions, they also reflect other mental activities, social interaction, and physiological signals. In this paper, we propose an artificial neural network (ANN) with two hidden layers, based on multiple radial basis function networks (RBFNs), to recognize facial expressions. The ANN is trained on features extracted from images by applying multi-scale and multi-orientation Gabor filters. We considered both subject-independent and subject-dependent facial expression recognition, using the JAFFE and CK+ benchmarks to evaluate the proposed model.
Keywords: Clustering algorithms; Face; Face detection; Face recognition; Feature extraction; Radial basis function networks; Training; Artificial Neural Networks; Facial Expression; Gabor Filter; RBFN; Subject Independent/Dependent Emotion Recognition (ID#: 15-7779)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7323033&isnumber=7322996
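A 2-D Gabor filter of the kind used for feature extraction here is a Gaussian envelope modulated by a sinusoid; sweeping the orientation θ and wavelength λ gives the multi-orientation, multi-scale bank. A pure-Python kernel generator (parameter values are illustrative) looks like:

```python
import math

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times a cosine
    carrier, with orientation theta, wavelength lam, aspect ratio gamma."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + gamma**2 * yr * yr) / (2 * sigma**2))
            row.append(g * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel

k = gabor_kernel()
print(round(k[4][4], 3))   # centre of a 9x9 kernel: exp(0)*cos(0) = 1.0
```

Convolving a face image with a bank of such kernels at several θ and λ values yields the feature vectors the RBF networks are trained on.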
Neelam, Sahil; Sood, Sandeep; Mehmi, Sandeep; Dogra, Shikha, “Artificial Intelligence for Designing User Profiling System for Cloud Computing Security: Experiment,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, pp. 51–58, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164645
Abstract: In cloud computing security, the existing mechanisms (anti-virus programs, authentication, firewalls) are not able to withstand the dynamic nature of threats. A user profiling system, which registers a user’s activities in order to analyze their behavior, augments the security system to work in both a proactive and a reactive manner and provides enhanced security. This paper focuses on designing a user profiling system for the cloud environment using artificial intelligence techniques, studies the behavior of the user profiling system, and proposes a new hybrid approach that delivers a comprehensive user profiling system for cloud computing security.
Keywords: artificial intelligence; authorisation; cloud computing; firewalls; antivirus programs; artificial intelligence techniques; authentications; cloud computing security; cloud environment; firewalls; proactive manner; reactive manner; user activities; user behavior; user profiling system; Artificial intelligence; Cloud computing; Computational modeling; Fuzzy logic; Fuzzy systems; Genetic algorithms; Security; Artificial Intelligence; Artificial Neural Networks; Cloud Computing; Datacenters; Expert Systems; Genetics; Machine Learning; Multi-tenancy; Networking Systems; Pay-as-you-go Model (ID#: 15-7780)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164645&isnumber=7164643
Bellin Ribeiro, P.; Alexandre da Silva, L.; Pontara da Costa, K.A., “Spam Intrusion Detection in Computer Networks Using Intelligent Techniques,” in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, vol., no., pp. 1357–1360, 11–15 May 2015. doi:10.1109/INM.2015.7140495
Abstract: Anomalies in computer networks have increased in the last decades and raised concern, motivating techniques to identify these unusual traffic patterns. This research aims to use data mining techniques to correctly identify these anomalies, particularly in spam detection; to this end, a collection of machine learning algorithms for data mining tasks was applied to a dataset called SPAMBASE to identify the best techniques for this type of anomaly.
Keywords: computer network security; data mining; learning (artificial intelligence); telecommunication traffic; unsolicited e-mail; SPAMBASE dataset; computer network anomaly; data mining technique; intelligent technique; machine learning algorithm; spam intrusion detection; traffic pattern identification; Bagging; Classification algorithms; Conferences; Data mining; Decision trees; Unsolicited electronic mail; Anomalies; Artificial Neural Networks; Computer networks; Data Mining; SPAMBASE; Weka Tool (ID#: 15-7781)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140495&isnumber=7140257
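The kind of supervised spam experiment this abstract describes, learning a classifier over a labeled corpus such as SPAMBASE, can be illustrated with a minimal naive Bayes sketch. The tiny token lists below are invented for illustration and are not the SPAMBASE data or the authors' code:

```python
from collections import Counter
import math

def train_naive_bayes(docs):
    """docs: list of (tokens, label) pairs; label is 'spam' or 'ham'."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        label_counts[label] += 1
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    return word_counts, label_counts, vocab

def classify(tokens, word_counts, label_counts, vocab):
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            score += math.log((word_counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy training data (invented for illustration)
docs = [
    ("win money now".split(), "spam"),
    ("free money offer".split(), "spam"),
    ("meeting agenda attached".split(), "ham"),
    ("project status report".split(), "ham"),
]
model = train_naive_bayes(docs)
print(classify("free money".split(), *model))  # spam
```

Real experiments like the paper's would of course use the full feature set and cross-validation rather than a toy corpus.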
Abbinaya, S.; Kumar, M. Senthil, “Software Effort and Risk Assessment Using Decision Table Trained by Neural Networks,” in Communications and Signal Processing (ICCSP), 2015 International Conference on, vol., no., pp. 1389–1394, 2–4 April 2015. doi:10.1109/ICCSP.2015.7322738
Abstract: Software effort estimations are based on the prediction properties of a system, with attention to development methodologies. Many organizations practice risk management, but their risk identification techniques differ. In this paper, we focus on two effort estimation techniques, use case point and function point, which are used to estimate effort in software development. A decision table is used to compare the two methods and analyze which produces the more accurate result. A neural network trains the decision table using the back propagation training algorithm, and the two effort estimation methods (use case point and function point) are compared against the actual effort using past project data. Similarly, risk is evaluated using a summary of questionnaires received from various software developers. Based on the resulting report, risk can also be mitigated in future processes.
Keywords: Algorithm design and analysis; Lead; Security; artificial neural network; back propagation; decision table; feed forward neural networks; function point; regression; risk evaluation; software effort estimation; use case point (ID#: 15-7782)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322738&isnumber=7322423
Alheeti, K.M.A.; Gruebler, A.; McDonald-Maier, K.D., “An Intrusion Detection System Against Malicious Attacks on the Communication Network of Driverless Cars,” in Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, vol., no., pp. 916–921, 9–12 Jan. 2015. doi:10.1109/CCNC.2015.7158098
Abstract: Vehicular ad hoc networks (VANETs) have become a significant technology in recent years because of the emerging generation of self-driving cars such as Google driverless cars. VANETs have more vulnerabilities than other networks, such as wired networks, because they are an autonomous collection of mobile vehicles with no fixed security infrastructure, a highly dynamic topology, and an open wireless medium, all of which make them more vulnerable to attacks. It is important to design new approaches and mechanisms to raise the security of these networks and protect them from attacks. In this paper, we design an intrusion detection mechanism for VANETs using Artificial Neural Networks (ANNs) to detect Denial of Service (DoS) attacks. The main role of the IDS is to detect attacks using data generated from network behavior, such as a trace file; the IDS uses features extracted from the trace file as auditable data. In this paper, we propose both anomaly and misuse detection to detect malicious attacks.
Keywords: computer network security; feature extraction; neural nets; vehicular ad hoc networks; Denial of Service attack detection; DoS attack detection; IDS; VANET; artificial neural network; driverless car communication network; feature extraction; intrusion detection system; malicious attack; misuse detection; mobile vehicle autonomous collection; open wireless medium; self-driving car; vehicular ad hoc networking; Accuracy; Ad hoc networks; Artificial neural networks; Feature extraction; Security; Training; Vehicles; driverless car; security; vehicular ad hoc networks (ID#: 15-7783)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158098&isnumber=7157933
Kotenko, I.; Saenko, I.; Skorik, F.; Bushuev, S., “Neural Network Approach to Forecast the State of the Internet of Things Elements,” in Soft Computing and Measurements (SCM), 2015 XVIII International Conference on, vol., no., pp. 133–135, 19–21 May 2015. doi:10.1109/SCM.2015.7190434
Abstract: The paper presents a method to forecast the states of elements of the Internet of Things based on an artificial neural network. The proposed architecture is a combination of a multilayered perceptron and a probabilistic neural network, and for this reason it provides highly efficient decision-making. Results of an experimental assessment of the proposed neural network's accuracy in forecasting the states of Internet of Things elements are discussed.
Keywords: Internet of Things; decision making; multilayer perceptrons; neural net architecture; probability; artificial neural network; multilayered perceptron; probabilistic neural network; Artificial neural networks; Computer architecture; Forecasting; Internet of things; Probabilistic logic; Security; internet of things; neural network; state monitoring (ID#: 15-7784)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7190434&isnumber=7190390
Adenusi, D.; Alese, B.K.; Kuboye, B.M.; Thompson, A.F.-B., “Development of Cyber Situation Awareness Model,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1–11, 8–9 June 2015. doi:10.1109/CyberSA.2015.7166135
Abstract: This study designed and simulated a cyber situation awareness model for gaining awareness of cyberspace conditions, with a view to timely detection of anomalous activities and proactive decisions to safeguard the cyberspace. The situation awareness model was built using Artificial Intelligence (AI) techniques. The cyber situation perception sub-model was modelled using Artificial Neural Networks (ANN), while the comprehension and projection sub-models were modelled using Rule-Based Reasoning (RBR) techniques. The cyber situation perception sub-model was simulated in MATLAB 7.0 using the standard KDD'99 intrusion dataset and evaluated for threat detection accuracy using precision, recall, and overall accuracy metrics. The simulation results for these performance metrics showed that the perception sub-model performed better as the number of training data records increased. The cyber situation model designed was able to meet its overall goal of assisting network administrators in gaining awareness of cyberspace conditions: it was capable of sensing the cyberspace condition, performing analysis based on the sensed condition, and predicting the near-future condition of the cyberspace.
Keywords: artificial intelligence; inference mechanisms; knowledge based systems; mathematics computing; neural nets; security of data; AI technique; ANN; Matlab 7.0; RBR techniques; anomalous activities detection; artificial neural networks; cyber situation awareness model; cyberspace condition; proactive decision safeguard; rule-based reasoning; training data records; Artificial neural networks; Computational modeling; Computer security; Cyberspace; Data models; Intrusion detection; Mathematical model; Artificial Intelligence; Awareness; cyber-situation; cybersecurity; cyberspace (ID#: 15-7785)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166135&isnumber=7166109
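The precision, recall, and overall-accuracy evaluation this abstract mentions reduces to simple ratios over the confusion counts. A minimal sketch, with confusion counts invented purely for illustration:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard threat-detection metrics from a binary confusion matrix."""
    precision = tp / (tp + fp)          # of the alarms raised, how many were real
    recall    = tp / (tp + fn)          # of the real threats, how many were caught
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

# Invented confusion counts for illustration
p, r, a = detection_metrics(tp=90, fp=10, tn=85, fn=15)
print(round(p, 3), round(r, 3), round(a, 3))  # 0.9 0.857 0.875
```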
Kodym, O.; Benes, F.; Svub, J., “EPC Application Framework in the Context of Internet of Things,” in Carpathian Control Conference (ICCC), 2015 16th International, vol., no., pp. 214–219, 27–30 May 2015. doi:10.1109/CarpathianCC.2015.7145076
Abstract: Implementing the Internet of Things philosophy on existing communication networks requires new types of services and interoperability. One of the desired innovations is communication between the existing IP world and the new generation network: not just networks of smart devices, which may not always have IP connectivity, but also other RFID-labeled objects and sensors. To meet the need for high-quality applications, further object-specific parameters, such as location, serial number, and distinctive unique identifiers/connections, can be supported by properly extending the existing network and system infrastructure with new information and naming services. Their purpose is not only to assign a unique identifier to an object, but also to let users access, through new services, other information associated with the selected object. The technology that enables this data processing, filtering, and storage is defined in the Electronic Product Code Application Framework (EPCAF) as RFID middleware and EPCIS. One implementation of these standards is the open source solution Fosstrak. We experimented with the Fosstrak system, which was developed at the Massachusetts Institute of Technology (MIT) through an academic initiative, and we now aim to prove its benefits in a business environment. The project is also aimed at connecting and linking EPCIS-class systems through ONS systems.
Keywords: IP networks; Internet of Things; filtering theory; middleware; open systems; product codes; radiofrequency identification; storage management; EPC application framework; EPCAF; EPCIS class; Fosstrak system; IP connectivity; IP world; Internet of Things; MIT; Massachusetts Institute of Technology; ONS system; RFID middleware; RFID-labeled object; academic initiative; business environment; communication network; data processing; electronic product code application framework; filtering; high-quality application; information service; interoperability; naming service; new generation network; open source solution Fosstrak; smart device; storage; system infrastructure; Artificial neural networks; Interoperability; Product codes; Standards; Technological innovation; Testing; Fosstrak; IPv6; IoT (Internet of Things); ONS (Object name services); RFID security (ID#: 15-7786)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145076&isnumber=7145033
Sagar, V.; Kumar, K., “A Symmetric Key Cryptography Using Genetic Algorithm and Error Back Propagation Neural Network,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1386–1391, 11–13 March 2015. doi: (not provided)
Abstract: In conventional security mechanisms, cryptography is the process of hiding information and data from unauthorized access. It offers the unique possibility of certifiably secure data transmission among users at different remote locations, and is used to achieve availability, privacy, and integrity over different networks. There are usually two categories of cryptography, symmetric and asymmetric. In this paper, we propose a new symmetric key algorithm based on a genetic algorithm (GA) and an error back propagation neural network (EBP-NN), with the genetic algorithm used for encryption and the neural network used for decryption. Consequently, this paper proposes a simple cryptographically secure algorithm for communication over public computer networks.
Keywords: backpropagation; computer network security; cryptography; genetic algorithms; neural nets; EBP-NN; GA; certifiably secure data transmission; cryptographic secure algorithm; data hiding; data integrity; data privacy; decryption process; error back propagation neural network; genetic algorithm; information hiding; public computer networks; remote locations; symmetric key cryptography; unauthorized access; Artificial neural networks; Encryption; Genetic algorithms; Neurons; Receivers; genetic algorithm; symmetric key (ID#: 15-7787)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100476&isnumber=7100186
Singare, Y.P.; Tembhurkar, M., “Design of an Efficient Initial Access Authentication over MANET,” in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, vol., no., pp. 1614–1619, 28–30 May 2015. doi:10.1109/IIC.2015.7151008
Abstract: Nowadays, the importance of Mobile Ad hoc Networks (MANETs) is growing rapidly, especially in military and business applications, and it is crucial to have a more efficient initial link setup mechanism. In this work, we propose an efficient initial access authentication protocol that realizes authentication and key distribution with the fewest round-trip messages. The proposed mechanism is more efficient than the message authentication methods in the literature. The key idea behind the proposed method is to provide efficient initial authentication as well as secure message passing between the mobile user and the authentication server. Furthermore, a simple and practical method is presented to make it compatible with MANETs.
Keywords: cryptographic protocols; mobile ad hoc networks; MANET; authentication server; initial access authentication protocol; key distribution; least roundtrip message; Ad hoc networks; Artificial neural networks; Authentication; Dictionaries; Maintenance engineering; Mobile computing; Protocols; Security (ID#: 15-7788)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7151008&isnumber=7150576
Salmeron, J.L., “A Fuzzy Grey Cognitive Maps-Based Intelligent Security System,” in Grey Systems and Intelligent Services (GSIS), 2015 IEEE International Conference on, vol., no., pp. 29–32, 18–20 Aug. 2015. doi:10.1109/GSIS.2015.7301813
Abstract: The Fuzzy Grey Cognitive Map (FGCM) is an innovative soft computing technique mixing Fuzzy Cognitive Maps and Grey Systems Theory. FGCMs are supervised-learning fuzzy-neural systems typically modeled as signed fuzzy grey weighted digraphs, generally involving feedback. It is hard to find an accurate mathematical model to describe this decision-making problem because it involves high uncertainty and the factors involved interact with each other. FGCMs are able to capture and imitate the way human beings describe, represent, and develop models; they are good at processing fuzzy and grey information and have adaptive, intelligent features. This paper presents an FGCM-based decision support tool that synthetically takes the related factors into account, offering objective parameters for selecting the most suitable surveillance asset. The proposed method is robust, adaptive, and simple.
Keywords: decision support systems; fuzzy neural nets; fuzzy set theory; graph theory; grey systems; learning (artificial intelligence); security of data; FGCM-based decision support tool; fitter surveillance asset selection; fuzzy grey cognitive maps-based intelligent security system; innovative soft computing technique; mathematical model; signed fuzzy grey weighted digraphs; supervised learning fuzzy-neural systems; Accuracy; Artificial neural networks; Computational modeling; Geology; Q measurement; Fuzzy Grey Cognitive Maps; Intelligent Security System; Security; simulation (ID#: 15-7789)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301813&isnumber=7301809
Nair, N.K.; Navin, K.S., “An Efficient Group Authentication Mechanism Supporting Key Confidentiality, Key Freshness and Key Authentication in Cloud Computing,” in Computation of Power, Energy Information and Communication (ICCPEIC), 2015 International Conference on, vol., no., pp. 0288–0292, 22–23 April 2015. doi:10.1109/ICCPEIC.2015.7259477
Abstract: Group authentication emphasizes communication between the members of a group and then authenticates those members. The main purpose of group communication is to share and exchange ideas and messages among the members of the group; the messages are sent in encrypted form to enhance security. The group manager is responsible for overall control of the group. Each group has a group key, which generates the session keys used by the group members to share secret messages.
Keywords: cloud computing; cryptography; group authentication mechanism; group key; key authentication; key confidentiality; key freshness; session keys; Artificial neural networks; Cryptography; Reliability; Cloud computing; Encryption; Group Authentication; Key Confidentiality; Key Freshness (ID#: 15-7790)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7259477&isnumber=7259434
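The group key that "generates the session keys," as the abstract puts it, can be realized in many ways; the paper does not fix a concrete key derivation function, so the HMAC-SHA256 construction below is only one hedged illustration:

```python
import hashlib
import hmac
import os

def derive_session_key(group_key: bytes, session_id: bytes) -> bytes:
    """Derive a per-session key from the shared group key.
    HMAC-SHA256 with a labeled input is one simple, standard-library
    realization; it is an assumption, not the paper's construction."""
    return hmac.new(group_key, b"session" + session_id, hashlib.sha256).digest()

group_key = os.urandom(32)                     # held by the group manager
k1 = derive_session_key(group_key, b"\x01")
k2 = derive_session_key(group_key, b"\x02")
print(k1 != k2, len(k1))  # True 32: distinct 32-byte session keys
```

Each member holding the group key can recompute the same session key from the session identifier, which is what makes group-wide encrypted messaging possible.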
Ishitaki, Taro; Elmazi, Donald; Yi Liu; Oda, Tetsuya; Barolli, Leonard; Uchida, Kazunori, “Application of Neural Networks for Intrusion Detection in Tor Networks,” in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, vol., no., pp. 67–72, 24–27 March 2015. doi:10.1109/WAINA.2015.136
Abstract: Due to the amount of anonymity afforded to users of the Tor infrastructure, Tor has become a useful tool for malicious users. With Tor, users are able to compromise the non-repudiation principle of computer security, and potential hackers may launch attacks such as DDoS or identity theft from behind Tor. For this reason, new systems and models are needed to detect intrusion in Tor networks. In this paper, we present the application of Neural Networks (NNs) for intrusion detection in Tor networks. We used a back propagation NN and constructed a Tor server and a Deep Web browser (client); the client sends browsing data to the Tor server over the Tor network. We used the Wireshark Network Analyzer to capture the data and then used the back propagation NN to make the approximation. The simulation results show that our system achieves a good approximation and can be used for intrusion detection in Tor networks.
Keywords: backpropagation; computer network security; file servers; neural nets; online front-ends; telecommunication network routing; TOR network; The Onion Router; Tor server; Wireshark network analyzer; back propagation NN; computer security nonrepudiation principle; deep Web browser; intrusion detection; neural network; Approximation methods; Artificial neural networks; Intrusion detection; Neurons; Peer-to-peer computing; Servers; Deep Web; Intrusion Detection; Neural Networks; Tor Networks (ID#: 15-7791)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096149&isnumber=7096097
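The back propagation network at the heart of the paper can be sketched with a minimal pure-Python implementation. This toy trains a 2-3-1 sigmoid network on XOR rather than the authors' Tor traffic data, which is not reproduced here:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-3-1 network: input-to-hidden and hidden-to-output weights with biases
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(3)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = random.uniform(-1, 1)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(sum(x[i] * W1[i][j] for i in range(2)) + b1[j]) for j in range(3)]
    y = sigmoid(sum(h[j] * W2[j] for j in range(3)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

def train(epochs=5000, lr=0.5):
    global b2
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = (y - t) * y * (1 - y)                               # output delta
            dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(3)]  # hidden deltas
            for j in range(3):
                W2[j] -= lr * dy * h[j]
                b1[j] -= lr * dh[j]
                for i in range(2):
                    W1[i][j] -= lr * dh[j] * x[i]
            b2 -= lr * dy

before = mse()
train()
print(round(before, 4), round(mse(), 4))  # error drops as the net learns
```

A traffic-analysis application would feed features extracted from captured packets (e.g. from a Wireshark export) in place of the XOR inputs.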
Narad, S.K.; Chavan, P.V., “Neural Network Based Group Authentication Using (n, n) Secret Sharing Scheme,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 409–414, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164739
Abstract: In recent years, usage of the internet has been increasing, so authentication has become one of the most important security services for communication. With this in mind, robust security services and schemes are needed. This paper proposes group authentication, which authenticates all users belonging to the same group at once. The (n, n) group authentication scheme is very efficient since it authenticates all users if they are group members; for non-members, it may be used as a preprocessing step, applying authentication beforehand to identify them. Also, if any user taking part in group authentication is absent, the group is not authenticated at all, since a share is distributed to each user. Implementing group authentication with a neural network yields a highly secure authenticated system, as it becomes complicated for hackers to attack each neuron in the neural network. The neural network based group authentication is designed especially for applications performing group activities using the Shamir secret sharing scheme.
Keywords: Internet; computer network security; neural nets; Shamir secret sharing scheme; authenticated system; group authentication; group authentication scheme; neural network; security services; Artificial neural networks; Authentication; Biological neural networks; Cryptography; Visualization; Backpropogation Neural Network; Group Authentication; Shamir Secret Sharing Scheme (ID#: 15-7792)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164739&isnumber=7164643
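The Shamir scheme underlying the proposed (n, n) group authentication can be sketched as follows. This is a textbook illustration of the secret sharing itself, not the authors' neural network construction:

```python
import random

P = 2 ** 61 - 1  # prime modulus for the finite-field arithmetic

def split(secret, n):
    """(n, n) Shamir sharing: a random degree n-1 polynomial with the
    secret as constant term; all n shares are needed to reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(n - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, 5)
print(reconstruct(shares))      # recovers 123456789
print(reconstruct(shares[:4]))  # a missing share yields a different value
```

The all-or-nothing behavior the abstract describes, where one absent member prevents group authentication, is exactly this property: with any share missing, the interpolation does not recover the group secret.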
Gilmore, R.; Hanley, N.; O’Neill, M., “Neural Network Based Attack on a Masked Implementation of AES,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, pp. 106–111, 5–7 May 2015. doi:10.1109/HST.2015.7140247
Abstract: Masked implementations of cryptographic algorithms are often used in commercial embedded cryptographic devices to increase their resistance to side channel attacks. In this work we show how neural networks can be used to both identify the mask value, and to subsequently identify the secret key value with a single attack trace with high probability. We propose the use of a pre-processing step using principal component analysis (PCA) to significantly increase the success of the attack. We have developed a classifier that can correctly identify the mask for each trace, hence removing the security provided by that mask and reducing the attack to being equivalent to an attack against an unprotected implementation. The attack is performed on the freely available differential power analysis (DPA) contest data set to allow our work to be easily reproducible. We show that neural networks allow for a robust and efficient classification in the context of side-channel attacks.
Keywords: cryptography; neural nets; pattern classification; principal component analysis; AES; Advanced Encryption Standard; DPA; PCA; cryptographic algorithms; differential power analysis contest data set; embedded cryptographic devices; machine learning; mask value identification; masked implementation; neural network based attack; secret key value identification; side channel attacks; Artificial neural networks; Cryptography; Error analysis; Hardware; Power demand; Principal component analysis; Training; SCA; machine learning; masking; neural network (ID#: 15-7793)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140247&isnumber=7140225
Xiong Kai; Yin Mingyong; Li Wenkang; Jiang Hong, “A Rank Sequence Method for Detecting Black Hole Attack in Ad Hoc Network,” in Intelligent Computing and Internet of Things (ICIT), 2014 International Conference on, vol., no., pp. 155–159, 17–18 Jan. 2015. doi:10.1109/ICAIOT.2015.7111559
Abstract: This paper discusses one of the route security problems, the black hole attack. In the network, we capture AODV route tables and derive rank sequences using FP-Growth, a data association rule mining algorithm. We choose rank sequences for detecting the malicious node because they are not sensitive to noise interference. A suspicious set consists of the nodes selected according to whether a node's rank changes within the sequence. We then use DE-Cusum to distinguish black hole routes from normal ones within the suspicious set. Here, FP-Growth reflects the idea of reducing data dimensions: the algorithm excludes many normal nodes before the DE-Cusum detection, because a normal node has a stable rank in a sequence. In the simulation, we use NS2 to build a black hole attack scenario with 11 nodes. Simulation results show that the proposed algorithm can greatly reduce unnecessary detections.
Keywords: ad hoc networks; computer crime; data mining; mobile computing; routing protocols; sensor fusion; telecommunication security; AODV route tables; DE-Cusum detection; FP-growth; ad hoc network; ad-hoc on-demand distance vector; black hole attack detection; black hole route; data association rule mining; data dimensions; malicious node detection; rank sequence method; route security problems; Artificial neural networks; Cryptography; Noise; AODV; Black hole attack; DE-Cusum; FP-Growth (ID#: 15-7794)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7111559&isnumber=7111523
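DE-Cusum is the paper's own variant; a plain one-sided CUSUM change detector over a rank sequence illustrates the underlying idea of flagging a node whose rank drifts away from its stable value. The rank sequences below are invented for illustration:

```python
def cusum(seq, target, threshold, drift=0.0):
    """One-sided CUSUM: accumulate positive deviations from the expected
    value and raise an alarm once the cumulative sum exceeds the threshold."""
    s = 0.0
    for i, x in enumerate(seq):
        s = max(0.0, s + (x - target - drift))
        if s > threshold:
            return i  # index at which the change is flagged
    return None       # no alarm

# Invented rank sequences: a normal node keeps a stable rank, while a
# black hole node's rank jumps once it starts absorbing routes.
normal    = [3, 3, 4, 3, 3, 3, 4, 3]
blackhole = [3, 3, 3, 7, 8, 8, 9, 9]

print(cusum(normal, target=3, threshold=5))     # None (no alarm)
print(cusum(blackhole, target=3, threshold=5))  # flags at index 4
```

Pre-filtering with FP-Growth, as the paper does, would shrink the set of sequences this detector has to examine.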
Pantola, P.; Bala, A.; Rana, P.S., “Consensus Based Ensemble Model for Spam Detection,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp.1724–1727, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275862
Abstract: In machine learning, an ensemble model combines two or more models to obtain better prediction, accuracy, and robustness than each individual model achieves separately. Before building the ensemble model, we first fit our training dataset to different models and then select the models best suited to the data. In this work we explored six machine learning evaluation measures for the dataset: accuracy, the receiver operating characteristic (ROC) curve, the confusion matrix, sensitivity, specificity, and the Kappa value. We then applied k-fold validation to our best five models.
Keywords: feature selection; learning (artificial intelligence); security of data; unsolicited e-mail; Kappa value; ROC curve; confusion matrix; consensus based ensemble model; k fold validation; machine learning parameter; receiver operating characteristics curve; spam detection; Accuracy; Adaptation models; Analytical models; Artificial neural networks; Computational modeling; Data models; Vegetation; Feature selection; machine learning models; spams data set (ID#: 15-7795)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275862&isnumber=7275573
Stampar, M.; Fertalj, K., “Artificial Intelligence in Network Intrusion Detection,” in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, vol., no., pp. 1318–1323, 25–29 May 2015. doi:10.1109/MIPRO.2015.7160479
Abstract: In the past, detection of network attacks was done almost solely by human operators, who watched for network anomalies at their consoles and, based on their expert knowledge, applied the necessary security measures. With the exponential growth of network bandwidth, this task has come to demand substantial improvements in both speed and accuracy. One proposed way to achieve this is the use of artificial intelligence (AI), a progressive and promising branch of computer science, and particularly one of its sub-fields, machine learning (ML), whose main idea is learning from data. In this paper the authors give a general overview of AI algorithms, with a main focus on their use for network intrusion detection.
Keywords: computer network security; learning (artificial intelligence); AI algorithm; ML; artificial intelligence; expert knowledge; machine learning; network attacks detection; network bandwidth; network intrusion detection; Artificial intelligence; Artificial neural networks; Classification algorithms; Intrusion detection; Market research; Niobium; Support vector machines (ID#: 15-7796)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160479&isnumber=7160221
Elsayed, S.; Sarker, R.; Slay, J., “Evaluating the Performance of a Differential Evolution Algorithm in Anomaly Detection,” in Evolutionary Computation (CEC), 2015 IEEE Congress on, vol., no., pp. 2490–2497, 25–28 May 2015. doi:10.1109/CEC.2015.7257194
Abstract: During the last few decades, evolutionary algorithms have been adopted to tackle cyber-terrorism; among them, genetic algorithms and genetic programming have been popular choices. Recently, it has been shown that differential evolution is more successful in solving a wide range of optimization problems, yet only a very limited number of research studies have applied differential evolution to intrusion detection. In this paper, we adapt a differential evolution algorithm for anomaly detection and propose a new fitness function to measure the quality of each individual in the population. The proposed method is trained and tested on the 10% KDD'99 cup data and compared against existing methodologies. The results show the effectiveness of using differential evolution in detecting anomalies, achieving an average true positive rate of 100% with an average false positive rate of only 0.582%.
Keywords: computer network security; genetic algorithms; Cyber terrorism; anomaly detection; differential evolution algorithm; fitness function; genetic programming; intrusion detection; optimisation; Artificial neural networks; Feature extraction; Indexes; Intrusion detection; Sociology; Statistics; Testing; differential evolution; intrusion detection systems (ID#: 15-7797)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257194&isnumber=7256859
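The differential evolution loop the abstract adapts (mutation, crossover, greedy selection) can be sketched as follows. The sphere function stands in for the paper's anomaly-scoring fitness over KDD'99 features, which is not reproduced here:

```python
import random

random.seed(1)

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200):
    """Minimize f with a simplified DE/rand/1 scheme (sketch, not the
    paper's exact variant): rand/1 mutation, per-dimension binomial
    crossover, and greedy one-to-one selection."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, bounds[d][0]), bounds[d][1])
                     for d, v in enumerate(trial)]
            ft = f(trial)
            if ft <= fitness[i]:  # keep the trial only if it is no worse
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

# Sphere function as a stand-in fitness for demonstration
sphere = lambda x: sum(v * v for v in x)
sol, val = differential_evolution(sphere, [(-5, 5)] * 3)
print(val)  # a value near zero
```

For intrusion detection, `f` would instead score how well a candidate rule separates anomalous records from normal ones.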
Schneider, M.; Ertel, W.; Palm, G., “Expected Similarity Estimation for Large Scale Anomaly Detection,” in Neural Networks (IJCNN), 2015 International Joint Conference on, vol., no., pp. 1–8, 12–17 July 2015. doi:10.1109/IJCNN.2015.7280331
Abstract: We propose a new algorithm named EXPected Similarity Estimation (EXPoSE) to approach the problem of anomaly detection (also known as one-class learning or outlier detection), based on the similarity between data points and the distribution of non-anomalous data. We formulate the problem as an inner product in a reproducing kernel Hilbert space, for which we present approximations that allow its application to very large-scale datasets. More precisely, given a dataset with n instances, our proposed method requires O(n) training time and O(1) time to make a prediction, while spending only O(1) memory to store the learned model. Despite its abstract derivation, our algorithm is simple and parameter-free. We show on seven real datasets that our approach can compete with state-of-the-art algorithms for anomaly detection.
Keywords: Hilbert spaces; learning (artificial intelligence); security of data; EXPoSE; data points; expected similarity estimation; kernel Hilbert space; large scale anomaly detection; one-class learning; outlier detection; Approximation methods; Artificial neural networks; Prediction algorithms; Spatial databases; Xenon (ID#: 15-7798)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280331&isnumber=7280295
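Before the paper's feature-map approximation, the EXPoSE score is simply the expected kernel similarity between a query point and the non-anomalous training data. A naive O(n)-per-query sketch, with toy 2-D data invented for illustration:

```python
import math

def expected_similarity(x, train, gamma=0.5):
    """Naive form of the EXPoSE score: the mean RBF-kernel similarity
    between x and the (non-anomalous) training data. The paper's
    random-feature approximation makes prediction O(1); this sketch
    keeps the exact O(n) sum for clarity."""
    def rbf(a, b):
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(rbf(x, t) for t in train) / len(train)

# Invented 2-D "normal" observations clustered near the origin
train = [(0.1, 0.0), (-0.2, 0.1), (0.0, -0.1), (0.15, 0.05)]

normal_score  = expected_similarity((0.05, 0.0), train)
outlier_score = expected_similarity((4.0, 4.0), train)
print(normal_score > outlier_score)  # True: anomalies score low
```

Thresholding this score yields an anomaly detector: points far from the mass of the training distribution receive scores near zero.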
Shaoning Pang; Yiming Peng; Ban, Tao; Inoue, Daisuke; Sarrafzadeh, Abdolhossein, “A Federated Network Online Network Traffics Analysis Engine for Cybersecurity,” in Neural Networks (IJCNN), 2015 International Joint Conference on, vol., no., pp. 1–8, 12–17 July 2015. doi:10.1109/IJCNN.2015.7280563
Abstract: Agent-oriented techniques are being used in an increasing range of network security applications. In this paper, we introduce FNTAE, a Federated Network Traffic Analysis Engine for real-time network intrusion detection. In FNTAE, each analysis engine is powered by an incremental learning agent that captures attack signatures in real time, so that the abnormal traffic resulting from new attacks is detected as soon as it occurs. Owing to effective knowledge sharing among the multiple analysis engines, the integrated engine is theoretically guaranteed to perform more effectively than a centralized analysis system. We deployed and tested FNTAE in a real-world network environment. The results demonstrate that FNTAE is a promising solution for improving system security through the identification of malicious network traffic.
Keywords: computer network security; learning (artificial intelligence); multi-agent systems; telecommunication traffic; FNTAE; abnormal traffics; agent-oriented techniques; attack signatures; centralized analysis system; cybersecurity; federated network online network traffics analysis engine; incremental learning agent; malicious network traffic; multiple analysis engines; networking security applications; real world network environment; system security; Artificial neural networks; Computer security; Engines; IP networks; Merging; Switches (ID#: 15-7799)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280563&isnumber=7280295
Hajdarevic, A.; Dzananovic, I.; Banjanovic-Mehmedovic, L.; Mehmedovic, F., “Anomaly Detection in Thermal Power Plant Using Probabilistic Neural Network,” in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, vol., no., pp. 1118–1123, 25–29 May 2015. doi:10.1109/MIPRO.2015.7160443
Abstract: Anomalies are an integral part of every system's behavior and sometimes cannot be avoided; it is therefore very important to detect such anomalies in a real-world running power plant system in a timely manner. Artificial neural networks are one class of anomaly detection techniques. This paper applies a probabilistic neural network to the problem of anomaly detection in selected sections of a thermal power plant, namely the steam superheaters and the steam drum. The inputs to the neural network are some of the most important process variables of these sections. Notably, all of the inputs are observable in the real system installed in the thermal power plant; some represent normal behavior and some represent anomalies. In addition to applying this network for anomaly detection, the effect of key parameter changes on the detection results is also shown. The results confirm that the probabilistic neural network is an excellent solution to the anomaly detection problem, especially in real-time industrial applications.
Keywords: neural nets; power engineering computing; probability; security of data; thermal power stations; ANN; anomaly detection techniques; artificial neural networks; normal behavior; probabilistic neural network; process variables; real-time industrial applications; steam drum; steam superheaters; thermal power plant; Biological neural networks; Boilers; Power generation; Probabilistic logic; Probability density function; Training (ID#: 15-7800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160443&isnumber=7160221
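The probabilistic neural network used in this paper is essentially a Parzen-window classifier: each class is scored by a Gaussian-kernel density estimate over its training samples, with the kernel width as the key smoothing parameter the authors vary. A minimal sketch, not the authors' implementation; the toy data and sigma value are illustrative:

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """Probabilistic neural network: score each class by a Parzen-window
    (Gaussian kernel) density estimate over its training samples; sigma
    is the smoothing parameter whose effect the paper examines."""
    scores = {}
    for label, samples in train.items():
        dens = 0.0
        for p in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, p))
            dens += math.exp(-d2 / (2 * sigma ** 2))
        scores[label] = dens / len(samples)
    return max(scores, key=scores.get)

# Toy process variables (e.g. superheater pressure/temperature pairs).
train = {
    "normal":  [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1)],
    "anomaly": [(3.0, 3.0), (3.2, 2.8)],
}
label = pnn_classify(train, (1.05, 1.0))
```

Shrinking sigma makes the envelope around each training sample tighter and the classifier more sensitive, which is the key-parameter effect the abstract refers to.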
Esmaily, Jamal; Moradinezhad, Reza; Ghasemi, Jamal, “Intrusion Detection System Based on Multi-Layer Perceptron Neural Networks and Decision Tree,” in Information and Knowledge Technology (IKT), 2015 7th Conference on, vol., no., pp. 1–5, 26–28 May 2015. doi:10.1109/IKT.2015.7288736
Abstract: The growth of internet attacks is a major problem for today’s computer networks. Hence, implementing security methods to prevent such attacks is crucial for any computer network. With the help of Machine Learning and Data Mining techniques, Intrusion Detection Systems (IDS) are able to diagnose attacks and system anomalies more effectively. However, most of the studied methods in this field, including rule-based expert systems, are not able to successfully identify attacks whose patterns differ from the expected ones. By using Artificial Neural Networks (ANNs), it is possible to identify the attacks and classify the data even when the dataset is nonlinear, limited, or incomplete. In this paper, a method based on the combination of the Decision Tree (DT) algorithm and a Multi-Layer Perceptron (MLP) ANN is proposed which is able to identify attacks with high accuracy and reliability.
Keywords: Internet; computer network security; data mining; decision trees; learning (artificial intelligence); multilayer perceptrons; ANNs; DT; IDS; Internet attacks; MLP ANN; artificial neural networks; computer networks; data mining techniques; decision tree; intrusion detection system; machine learning; multilayer perceptron neural networks; rule-based expert systems; security methods; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Decision trees; Intrusion detection; Neural networks; Support vector machines; Decision Tree; Intrusion Detection Systems; Machine Learning; Neural Networks (ID#: 15-7801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288736&isnumber=7288662
Anandapriya, M.; Lakshmanan, B., “Anomaly Based Host Intrusion Detection System Using Semantic Based System Call Patterns,” in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, vol., no., pp. 1–4, 9–10 Jan. 2015. doi:10.1109/ISCO.2015.7282244
Abstract: The purpose of a Host Based Intrusion Detection System (HIDS) is to prevent the host system from being compromised by intruders. To prevent the execution of malicious code on the host, the HIDS monitors the system audit and event logs. The design of a HIDS is very challenging, however, due to its high false alarm rate. This paper focuses on reducing the false alarm rate using semantic based system call patterns. A semantic approach is applied to the underlying kernel level system calls to help understand anomalous behavior. The semantic tool used is a data dictionary, constructed to contain every possible combination of system call names of a particular phrase length. The features satisfying the semantic hypothesis are extracted and then normalized, and the normalized values are given as input to the decision engine: the Extreme Learning Machine, a new type of neural network. Performance was evaluated using the modern ADFA-LD dataset.
Keywords: database management systems; feature extraction; learning (artificial intelligence); neural nets; security of data; ADFA-LD dataset; HIDS; anomaly based host intrusion detection system; data dictionary; decision engine; extreme learning machine; kernel level system calls; neural network; semantic based system call patterns; Dictionaries; Engines; Feature extraction; Hidden Markov models; Intrusion detection; Semantics; Support vector machines; ADFA-LD; Anomaly; ELM; semantic phrases (ID#: 15-7802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282244&isnumber=7282219
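The data-dictionary idea can be sketched as follows: build a dictionary of system-call phrases (n-grams) from normal traces, then score a new trace by the fraction of its phrases found in the dictionary. This is a simplified illustration of the feature-extraction step only, not the paper's code; the traces and phrase length are made up:

```python
from collections import Counter

def phrase_score(trace, normal_dict, n=3):
    """Score a system-call trace by the fraction of its length-n phrases
    (n-grams of call names) found in a dictionary built from normal
    traces; low scores suggest anomalous behaviour."""
    grams = Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))
    total = sum(grams.values()) or 1
    known = sum(c for g, c in grams.items() if g in normal_dict)
    return known / total

# Dictionary of phrases observed during normal operation (toy data).
normal_trace = ["open", "read", "read", "write", "close"] * 3
normal_dict = {tuple(normal_trace[i:i + 3]) for i in range(len(normal_trace) - 2)}

s_normal = phrase_score(["open", "read", "read", "write", "close"], normal_dict)
s_attack = phrase_score(["open", "exec", "socket", "send"], normal_dict)
```

In the paper such normalized values feed an Extreme Learning Machine decision engine rather than a fixed threshold.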
Chih-Hung Hsieh; Yu-Siang Shen; Chao-Wen Li; Jain-Shing Wu, “iF2: An Interpretable Fuzzy Rule Filter for Web Log Post-Compromised Malicious Activity Monitoring,” in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp. 130–137, 24–26 May 2015. doi:10.1109/AsiaJCIS.2015.19
Abstract: To alleviate the load of tracking web log files by human effort, machine learning methods are now commonly used to analyze log data and to identify the patterns of malicious activities. Traditional kernel based techniques, like the neural network and the support vector machine (SVM), typically deliver higher prediction accuracy. However, the user of a kernel based technique normally cannot get an overall picture of the distribution of the data set. On the other hand, logic based techniques, such as the decision tree and the rule-based algorithm, have the advantage of presenting a good summary of the distinctive characteristics of different classes of data, making them more suitable for generating interpretable feedback to domain experts. In this study, a real web-access log dataset from a certain organization was collected. An efficient interpretable fuzzy rule filter (iF2) was proposed to analyze the data and to detect suspicious internet addresses among the normal ones. The historical information of each internet address recorded in the web log file is summarized as multiple statistics, and the design process of iF2 is modeled as a parameter optimization problem which simultaneously considers 1) maximizing prediction accuracy, 2) minimizing the number of used rules, and 3) minimizing the number of selected statistics. Experimental results show that the fuzzy rule filter constructed with the proposed approach is capable of delivering superior prediction accuracy in comparison with conventional logic based classifiers and the expectation maximization based kernel algorithm.
Although it cannot match the prediction accuracy delivered by the SVM, when facing real web log files in which the ratio of positive to negative cases is extremely unbalanced, the optimization flexibility of the proposed iF2 results in a better recall rate, and it enjoys one major advantage in providing the user with an overall picture of the underlying distributions.
Keywords: Internet; data mining; fuzzy set theory; learning (artificial intelligence); neural nets; pattern classification; statistical analysis; support vector machines; Internet address; SVM; Web log file tracking; Web log post-compromised malicious activity monitoring; Web-access log dataset; decision tree; expectation maximization based kernel algorithm; fuzzy rule filter; iF2; interpretable fuzzy rule filter; kernel based techniques; log data analysis; logic based classifiers; logic based techniques; machine learning methods; malicious activities; neural network; parameter optimization problem; recall rate; rule-based algorithm; support vector machine; Accuracy; Kernel; Monitoring; Optimization; Prediction algorithms; Support vector machines; Fuzzy Rule Based Filter; Machine Learning; Parameter Optimization; Pattern Recognition; Post-Compromised Threat Identification; Web Log Analysis (ID#: 15-7803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153947&isnumber=7153836
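A fuzzy rule filter of the kind described combines interpretable IF-THEN rules over per-address statistics using fuzzy membership functions. The sketch below is hypothetical: the real iF2 learns its rules, statistics, and membership shapes by optimization, whereas the thresholds here are invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_suspicion(req_rate, err_ratio):
    """One interpretable rule over per-address web-log statistics:
    IF request rate is high AND error ratio is high THEN suspicious.
    Fuzzy AND is taken as the minimum of the memberships."""
    high_rate = tri(req_rate, 50, 200, 1000)     # requests per hour
    high_err = tri(err_ratio, 0.2, 0.6, 1.01)    # share of 4xx/5xx replies
    return min(high_rate, high_err)

score_hi = fuzzy_suspicion(180, 0.55)   # bursty, error-prone address
score_lo = fuzzy_suspicion(20, 0.05)    # quiet, well-behaved address
```

Because each rule reads as a sentence over named statistics, the filter stays interpretable to domain experts, which is the advantage the abstract emphasizes over kernel methods.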
Shin-Ying Huang; Fang Yu; Rua-Huan Tsaih; Yennun Huang, “Network-Traffic Anomaly Detection with Incremental Majority Learning,” in Neural Networks (IJCNN), 2015 International Joint Conference on, vol., no., pp. 1–8, 12–17 July 2015. doi:10.1109/IJCNN.2015.7280573
Abstract: Detecting anomaly behavior in large network traffic data has presented a great challenge in designing effective intrusion detection systems. We propose an adaptive model to learn majority patterns under a dynamic changing environment. We first propose unsupervised learning on data abstraction to extract essential features of samples. We then adopt incremental majority learning with iterative evolutions on fitting envelopes to characterize the majority of samples within moving windows. A network traffic sample is considered an anomaly if its abstract feature falls on the outside of the fitting envelope. We justify the effectiveness of the presented approach against 150000+ traffic samples from the NSL-KDD dataset in training and testing, demonstrating positive promise in detecting network attacks by identifying samples that have abnormal features.
Keywords: computer network security; data structures; iterative methods; learning (artificial intelligence); telecommunication traffic; NSL-KDD dataset; abnormal features; anomaly behavior; data abstraction; dynamic changing environment; fitting envelopes; incremental majority learning; intrusion detection systems; iterative evolutions; large network traffic data; network attacks; network-traffic anomaly detection; unsupervised learning; Adaptation models; Character recognition; Classification algorithms; Testing; incremental learning; intrusion detection system; neural networks; outlier detection (ID#: 15-7804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280573&isnumber=7280295
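The fitting-envelope idea can be illustrated with a simple one-dimensional stand-in: keep a moving window of recent feature values, flag samples falling outside a mean ± k·stdev envelope, and incrementally update the window with the majority (normal) samples. The paper fits envelopes over learned abstract features; this is only a sketch of the moving-window mechanism:

```python
from collections import deque
import statistics

class EnvelopeDetector:
    """Moving-window 'fitting envelope' over a scalar feature: flag a
    sample as anomalous if it falls outside mean +/- k*stdev of the
    window, otherwise absorb it (incremental majority learning)."""
    def __init__(self, window=50, k=3.0):
        self.buf = deque(maxlen=window)
        self.k = k

    def observe(self, x):
        if len(self.buf) >= 5:                       # need a minimal fit
            mu = statistics.mean(self.buf)
            sd = statistics.pstdev(self.buf) or 1e-9
            if abs(x - mu) > self.k * sd:
                return True                          # outside the envelope
        self.buf.append(x)                           # update with majority
        return False

det = EnvelopeDetector()
flags = [det.observe(v) for v in [10, 11, 9, 10, 12, 10, 11, 95]]
```

Anomalous samples are not absorbed into the window, so the envelope keeps characterizing the majority even as the traffic distribution drifts.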
Gaikwad, D.P.; Thool, R.C., “Intrusion Detection System Using Bagging Ensemble Method of Machine Learning,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 291–295, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.61
Abstract: Intrusion detection systems are widely used to protect and reduce damage to information systems. They protect virtual and physical computer networks against threats and vulnerabilities. Presently, machine learning techniques are widely applied to implement effective intrusion detection systems. Neural networks, statistical models, rule learning, and ensemble methods are some of the kinds of machine learning methods used for intrusion detection, and among them ensemble methods are known for good performance in the learning process. Investigation of the appropriate ensemble method is essential for building an effective intrusion detection system. In this paper, a novel intrusion detection technique based on an ensemble method of machine learning is proposed. The Bagging method of ensemble learning, with REPTree as the base class, is used to implement the intrusion detection system. The relevant features from the NSL-KDD dataset are selected to improve the classification accuracy and reduce the false positive rate. The performance of the proposed ensemble method is evaluated in terms of classification accuracy, model building time, and false positives. The experimental results show that the Bagging ensemble with the REPTree base class exhibits the highest classification accuracy. One advantage of using the Bagging method is that it takes less time to build the model. The proposed ensemble method provides competitively low false positives compared with other machine learning techniques.
Keywords: data analysis; learning (artificial intelligence); neural nets; security of data; statistical analysis; trees (mathematics); NSL-KDD dataset; REPTree; classification accuracy; intrusion detection system; machine learning techniques; neural network; physical computer networks; statistical models; using bagging ensemble method; virtual computer networks; Accuracy; Bagging; Classification algorithms; Feature extraction; Hidden Markov models; Intrusion detection; Training; Ensemble; False positives; Machine learning; intrusion detection (ID#: 15-7805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155853&isnumber=7155781
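Bagging itself is straightforward to sketch: train each base learner on a bootstrap resample of the training data and predict by majority vote. The paper uses REPTree as the base class; the sketch below substitutes a one-level decision stump so the example stays self-contained:

```python
import random

def stump_fit(data):
    """Fit a decision stump (one-level tree): choose the feature,
    threshold and direction with the fewest training errors."""
    best = None
    for f in range(len(data[0][0])):
        for thr in sorted({x[f] for x, _ in data}):
            for sign in (1, -1):
                errs = sum((1 if sign * (x[f] - thr) > 0 else 0) != y
                           for x, y in data)
                if best is None or errs < best[0]:
                    best = (errs, f, thr, sign)
    _, f, thr, sign = best
    return lambda x: 1 if sign * (x[f] - thr) > 0 else 0

def bagging_fit(data, n_models=11, seed=0):
    """Bagging: train each stump on a bootstrap resample of the data
    and predict by majority vote over the ensemble."""
    rng = random.Random(seed)
    models = [stump_fit([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    return lambda x: int(sum(m(x) for m in models) > n_models // 2)

# Toy 1-D "intrusion" data: label 1 (attack) when the feature is large.
data = [((v,), int(v > 5)) for v in range(11)]
clf = bagging_fit(data)
```

The bootstrap resampling decorrelates the base learners, which is what lets the majority vote cut variance and, in the paper's setting, false positives.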
Channel Coding 2015 |
Channel coding, also known as Forward Error Correction, is a method for controlling errors in data transmissions over noisy or unreliable communication channels. For cybersecurity, this method can also be used to ensure data integrity, as some of the research cited below shows. The work cited here relates to the Science of Security problems of metrics, resiliency, and composability. These papers were published in 2015.
Dajiang Chen; Shaoquan Jiang; Zhiguang Qin, “Message Authentication Code over a Wiretap Channel,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2301–2305, 14–19 June 2015. doi:10.1109/ISIT.2015.7282866
Abstract: A Message Authentication Code (MAC) is a keyed function fK such that when Alice, who shares the secret K with Bob, sends fK(M) to the latter, Bob will be assured of the integrity and authenticity of M. Traditionally, it is assumed that the channel is noiseless. Unfortunately, Maurer showed that in this case an attacker can succeed with probability 2^(-H(K)/(ℓ+1)) after authenticating ℓ messages, where H(K) is the entropy of K. In this paper, we consider the setting where the channel is noisy. Specifically, Alice and Bob are connected by a discrete memoryless channel (DMC) W1 and a noiseless but insecure channel. In addition, there is a DMC W2 between Alice and attacker Oscar. We regard the noisy channel as an expensive resource and define the authentication rate ρauth as the ratio of message length to the number n of channel W1 uses. The security of this model depends on the channel coding for fK(M). A natural coding scheme is to use the secrecy capacity achieving code of Csiszár and Körner. Intuitively, this is also the optimal strategy. However, we propose a coding scheme that achieves a higher ρauth. Our crucial point is that under a secrecy capacity code, Bob can fully recover fK(M), while in our model this is not necessary, as we only need to detect the existence of a modification. How to detect the malicious modification without recovering fK(M) is the main contribution of this work. We achieve this through random coding techniques.
Keywords: channel coding; entropy codes; message authentication; discrete memoryless channel; entropy; message authentication code; wiretap channel; Authentication; Channel coding; Computational modeling; Cryptography; Message authentication; Noise measurement; information theoretical security (ID#: 15-7685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282866&isnumber=7282397
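For readers unfamiliar with the fK(M) interface, the standard computational construction is HMAC; the paper studies the information-theoretic setting over a noisy channel, but the tag-and-verify interface is the same:

```python
import hmac
import hashlib

K = b"shared-secret-key"          # the key K Alice shares with Bob

def tag(message: bytes) -> bytes:
    """fK(M): a keyed tag Bob can recompute to check M's integrity."""
    return hmac.new(K, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    return hmac.compare_digest(tag(message), t)

M = b"transfer 100 to Bob"
t = tag(M)
assert verify(M, t)                            # authentic message accepted
assert not verify(b"transfer 900 to Eve", t)   # modification detected
```

Note the contrast with the paper's point: Bob here must recover the tag exactly, whereas over a noisy channel it suffices to detect that a modification occurred.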
Hongbo Si; Koyluoglu, O. Ozan; Vishwanath, Sriram, “Achieving Secrecy Without Any Instantaneous CSI: Polar Coding for Fading Wiretap Channels,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2161–2165, 14–19 June 2015. doi:10.1109/ISIT.2015.7282838
Abstract: This paper presents a polar coding scheme for fading wiretap channels that achieves reliability as well as security without the knowledge of instantaneous channel state information at the transmitter. Specifically, a block fading model is considered for the wiretap channel that consists of a transmitter, a receiver, and an eavesdropper; and only the information regarding the statistics (i.e., distribution) of the channel state information is assumed at the transmitter. For this model, a coding scheme that hierarchically utilizes polar codes is presented in order to address channel state variation. In particular, on polarization of different binary symmetric channels over different fading blocks, each channel use (corresponding to a possibly different polarization) is modeled as an appropriate binary erasure channel over fading blocks. Polar codes are constructed for both coding over channel uses for each fading block and coding over fading blocks for certain channel uses. In order to guarantee security, message bits are transmitted such that they can be reliably decoded at the receiver, and random bits are introduced to exhaust the observations of the eavesdropper. It is shown that this coding scheme, without instantaneous channel state information at the transmitter, is secrecy capacity achieving for the corresponding fading binary symmetric wiretap channel.
Keywords: channel coding; fading channels; telecommunication security; binary erasure channel; block fading model; channel state variation; eavesdropper; fading binary symmetric wiretap channel; fading blocks; instantaneous channel state information; message bits; polar codes; polar coding scheme; random bits; receiver; secrecy capacity; transmitter; Decoding; Encoding; Fading; Receivers; Reliability; Security; Transmitters (ID#: 15-7686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282838&isnumber=7282397
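The polar codes this paper builds on all start from Arıkan's recursive GF(2) transform x = u·G_N, with G_N the n-fold Kronecker power of [[1,0],[1,1]] (here with the bit-reversal ordering). A compact sketch of the encoder only; channel polarization, index selection, and decoding are omitted:

```python
def polar_encode(u):
    """Recursive polar transform over GF(2): x = u * G_N, where G_N is
    the n-fold Kronecker power of [[1,0],[1,1]] with Arikan's
    bit-reversal ordering. len(u) must be a power of two."""
    n = len(u)
    if n == 1:
        return u[:]
    # Butterfly step: combine adjacent pairs, then recurse on each half.
    a = polar_encode([u[2*i] ^ u[2*i + 1] for i in range(n // 2)])
    b = polar_encode([u[2*i + 1] for i in range(n // 2)])
    return a + b

# In a wiretap construction, message bits go on indices that are reliable
# for the legitimate receiver yet noisy for the eavesdropper; the rest
# carry frozen or uniformly random bits to exhaust the eavesdropper.
x = polar_encode([1, 0, 1, 0])   # -> [0, 1, 0, 0]
```

Since [[1,0],[1,1]] squares to the identity over GF(2), the transform is its own inverse, a convenient property for sanity checks.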
Song, Eva C.; Cuff, Paul; Poor, H. Vincent, “Joint Source-Channel Secrecy Using Hybrid Coding,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2520–2524, 14–19 June 2015. doi:10.1109/ISIT.2015.7282910
Abstract: The secrecy performance of a source-channel model is studied in the context of lossy source compression over a noisy broadcast channel. The source is causally revealed to the eavesdropper during decoding. The fidelity of the transmission to the legitimate receiver and the secrecy performance at the eavesdropper are both measured by a distortion metric. Two achievability schemes using the technique of hybrid coding are analyzed and compared with an operationally separate source-channel coding scheme. A numerical example is provided and the comparison results show that the hybrid coding schemes outperform the operationally separate scheme.
Keywords: broadcast channels; combined source-channel coding; decoding; telecommunication security; wireless channels; achievability schemes; decoding; distortion metric; eavesdropper; hybrid coding; legitimate receiver; lossy source compression; noisy broadcast channel; secrecy performance; source-channel coding scheme; source-channel model; Distortion; Encoding; Optical fiber theory; Rate distortion theory; joint source-channel coding; likelihood encoder; secrecy; wiretap channel (ID#: 15-7687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282910&isnumber=7282397
Yi-Peng Wei; Ulukus, Sennur, “Polar Coding for the General Wiretap Channel,” in Information Theory Workshop (ITW), 2015 IEEE, vol., no., pp. 1–5, April 26 2015–May 1 2015. doi:10.1109/ITW.2015.7133080
Abstract: Information-theoretic work for wiretap channels is mostly based on random coding schemes. Designing practical coding schemes to achieve information-theoretic security is an important problem. By applying two recently developed techniques for polar codes, namely, universal polar coding and polar coding for asymmetric channels, we propose a polar coding scheme to achieve the secrecy capacity of the general wiretap channel.
Keywords: channel capacity; channel coding; codes; radio receivers; radio transmitters; telecommunication security; asymmetric channels; general wiretap channel; information-theoretic security; legitimate receiver; legitimate transmitter; random coding schemes; secrecy capacity; universal polar coding; Decoding; Error probability; Indexes; Manganese; Reliability; Source coding (ID#: 15-7688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133080&isnumber=7133075
Taieb, Mohamed Haj; Chouinard, Jean-Yves, “Enhancing Secrecy of the Gaussian Wiretap Channel using Rate Compatible LDPC Codes with Error Amplification,” in Information Theory (CWIT), 2015 IEEE 14th Canadian Workshop on, vol., no., pp. 41–45, 6–9 July 2015. doi:10.1109/CWIT.2015.7255148
Abstract: This paper proposes a physical layer coding scheme to secure communications over the Gaussian wiretap channel. This scheme is based on non-systematic Rate-Compatible Low-Density-Parity-Check (RC-LDPC) codes. The rate compatibility involves the presence of a feedback channel that allows transmission at the minimum rate required for legitimate successful decoding. Whenever the decoding is unsuccessful, a feedback request is sent back by the intended receiver, favoring the legitimate recipient over an unauthorized receiver (eavesdropper). The proposed coding scheme uses a finer granularity rate compatible code to increase the eavesdropper decoding failure rate. However, finer granularity also implies longer decoding delays. For this reason, a rate estimator based on the wiretap channel capacity is used. For this purpose, a set of packets is sent at once and then successive small packets are added subsequently as needed until successful decoding by the legitimate receiver is achieved. Since the secrecy level can be assessed through the bit error rate (BER) at the unintended receiver, an error amplifier is proposed to convert the loss of only few packets in the wiretap channel into much higher BERs for the eavesdroppers. Simulation results show the secrecy improvements obtained in terms of error amplification with the proposed coding scheme. Negative security gaps can also be achieved at the physical layer.
Keywords: Gaussian channels; channel capacity; channel coding; error statistics; parity check codes; telecommunication security; BER; Gaussian wiretap channel; RC-LDPC codes; bit error rate; eavesdropper decoding failure rate; enhancing secrecy; error amplification; feedback channel; granularity rate compatible code; nonsystematic rate compatible low density parity check codes; physical layer coding scheme; rate compatibility; rate estimator; secure communications; wiretap channel capacity; Bit error rate; Decoding; Encoding; Error probability; Parity check codes; Receivers; Security (ID#: 15-7689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7255148&isnumber=7255133
Stuart, Celine Mary.; Deepthi, P.P., “Hardware Efficient Scheme for Generating Error Vector to Enhance the Performance of Secure Channel Code,” in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, vol., no., pp. 1–5, 19–21 Feb. 2015. doi:10.1109/SPICES.2015.7091564
Abstract: Security, reliability and hardware complexity are the main issues to be addressed in resource constrained devices such as wireless sensor networks (WSNs). Secure channel coding schemes have been developed in literature to reduce the overall processing cost while providing security and reliability. The security of a channel coding scheme against various attacks is mainly decided by the nature of intentional error vectors added to the encoded data. The methods available in literature to generate random error vectors increase the encoding complexity for each message block. Also the error vectors generated are not able to provide much security. A novel method is proposed in this paper to generate intentional error vector with sufficient weight, so that the security of the secure channel code is increased by a large margin without causing any additional encoding complexity. Results show that the proposed model is effective in incorporating security in resource constrained sensor networks.
Keywords: channel coding; cryptography; parity check codes; wireless sensor networks; encoding complexity; error vector generation; intentional error vector; resource constrained sensor network; secure channel code; Complexity theory; Cryptography; Hamming weight; Hardware; Polynomials; Quantum cascade lasers; Cryptosystem; MV attack; QCLDPC; RN attack; ST attack (ID#: 15-7690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091564&isnumber=7091354
Vatedka, Shashank; Kashyap, Navin, “Nested Lattice Codes for Secure Bidirectional Relaying with Asymmetric Channel Gains,” in Information Theory Workshop (ITW), 2015 IEEE, vol., no., pp. 1–5, April 26 2015–May 1 2015. doi:10.1109/ITW.2015.7133151
Abstract: The basic problem of secure bidirectional relaying involves two users who want to exchange messages via an intermediate “honest-but-curious” relay node. There is no direct link between the users; all communication must take place via the relay node. The links between the user nodes and the relay are wireless links with Gaussian noise. It is required that the users’ messages be kept secure from the relay. In prior work, we proposed coding schemes based on nested lattices for this problem, assuming that the channel gains from the two user nodes to the relay are identical. We also analyzed the power-rate tradeoff for secure and reliable message exchange using our coding schemes. In this paper, we extend our prior work to the case when the channel gains are not necessarily identical, and are known to the relay node but perhaps not to the users. We show that using our scheme, perfect secrecy can be obtained only for certain values of the channel gains, and analyze the power-rate tradeoff in these cases. We also make similar observations for our strongly-secure scheme.
Keywords: Gaussian noise; channel coding; wireless channels; asymmetric channel gains; bidirectional relaying security; coding scheme; intermediate honest-but-curious relay node; nested lattice codes; wireless links; AWGN channels; Encoding; Lattices; Relays; Reliability; Security (ID#: 15-7691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133151&isnumber=7133075
Wei Kang; Nan Liu, “A Permutation-Based Code for the Wiretap Channel,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2306–2310, 14–19 June 2015. doi:10.1109/ISIT.2015.7282867
Abstract: In this paper, we propose a permutation-based code for the wiretap channel. We begin with an arbitrary channel code from Alice to Bob and then perform a series of permutations to enlarge the code to achieve secrecy to Eve. We show that the proposed code achieves the same performance as the traditional random code, in the sense that it achieves the random coding bound for the probability of decoding error at Bob and an exponentially vanishing information leakage at Eve. Thus, the permutation-based code we propose offers an alternative method of code construction for the wiretap channel.
Keywords: channel coding; decoding; error statistics; random codes; telecommunication security; arbitrary channel code; decoding error probability; information leakage; permutation-based code; random coding; wiretap channel; Ciphers; Decoding; Electronic mail; Encoding; Iterative decoding; Tin; Zinc (ID#: 15-7692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282867&isnumber=7282397
Deguchi, Kana; Isaka, Motohiko, “Approximate Performance Bound for Coding in Secret Key Agreement from the Gaussian Channel,” in Wireless Communications and Networking Conference (WCNC), 2015 IEEE, vol., no., pp. 458–463, 9–12 March 2015. doi:10.1109/WCNC.2015.7127513
Abstract: We analyze a coding scheme used in secret key agreement based on a noisy resource for physical layer security. We discuss an approximate performance bound for a variant of the asymmetric Slepian-Wolf coding system, or source coding with side information at the decoder. Numerical results indicate that the derived bound provides an accurate prediction of the error probability when the noisy resource is the binary-input Gaussian channel.
Keywords: Gaussian processes; approximation theory; cryptographic protocols; approximate performance bound; asymmetric Slepian-Wolf coding system; binary-input Gaussian channel; decoder; noisy resource; physical layer security; secret key agreement; source coding; Approximation methods; Conferences; Decoding; Encoding; Error probability; Noise measurement; Upper bound (ID#: 15-7693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127513&isnumber=7127309
Bustin, Ronit; Schaefer, Rafael F.; Poor, H. Vincent; Shamai, Shlomo, “On MMSE Properties of Codes for the Gaussian Broadcast Channel with Confidential Messages,” in Communication Workshop (ICCW), 2015 IEEE International Conference on, vol., no., pp. 441–446, 8–12 June 2015. doi:10.1109/ICCW.2015.7247219
Abstract: This work examines the properties of code sequences for the degraded scalar Gaussian broadcast channel with confidential messages (BCC) in terms of the behavior of the mutual information and minimum mean-square error (MMSE) functions for all signal-to-noise ratios (SNRs). More specifically, the work focuses on both completely secure code sequences, meaning that the transmitted message to the stronger receiver, i.e., with higher SNR, is completely secure in the equivocation sense, and code sequences with maximum equivocation. In these two cases an alternative converse proof is provided which also depicts the exact behavior of the relevant MMSE functions for all SNRs, for “good”, capacity achieving, code sequences. This means that the amount of disturbance on unintended receivers that is inflicted by “good” code sequences is fully depicted. Moreover, the work also considers the effect that MMSE constraints, which limit the amount of disturbance on some unintended receiver, have on the capacity region of a completely secure code sequence. For maximum rate-pairs complying with the MMSE constraint, the behavior of the relevant MMSE functions for all SNRs is fully depicted.
Keywords: Gaussian channels; broadcast channels; codes; least mean squares methods; telecommunication security; BCC; MMSE properties; SNR; capacity region; confidential messages; degraded scalar Gaussian broadcast channel; maximum equivocation; maximum rate-pairs; minimum mean-square error; secure code sequence; signal-to-noise ratios; unintended receivers; Mutual information; Receivers; Reliability theory; Security; Signal to noise ratio; Wireless communication (ID#: 15-7694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247219&isnumber=7247062
Ting-Ya Yang; Houshou Chen, “Graph Realization of Reed-Muller Codes for Data Hiding,” in Next-Generation Electronics (ISNE), 2015 International Symposium on, vol., no., pp. 1–4, 4–6 May 2015. doi:10.1109/ISNE.2015.7131977
Abstract: In recent years the information industry has developed vigorously, and advances in technology have driven the thriving growth of the internet, over which people exchange messages in large numbers, raising the issue of information security. To protect the safety and reliability of message passing, steganography has been developed. This research focuses on embedding: the main questions investigated are how to preserve the quality of the host after embedding a secret message, that is, how to lower its distortion, and how to increase the embedding efficiency so that more messages can be sent. The problem of complexity must also be considered, as an overly complicated algorithm is not feasible. This thesis studies data hiding in binary host images, applying the Reed-Muller codes, a family of linear block codes among the error-correcting codes, to steganography. A decoding algorithm is presented, and simulation analysis and discussion address the embedding rate and embedding efficiency.
Keywords: Reed-Muller codes; graph theory; image processing; steganography; Internet; binary host image; data hiding; graph realization; linear block codes; message passing; steganography; Algorithm design and analysis; Decoding; Distortion; Iterative decoding; Receivers; Sum product algorithm; Channel coding; Data Hiding; Min-sum algorithm; Reed-Miller Codes; Steganography; Sum-product algorithm (ID#: 15-7695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131977&isnumber=7131937
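The syndrome-coding (matrix embedding) principle behind code-based data hiding can be shown with the smallest classical example, a [7,4] Hamming code: flip at most one host bit so that the syndrome of the stego vector equals the 3 secret bits, and the recipient extracts the message simply by recomputing the syndrome. The paper uses Reed-Muller codes with graph-based (min-sum/sum-product) decoding; the Hamming code below only illustrates the embedding idea:

```python
# Parity-check matrix H of the [7,4] Hamming code: column j is the
# 3-bit binary expansion of j+1 (row 0 holds the least significant bit).
H = [[(j + 1) >> k & 1 for j in range(7)] for k in range(3)]

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def embed(host, msg):
    """Syndrome (matrix) embedding: flip at most one of the 7 host bits
    so that the stego vector's syndrome equals the 3 secret bits."""
    s = [a ^ b for a, b in zip(syndrome(host), msg)]
    idx = s[0] + 2 * s[1] + 4 * s[2]   # column whose binary value is s
    stego = host[:]
    if idx:
        stego[idx - 1] ^= 1            # flipping column idx adds s to the syndrome
    return stego

host = [1, 0, 1, 1, 0, 0, 1]    # 7 cover bits taken from the host image
msg = [1, 0, 1]                 # 3 secret bits
stego = embed(host, msg)        # extraction is just syndrome(stego)
```

Hiding 3 bits per 7 cover bits while changing at most one of them is exactly the distortion/embedding-efficiency trade-off the abstract discusses; longer Reed-Muller codes push that trade-off further.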
Gulcu, T.C.; Barg, A., “Achieving Secrecy Capacity of the Wiretap Channel and Broadcast Channel with a Confidential Component,” in Information Theory Workshop (ITW), 2015 IEEE, vol., no., pp. 1–5, April 26 2015–May 1 2015. doi:10.1109/ITW.2015.7133098
Abstract: We show that capacity of the general (not necessarily degraded or symmetric) wiretap channel under a “strong secrecy constraint” can be achieved using an explicit scheme based on polar codes. We also extend our construction to the case of broadcast channels with confidential messages defined by Csiszár and Körner, achieving the entire capacity region of this communication model. This submission is an extended abstract of the paper by the same authors (see arXiv:1410.3422).
Keywords: broadcast channels; codes; telecommunication security; wireless channels; broadcast channel; communication model; polar codes; secrecy capacity; secrecy constraint; wiretap channel; Channel coding; Decoding; Receivers; Reliability; Security; Transmitters (ID#: 15-7696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133098&isnumber=7133075
Son Hoang Dau; Wentu Song; Chau Yuen, “Weakly Secure MDS Codes for Simple Multiple Access Networks,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1941–1945, 14–19 June 2015. doi:10.1109/ISIT.2015.7282794
Abstract: We consider a simple multiple access network (SMAN), where k sources of unit rates transmit their data to a common sink via n relays. Each relay is connected to the sink and to certain sources. A coding scheme (for the relays) is weakly secure if a passive adversary who eavesdrops on fewer than k relay-sink links cannot reconstruct the data from each source. We show that there exists a weakly secure maximum distance separable (MDS) coding scheme for the relays if and only if every subset of ℓ relays is collectively connected to at least ℓ+1 sources, for all 0 < ℓ < k. Moreover, we prove that this condition can be verified in polynomial time in n and k. Finally, given a SMAN satisfying the aforementioned condition, we provide another polynomial time algorithm to trim the network until it has a sparsest set of source-relay links that still supports a weakly secure MDS coding scheme.
Keywords: codes; computational complexity; multi-access systems; radio access networks; relay networks (telecommunication); set theory; telecommunication security; SMAN; common sink; polynomial time algorithm; simple multiple access networks; source-relay links; unit rates; weakly secure MDS codes; weakly secure maximum distance separable coding scheme; Channel coding; Error correction codes; Network coding; Polynomials; Relays; Security (ID#: 15-7697)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282794&isnumber=7282397
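The connectivity condition in the abstract above (every subset of ℓ relays collectively connected to at least ℓ+1 sources, for all 0 < ℓ < k) is concrete enough to check directly. The sketch below is an illustrative brute-force test over relay subsets, not the paper's polynomial-time algorithm; the toy network and names are hypothetical.

```python
from itertools import combinations

def supports_weakly_secure_mds(relay_sources, k):
    # Condition from the abstract: every subset of l relays (0 < l < k)
    # must be collectively connected to at least l + 1 sources.
    # relay_sources[i] is the set of sources that relay i is connected to.
    n = len(relay_sources)
    for l in range(1, k):
        for subset in combinations(range(n), l):
            covered = set().union(*(relay_sources[i] for i in subset))
            if len(covered) < l + 1:
                return False
    return True

# Hypothetical SMAN: k = 3 unit-rate sources {0, 1, 2}, n = 4 relays.
relays = [{0, 1}, {1, 2}, {0, 2}, {0, 1, 2}]
print(supports_weakly_secure_mds(relays, k=3))  # True: every relay sees 2 sources, every pair sees all 3
```

The brute force is exponential in the number of relays; the paper's contribution includes a verification procedure polynomial in n and k.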
Mojahedian, M.M.; Gohari, A.; Aref, M.R., “Perfectly Secure Index Coding,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1432–1436, 14–19 June 2015. doi:10.1109/ISIT.2015.7282692
Abstract: In this paper, we investigate the index coding problem in the presence of an eavesdropper. Messages are to be sent from one transmitter to a number of legitimate receivers who have side information about the messages, and share a set of secret keys with the transmitter. We assume perfect secrecy, meaning that the eavesdropper should not be able to retrieve any information about the message set. This problem is a generalization of the Shannon’s cipher system. We study the minimum key lengths for zero-error and perfectly secure index coding problems.
Keywords: encoding; private key cryptography; radio receivers; radio transmitters; legitimate receivers; perfectly secure index coding; radio transmitter; secret keys; side information; Channel coding; Indexes; Network coding; Receivers; Transmitters; Index coding; Shannon cipher system; common and private keys; perfect secrecy; zero-error communication (ID#: 15-7698)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282692&isnumber=7282397
Bracher, A.; Hof, E.; Lapidoth, A., “Guessing Attacks on Distributed-Storage Systems,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1585–1589, 14–19 June 2015. doi:10.1109/ISIT.2015.7282723
Abstract: We study the secrecy of a distributed-storage system for passwords. The encoder, Alice, observes a length-n password and describes it using δ s-bit hints, which she stores in different locations. The legitimate receiver, Bob, observes ν of those hints. In one scenario we require that the expected number of guesses it takes Bob to guess the password approach 1 as n tends to infinity, and in the other that the expected size of the shortest list that Bob must form to guarantee that it contain the password approach 1. The eavesdropper, Eve, sees η < ν hints. Assuming that Alice cannot control which hints Bob and Eve observe, we characterize for each scenario the largest normalized (by n) exponent that we can guarantee for the expected number of guesses it takes Eve to guess the password.
Keywords: computer network security; digital storage; encoding; distributed storage systems; guessing attacks; length-n password; Channel coding; Cryptography; Decoding; Entropy; Receivers; Zinc (ID#: 15-7699)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282723&isnumber=7282397
Bassi, G.; Piantanida, P.; Shamai, S., “On the Capacity of the Wiretap Channel with Generalized Feedback,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1154–1158, 14–19 June 2015. doi:10.1109/ISIT.2015.7282636
Abstract: It is well-known that feedback does not increase the capacity of point-to-point memoryless channels, however, its effect in secure communications is not fully understood yet. In this work, an achievable scheme for the wiretap channel with generalized feedback—based on joint source-channel coding—is presented. This scheme recovers previous results, thus it can be seen as a generalization and unification of several results in the field. Additionally, the Gaussian wiretap channel with noisy feedback is analyzed, and the scheme achieves positive secrecy rates even in unfavorable situations where the eavesdropper experiences a much better channel than the legitimate user.
Keywords: Gaussian channels; channel coding; memoryless systems; source coding; telecommunication security; Gaussian wiretap channel; generalized feedback; joint source-channel coding; noisy feedback; point-to-point memoryless channel; wiretap channel capacity; Decoding; Encoding; Joints; Manganese; Noise; Noise measurement; Zinc (ID#: 15-7700)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282636&isnumber=7282397
Shaofeng Zou; Yingbin Liang; Lifeng Lai; Shlomo Shamai, “Rate Splitting and Sharing for Degraded Broadcast Channel with Secrecy Outside a Bounded Range,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1357–1361, 14–19 June 2015. doi:10.1109/ISIT.2015.7282677
Abstract: A four-receiver degraded broadcast channel with secrecy outside a bounded range is studied, over which a transmitter sends four messages to four receivers. In the model considered, the channel quality gradually degrades from receiver 4 to receiver 1, and receiver k is required to decode the first k messages for k = 1, ..., 4. Furthermore, message 3 is required to be secured from receiver 1, and message 4 is required to be secured from receivers 1 and 2. The secrecy capacity region is established. The achievable scheme includes not only superposition, binning and embedded coding used in previous studies, but also rate splitting and sharing particularly designed for this model, which is shown to be critical to further enlarge the achievable region and enable the development of the converse proof.
Keywords: broadcast channels; channel coding; decoding; electronic messaging; radio receivers; radio transmitters; binning coding; bounded range; channel quality gradual degradation; embedded coding; message decoding; radio transmitter; rate sharing; rate splitting; receiver degraded broadcast channel secrecy; secrecy capacity region; superposition coding; Decoding; Electronic mail; Encoding; Indexes; Receivers; Security; Broadcast channel; secrecy capacity (ID#: 15-7701)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282677&isnumber=7282397
Ligong Wang; Wornell, Gregory W.; Lizhong Zheng, “Limits of Low-Probability-of-Detection Communication over a Discrete Memoryless Channel,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2525–2529, 14–19 June 2015. doi:10.1109/ISIT.2015.7282911
Abstract: This paper considers the problem of communication over a discrete memoryless channel subject to the constraint that the probability that an adversary who observes the channel outputs can detect the communication is low. Specifically, the relative entropy between the output distributions when a codeword is transmitted and when no input is provided to the channel must be sufficiently small. For a channel whose output distribution induced by the zero input symbol is not a mixture of the output distributions induced by other input symbols, it is shown that the maximum number of bits that can be transmitted under this criterion scales like the square root of the blocklength. Exact expressions for the scaling constant are also derived.
Keywords: channel coding; entropy codes; signal detection; steganography; codeword transmission; discrete memoryless channel; entropy; low-probability-of-detection communication limits; scaling constant; zero input symbol; AWGN channels; Channel capacity; Memoryless systems; Receivers; Reliability theory; Transmitters; Fisher information; Low probability of detection; covert communication; information-theoretic security (ID#: 15-7702)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282911&isnumber=7282397
Hao Ge; Ruijie Xu; Berry, Randall A., “Secure Signaling Games for Gaussian Multiple Access Wiretap Channels,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 111–115, 14–19 June 2015. doi:10.1109/ISIT.2015.7282427
Abstract: A Gaussian multiple access wire-tap channel with confidential messages is studied, where multiple users attempt to transmit private messages to a legitimate receiver in the presence of an eavesdropper. While prior work focused on the case where the users were cooperative, we assume that each user is selfish and so are modeled as playing a non-cooperative game. We assume all users send a superposition of two Gaussian codebooks: one for their confidential messages and one for “filling” the eavesdropper’s channel. For such a scheme, we give a characterization of the achievable rate region defined by Tekin and Yener using polymatroid properties. We then use this to find the Nash equilibrium region for this non-cooperative game. Furthermore, we give algorithms for finding the best and worst Nash equilibria for a given channel.
Keywords: Gaussian processes; channel coding; game theory; multi-access systems; telecommunication security; telecommunication signaling; Gaussian codebooks; Gaussian multiple access wiretap channels; Nash equilibrium region; confidential messages; eavesdropper channel; noncooperative game; polymatroid properties; private messages; secure signaling games; Face; Games; Mathematical model; Nash equilibrium; Receivers; Resource management; Security (ID#: 15-7703)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282427&isnumber=7282397
Karmakar, S.; Ghosh, A., “Approximate Secrecy Capacity Region of an Asymmetric MAC Wiretap Channel Within 1/2 Bits,” in Information Theory (CWIT), 2015 IEEE 14th Canadian Workshop on, vol., no., pp. 88–92, 6–9 July 2015. doi:10.1109/CWIT.2015.7255159
Abstract: We consider a 2-user Gaussian Multiple-Access Wiretap channel (GMAC-WT), with the Eavesdropper (E) having access to the signal from only one (say, T2) of the two transmitters. We characterize the capacity region of this channel approximately within 0.5 bits, where the approximation is in terms of only T1's rate or the sum-rate, depending on the relative strength of the eavesdropper's channel. However, the approximation is 0.5 bits independent of channel coefficients or operating Signal-to-Noise Ratios (SNR). To prove this approximate result we propose two different coding schemes, namely, the power adaptation and the time sharing coding schemes, and derive their corresponding achievable rate regions. Both of them use Gaussian input distribution. To establish the approximate capacity, we first derive supersets to the capacity region and then show that corresponding to each rate pair at the boundary of these supersets there exists an achievable rate pair in one of the aforementioned achievable rate regions which is within 1/2 bits of the former pair. In comparison to a very recent result (Xie and Ulukus, ISIT 2013) on a GMAC-WT showing the requirement of interference alignment (IA) to achieve even the degrees of freedom performance, the result of this paper is surprising: our channel model is an interesting variation of the GMAC-WT for which IA is not necessary and Gaussian signalling is sufficient to achieve the entire capacity region within 0.5 bits.
Keywords: Gaussian channels; channel capacity; encoding; multi-access systems; multiuser channels; telecommunication security; Gaussian input distribution; Gaussian multiple access wiretap channel; approximate secrecy capacity region; asymmetric MAC wiretap channel; eavesdropper channel; power adaptation codes; time sharing codes; Approximation methods; Channel models; Conferences; Encoding; Noise measurement; Transmitters (ID#: 15-7704)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7255159&isnumber=7255133
Dey, B.K.; Jaggi, S.; Langberg, M., “Sufficiently Myopic Adversaries Are Blind,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1164–1168, 14–19 June 2015. doi:10.1109/ISIT.2015.7282638
Abstract: In this work we consider the communication setting in which a sender, Alice, wishes to communicate with a receiver, Bob, over a channel controlled by an adversarial entity, Calvin, who is myopic. Roughly speaking, for blocklength n, the codeword Xn transmitted by Alice is corrupted by Calvin, who must base his adversarial decisions (which characters of Xn to corrupt and how to corrupt them) not on the entire view of the codeword Xn but on Zn, the image of Xn through a noisy memoryless channel. More specifically, our communication model may be described by two channels: a memoryless channel p(z|x) from Alice to Calvin, and an arbitrarily varying channel from Alice to Bob, p(y|x, s), governed by a state sequence Sn determined by Calvin. In standard adversarial channels, the states Sn may depend on the codeword Xn; however, in our setting Sn depends only on Calvin's view Zn. The myopic channel captures a broad range of channels and bridges between the standard models of memoryless and adversarial (zero error) channels. In this work we present upper and lower bounds on the capacity of myopic channels. For a number of special cases of interest we show that our bounds are tight. We extend our results to the setting of secure communication in which we require that the transmitted message remain secret from Calvin. For example, we show that if (i) Calvin may flip at most a p fraction of the bits communicated between Alice and Bob, and (ii) Calvin views Xn through a binary symmetric channel with parameter q, then once Calvin is "sufficiently myopic" (in this case, when q > p), the optimal communication rate is that of an adversary who is "blind" (that is, an adversary who does not see Xn at all), which is 1-H(p) for standard communication and H(q)-H(p) for secure communication. A similar phenomenon exists for our general model of communication.
Keywords: channel coding; radio receivers; telecommunication control; telecommunication security; adversarial channels; adversarial decisions; adversarial entity; binary symmetric channel; channel control; codeword; communication model; myopic adversaries; myopic channel; noisy memoryless channel; optimal communication rate; secure communication; Channel capacity; Decoding; Encoding; Memoryless systems; Tin; Zinc; Arbitrarily Varying Channels; Information Theoretic Secrecy; Myopic Jamming (ID#: 15-7705)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282638&isnumber=7282397
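The closed-form rates quoted in the abstract above (1-H(p) for standard and H(q)-H(p) for secure communication once q > p) are easy to evaluate numerically. A minimal sketch, assuming binary entropy in bits; the example values p = 0.1 and q = 0.25 are arbitrary illustrations:

```python
import math

def h2(x):
    # Binary entropy in bits, with H(0) = H(1) = 0.
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def blind_adversary_rates(p, q):
    # Rates stated in the abstract for a sufficiently myopic adversary:
    # 1 - H(p) for standard communication, H(q) - H(p) for secure.
    assert q > p, "the abstract's 'sufficiently myopic' regime needs q > p"
    return 1 - h2(p), h2(q) - h2(p)

standard, secure = blind_adversary_rates(p=0.1, q=0.25)
print(round(standard, 4), round(secure, 4))
```

Note that when q = 1/2 (Calvin's observation is pure noise), H(q) = 1 and the secure rate H(q)-H(p) collapses to the standard rate 1-H(p), as one would expect.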
Babaheidarian, P.; Salimi, S., “Compute-and-Forward Can Buy Secrecy Cheap,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2475–2479, 14–19 June 2015. doi:10.1109/ISIT.2015.7282901
Abstract: We consider a Gaussian multiple access channel with K transmitters, an (intended) receiver and an external eavesdropper. The transmitters wish to reliably communicate with the receiver while concealing their messages from the eavesdropper. This scenario has been investigated in prior works using two different coding techniques; the random i.i.d. Gaussian coding and the signal alignment coding. Although the latter offers promising results in a very high SNR regime, extending these results to the finite SNR regime is a challenging task. In this paper, we propose a new lattice alignment scheme based on the compute-and-forward framework which works at any finite SNR. We show that our achievable secure sum rate scales with log(SNR) and hence, in most SNR regimes, our scheme outperforms the random coding scheme in which the secure sum rate does not grow with power. Furthermore, we show that our result matches the prior work in the infinite SNR regime. Additionally, we analyze our result numerically.
Keywords: Gaussian channels; random codes; telecommunication security; Gaussian multiple access channel; achievable secure sum rate scales; coding techniques; compute-and-forward lattice alignment; finite signal-to-noise ratio; random Gaussian coding; signal alignment coding; Channel models; Decoding; Encoding; Lattices; Receivers; Security; Signal to noise ratio (ID#: 15-7706)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282901&isnumber=7282397
Wiese, M.; Nötzel, J.; Boche, H., “The Arbitrarily Varying Wiretap Channel — Communication Under Uncoordinated Attacks,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2146–2150, 14–19 June 2015. doi:10.1109/ISIT.2015.7282835
Abstract: We give a complete characterization of the secrecy capacity of arbitrarily varying wiretap channels (AVWCs) with correlated random coding under a strong secrecy criterion where the eavesdropper may also know the correlated randomness. We obtain that the correlated random coding secrecy capacity is continuous as a function of the AVWC. We show that the deterministic coding secrecy capacity of the AVWC either equals 0 or the correlated random coding secrecy capacity. For the case that only a weak secrecy criterion is applied, a complete characterization of the corresponding secrecy capacity for deterministic codes is possible. In the proof of the secrecy capacity formula for correlated random codes, we apply an auxiliary channel which is compound from the sender to the intended receiver and varies arbitrarily from the sender to the eavesdropper. We discuss the relation between the usual mutual information secrecy criterion and a criterion formulated in terms of total variation distance, and investigate the robustness of the AVWC model.
Keywords: random codes; telecommunication channels; telecommunication security; AVWC model; arbitrarily varying wiretap channels; eavesdropper; random coding secrecy capacity; uncoordinated attacks; Compounds; Encoding; Jamming; Mutual information; Privacy; Receivers; Robustness (ID#: 15-7707)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282835&isnumber=7282397
Schaefer, R.F.; Khisti, A.; Poor, H.V., “How to Use Independent Secret Keys for Secure Broadcasting of Common Messages,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1971–1975, 14–19 June 2015. doi:10.1109/ISIT.2015.7282800
Abstract: The broadcast channel with independent secret keys is studied. In this scenario, a common message has to be securely broadcast to two legitimate receivers in the presence of an eavesdropper. The transmitter shares with each legitimate receiver an independent secret key of arbitrary rate. These keys can either be used as one-time pads to encrypt the common message or can be interpreted as fictitious messages used as randomization resources for wiretap coding. Both approaches are discussed and the secrecy capacity is derived for various cases. Depending on the qualities of the legitimate and eavesdropper channels, either a one-time pad, wiretap coding, or a combination of both turns out to be capacity-achieving.
Keywords: broadcast channels; broadcast communication; encoding; private key cryptography; radio receivers; telecommunication security; broadcast channel; eavesdropper channels; independent secret keys; randomization resources; secrecy capacity; secure broadcasting; wiretap coding; Cryptography; Decoding; Encoding; Receivers; Transmitters; Zinc (ID#: 15-7708)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282800&isnumber=7282397
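As background for the abstract above, the one-time-pad use of a shared key is the classical XOR construction; a minimal sketch (illustrative only, not the paper's capacity-achieving broadcast scheme):

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    # XOR one-time pad: information-theoretically secure when the key is
    # uniformly random, at least as long as the message, and never reused.
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"common message"
key = secrets.token_bytes(len(msg))
ciphertext = otp(msg, key)
assert otp(ciphertext, key) == msg  # decryption is the same XOR
```

In the paper's setting the interesting regime is limited key rates, where pure one-time-pad encryption must be combined with, or replaced by, wiretap coding depending on the channel qualities.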
Si-Hyeon Lee; Khisti, Ashish, “The Degraded Gaussian Diamond-Wiretap Channel,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 106–110, 14–19 June 2015. doi:10.1109/ISIT.2015.7282426
Abstract: In this paper, we present nontrivial upper and lower bounds on the secrecy capacity of the degraded Gaussian diamond-wiretap channel and identify several ranges of channel parameters where these bounds coincide with useful intuitions. Furthermore, we investigate the effect of the presence of an eavesdropper on the capacity. We consider the following two scenarios regarding the availability of randomness: 1) a common randomness is available at the source and the two relays and 2) a randomness is available only at the source and there is no available randomness at the relays. We obtain the upper bound by taking into account the correlation between the two relay signals and the availability of randomness at each encoder. For the lower bound, we propose two types of coding schemes: 1) a decode-and-forward scheme where the relays cooperatively transmit the message and the fictitious message and 2) a partial DF scheme incorporated with multicoding in which each relay sends an independent partial message and the whole or partial fictitious message using dependent codewords.
Keywords: Gaussian channels; channel capacity; decode and forward communication; relays; telecommunication security; channel parameters; common randomness; decode-and-forward scheme; degraded Gaussian diamond-wiretap channel; dependent codewords; independent partial message; multicoding; nontrivial lower bounds; nontrivial upper bounds; partial DF scheme; secrecy capacity; source randomness; Diamonds; Encoding; Relays; Resource description framework; Wiretap channel; diamond channel (ID#: 15-7709)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282426&isnumber=7282397
Yanling Chen; Koyluoglu, O. Ozan; Sezgin, Aydin, “On the Individual Secrecy Rate Region for the Broadcast Channel with an External Eavesdropper,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1347–1351, 14–19 June 2015. doi:10.1109/ISIT.2015.7282675
Abstract: This paper studies the problem of secure communication over broadcast channels under the lens of individual secrecy constraints (i.e., information leakage from each message to an eavesdropper is made vanishing). It is known that, for the communication over the degraded broadcast channels, the stronger receiver is able to decode the message of the weaker receiver. In the individual secrecy setting, the message for the weaker receiver can be further utilized to secure the partial message that is intended to the stronger receiver. With such a coding spirit, it is shown that more secret bits can be conveyed to the stronger receiver. In particular, for the corresponding Gaussian model, a constant gap (i.e., 0.5 bits within the individual secrecy capacity region) result is obtained. Overall, when compared with the joint secrecy constraint, the results allow for trading-off secrecy level and throughput in the system.
Keywords: broadcast channels; encoding; telecommunication security; Gaussian model; degraded broadcast channel; external eavesdropper; individual secrecy constraints; individual secrecy rate; information leakage; partial message security; secure communication; Decoding; Encoding; Entropy; Joints; Markov processes; Receivers; Zinc (ID#: 15-7710)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282675&isnumber=7282397
Balmahoon, R.; Ling Cheng, “Information Leakage of Heterogeneous Encoded Correlated Sequences over an Eavesdropped Channel,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2949–2953, 14–19 June 2015. doi:10.1109/ISIT.2015.7282997
Abstract: Correlated sources are present in communication systems where protocols ensure that there is some predetermined information for sources. Here, correlated sources transmitted across an eavesdropped channel with a heterogeneous encoding scheme are investigated, along with their effect on the information leakage when some channel information and a source have been wiretapped. Information leakage bounds for this scenario are provided. Further, an implementation method using a matrix partition approach is described.
Keywords: encoding; telecommunication security; correlated sources; eavesdropped channel; heterogeneous encoded correlated sequence; information leakage; matrix partition; wiretapped source; Decoding; Receivers; Security; Source coding; Uncertainty (ID#: 15-7711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282997&isnumber=7282397
Jingbo Liu; Cuff, Paul; Verdu, Sergio, “Resolvability in Eγ with Applications to Lossy Compression and Wiretap Channels,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 755–759, 14–19 June 2015. doi:10.1109/ISIT.2015.7282556
Abstract: We study the amount of randomness needed for an input process to approximate a given output distribution of a channel in the Eγ distance. A general one-shot achievability bound for the precision of such an approximation is developed. In the i.i.d. setting where γ = exp(nE), a (nonnegative) randomness rate above inf_{Q_U: D(Q_X||π_X) ≤ E} {D(Q_X||π_X) + I(Q_U, Q_{X|U}) − E} is necessary and sufficient to asymptotically approximate the output distribution π_X^⊗n using the channel Q_{X|U}^⊗n, where Q_U → Q_{X|U} → Q_X. The new resolvability result is then used to derive a one-shot upper bound on the error probability in the rate distortion problem, and a lower bound on the size of the eavesdropper list to include the actual message in the wiretap channel problem. Both bounds are asymptotically tight in i.i.d. settings.
Keywords: approximation theory; compressed sensing; distortion; error statistics; telecommunication security; eavesdropper list; error probability; lossy compression; one-shot achievability bound; rate distortion problem; wiretap channels; Approximation methods; Distortion; Entropy; Measurement; Memoryless systems; Source coding; TV (ID#: 15-7712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282556&isnumber=7282397
Papadopoulos, A.; Czap, L.; Fragouli, C., “LP Formulations for Secrecy over Erasure Networks with Feedback,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 954–958, 14–19 June 2015. doi:10.1109/ISIT.2015.7282596
Abstract: We design polynomial time schemes for secure message transmission over arbitrary networks, in the presence of an eavesdropper, and where each edge corresponds to an erasure channel with public feedback. Our schemes are described through linear programming (LP) formulations, which explicitly select (possibly different) sets of paths for key generation and message sending. Although our LPs are not always capacity-achieving, they outperform the best known alternatives in the literature, and extend to incorporate several interesting scenarios.
Keywords: cryptography; linear programming; telecommunication channels; telecommunication security; LP formulation; arbitrary network; erasure network; feedback; message sending; polynomial time scheme; secure message transmission; Automatic repeat request; Complexity theory; Encryption; Network coding (ID#: 15-7713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282596&isnumber=7282397
Cybersecurity Education 2015 |
As a discipline in higher education, cybersecurity is less than two decades old. But because of the large number of qualified professionals needed, many universities offer cybersecurity education in a variety of delivery formats—live, online, and hybrid. Much of the curriculum has been driven by NSTISSI standards written in the early 1990s. The articles cited here look at aspects of curriculum, methods, evaluation, and support technologies. They were published in 2015.
Salah, K.; Hammoud, M.; Zeadally, S., “Teaching Cybersecurity Using the Cloud,” in Learning Technologies, IEEE Transactions on, vol. 8, no. 4, pp. 383–392, Oct.–Dec. 1 2015. doi:10.1109/TLT.2015.2424692
Abstract: Cloud computing platforms can be highly attractive to conduct course assignments and empower students with valuable and indispensable hands-on experience. In particular, the cloud can offer teaching staff and students (whether local or remote) on-demand, elastic, dedicated, isolated, (virtually) unlimited, and easily configurable virtual machines. As such, employing cloud-based laboratories can have clear advantages over using classical ones, which impose major hindrances against fulfilling pedagogical objectives and do not scale well when the number of students and distant university campuses grows. We show how the cloud paradigm can be leveraged to teach a cybersecurity course. Specifically, we share our experience when using cloud computing to teach a senior course on cybersecurity across two campuses via a virtual classroom equipped with live audio and video. Furthermore, based on this teaching experience, we propose guidelines that can be applied to teach similar computer science and engineering courses. We demonstrate how cloud-based laboratory exercises can greatly help students in acquiring crucial cybersecurity skills as well as cloud computing ones, which are in high demand nowadays. The cloud we used for this course was the Amazon Web Services (AWS) public cloud. However, our presented use cases and approaches are equally applicable to other available cloud platforms such as Rackspace and Google Compute Engine, among others.
Keywords: Web services; cloud computing; computer science education; educational courses; security of data; teaching; virtual machines; AWS public cloud; Amazon Web Services public cloud; Google Compute Engine; Rackspace; cloud computing platforms; cloud-based laboratories; computer engineering courses; computer science courses; cybersecurity; teaching; virtual classroom; Cloud computing; Computer crime; Computer security; Education; Network security; Amazon AWS; Cloud Computing; Computer Security; Cybersecurity; Education; Network Security; computer security; education; network security (ID#: 15-7714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7089256&isnumber=4620077
Mishra, S.; Raj, R.K.; Romanowski, C.J.; Schneider, J.; Critelli, A., “On Building Cybersecurity Expertise in Critical Infrastructure Protection,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no.,
pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225263
Abstract: Cybersecurity professionals need training in critical infrastructure protection (CIP) to prepare them for solving problems in design, implementation, and maintenance of infrastructure assets. However, two major roadblocks exist: (1) the lack of necessary skills sets and (2) the frequent need for updates due to rapid changes in computing disciplines. To address these issues and build the needed expertise, this paper proposes a flexible training framework for integrating CIP into cybersecurity training. The foundation of this framework is a set of self-contained training modules; each module is a distinct unit for use by an instructor. Modules are meant to be integrated at different levels, with subsequent modules building on those presented earlier. As these modules are designed for frequent updating and/or replacement, the proposed approach is flexible. This paper develops the generalized CIP module-based training framework and outlines sample introductory and advanced training modules.
Keywords: computer science education; critical infrastructures; national security; security of data; training; CIP module-based training framework; building cybersecurity expertise; critical infrastructure protection; cybersecurity training; flexible training framework; infrastructure asset maintenance; self-contained training module; skill set; Computer security; Network topology; Routing protocols; Topology; Training; Wireless communication; cybersecurity; training modules (ID#: 15-7715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225263&isnumber=7190491
Dark, M.; Mirkovic, J., “Evaluation Theory and Practice Applied to Cybersecurity Education,” in Security & Privacy, IEEE,
vol. 13, no. 2, pp. 75–80, Mar.–Apr. 2015. doi:10.1109/MSP.2015.27
Abstract: As more institutions, organizations, schools, and programs launch cybersecurity education programs in an attempt to meet needs that are emerging in a rapidly changing environment, evaluation will be important to ensure that programs are having the desired impact.
Keywords: educational institutions; security of data; cybersecurity education programs; cybersecurity environment; evaluation theory; schools; Computer security; Design methodology; Game theory; Performance evaluation; Program logic; Reliability; cybersecurity; evaluation design; formative evaluation; measurement; metrics; program logic; reliability; summative evaluation; validity (ID#: 15-7716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085972&isnumber=7085640
Tunc, Cihan; Hariri, Salim; Montero, Fabian De La Peña; Fargo, Farah; Satam, Pratik, “CLaaS: Cybersecurity Lab as a Service—Design, Analysis, and Evaluation,” in Cloud and Autonomic Computing (ICCAC), 2015 International Conference on, vol., no., pp. 224–227, 21–25 Sept. 2015. doi:10.1109/ICCAC.2015.34
Abstract: The explosive growth of IT infrastructures, cloud systems, and the Internet of Things (IoT) has resulted in complex systems that are extremely difficult to secure and protect against cyberattacks, which are growing exponentially in both complexity and number. Overcoming these cybersecurity challenges requires environments that support the development of innovative cybersecurity algorithms and the evaluation of experiments. In this paper, we present the design, analysis, and evaluation of the Cybersecurity Lab as a Service (CLaaS), which offers virtual cybersecurity experiments as a cloud service that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. We exploit cloud computing systems and virtualization technologies to provide isolated and virtual cybersecurity experiments covering vulnerability exploitation, launching cyberattacks, hardening cyber resources and services, etc. We also present our performance evaluation and the effectiveness of CLaaS experiments as used by students.
Keywords: Cloud computing; Computer crime; IP networks; Servers; Virtualization; CLaaS; cybersecurity; education; virtual lab; virtualization (ID#: 15-7717)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312161&isnumber=7312127
Tunc, Cihan; Hariri, Salim; Montero, Fabian De La Peña; Fargo, Farah; Satam, Pratik; Al-Nashif, Youssif, “Teaching and Training Cybersecurity as a Cloud Service,” in Cloud and Autonomic Computing (ICCAC), 2015 International Conference on, vol., no.,
pp. 302–308, 21–25 Sept. 2015. doi:10.1109/ICCAC.2015.47
Abstract: The explosive growth of IT infrastructures, cloud systems, and Internet of Things (IoT) have resulted in complex systems that are extremely difficult to secure and protect against cyberattacks which are growing exponentially in complexity and in number. Overcoming the cybersecurity challenges is even more complicated due to the lack of training and widely available cybersecurity environments to experiment with and evaluate new cybersecurity methods. The goal of our research is to address these challenges by exploiting cloud services. In this paper, we present the design, analysis, and evaluation of a cloud service that we refer to as Cybersecurity Lab as a Service (CLaaS) which offers virtual cybersecurity experiments that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. In CLaaS, we exploit cloud computing systems and virtualization technologies to provide virtual cybersecurity experiments and hands-on experiences on how vulnerabilities are exploited to launch cyberattacks, how they can be removed, and how cyber resources and services can be hardened or better protected. We also present our experimental results and evaluation of CLaaS virtual cybersecurity experiments that have been used by graduate students taking our cybersecurity class as well as by high school students participating in GenCyber camps.
Keywords: Cloud computing; Computer crime; Network interfaces; Protocols; Servers; CLaaS; and cloud computing; cybersecurity experiments; education; virtual cloud services; virtualization (ID#: 15-7718)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312173&isnumber=7312127
Samtani, S.; Chinn, R.; Hsinchun Chen, “Exploring Hacker Assets in Underground Forums,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 31–36, 27–29 May 2015. doi:10.1109/ISI.2015.7165935
Abstract: Many large companies today face the risk of data breaches via malicious software, compromising their business. These types of attacks are usually executed using hacker assets. Researching hacker assets within underground communities can help identify the tools which may be used in a cyberattack, provide knowledge on how to implement and use such assets and assist in organizing tools in a manner conducive to ethical reuse and education. This study aims to understand the functions and characteristics of assets in hacker forums by applying classification and topic modeling techniques. This research contributes to hacker literature by gaining a deeper understanding of hacker assets in well-known forums and organizing them in a fashion conducive to educational reuse. Additionally, companies can apply our framework to forums of their choosing to extract their assets and appropriate functions.
Keywords: Internet; computer crime; pattern classification; attack types; classification techniques; cyberattack; data breaches; educational reuse; ethical reuse; hacker assets; hacker forums; malicious software; topic modeling techniques; underground communities; underground forums; Decision support systems; Feature extraction; Labeling; Resource management; Support vector machines; Tutorials; cybersecurity; topic modeling (ID#: 15-7719)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165935&isnumber=7165923
Bashir, Masooda; Lambert, April; Guo, Boyi; Memon, Nasir; Halevi, Tzipora, “Cybersecurity Competitions: The Human Angle,” in Security & Privacy, IEEE, vol. 13, no. 5, pp. 74–79, Sept.–Oct. 2015. doi:10.1109/MSP.2015.100
Abstract: As a first step in a larger research program, the authors surveyed Cybersecurity Awareness Week participants. By better understanding the characteristics of those who attend such events, they hope to design competitions that will inspire students to pursue cybersecurity careers.
Keywords: Computer crime; Computer security; Education; Engineering profession; Privacy; cybercrime; cybersecurity competitions; education; security (ID#: 15-7720)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310819&isnumber=7310797
Rajamäki, J., “Cyber Security Education as a Tool for Trust-Building in Cross-Border Public Protection and Disaster Relief Operations,” in Global Engineering Education Conference (EDUCON), 2015 IEEE, vol., no., pp. 371–378, 18–20 March 2015. doi:10.1109/EDUCON.2015.7095999
Abstract: Public protection and disaster relief (PPDR) operations are increasingly dependent on networks and data processing infrastructure. Incidents such as natural hazards and organized crime do not respect national boundaries. As a consequence, there is an increased need for European collaboration and information sharing related to public safety communications (PSC) and information exchange technologies and procedures, and trust is the keyword here. According to our studies, the topic “trust-building” could be seen as the most important issue with regard to multi-agency PPDR cooperation. Cyber security should be seen as a key enabler for the development and maintenance of trust in the digital world. It is important to complement the currently dominating “cyber security as a barrier” perspective by emphasizing the role of “cyber security as an enabler” of new business, interactions, and services, and by recognizing that trust is a positive driver for growth. Public safety infrastructure is increasingly exposed to unpredictable cyber risks. Ever-present computing means that PPDR agencies do not know when they are using dependable devices or services, and there are chain reactions of unpredictable risks. If PPDR agencies are not prepared for cyber security risks, they, like all organizations, will face severe disasters over time. Investing in systems that improve confidence and trust can significantly reduce costs and improve the speed of interaction. From this perspective, cyber security should be seen as a key enabler for the development and maintenance of trust in the digital world, and it has the following themes: security technology, situation awareness, security management, and resiliency. Education is the main driver for complementing the currently dominating “cyber security as a barrier” perspective by emphasizing the role of “cyber security as an enabler”.
Keywords: computer aided instruction; computer science education; emergency management; trusted computing; PPDR operation; PSC; cross-border public protection operation; cyber security education; cybersecurity-as-a-barrier perspective; cybersecurity-as-an-enabler perspective; disaster relief operation; information exchange; multiagency PPDR cooperation; public safety communications; resiliency theme; security management theme; security technology theme; situation awareness theme; trust building; Computer security; Education; Europe; Organizations; Safety; Standards organizations; cyber security; education; public protection and disaster relief; trust-building (ID#: 15-7721)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095999&isnumber=7095933
Gestwicki, P.; Stumbaugh, K., “Observations and Opportunities in Cybersecurity Education Game Design,” in Computer Games: AI, Animation, Mobile, Multimedia, Educational and Serious Games (CGAMES), 2015, vol., no., pp. 131–137, 27–29 July 2015. doi:10.1109/CGames.2015.7272970
Abstract: We identify three challenges in cybersecurity education that could be addressed through game-based learning: conveying cybersecurity fundamentals, assessment of understanding, and recruitment and retention of professionals. By combining established epistemologies for cybersecurity with documented best practices for educational game design, we are able to define four research questions about the state of cybersecurity education games. Our attention is focused on games for ages 12–18 rather than adult learners or professional development. We analyze 21 games through the lens of our four research questions, including games that are explicitly designed to teach cybersecurity concepts as well as commercial titles with cybersecurity themes; in the absence of empirical evidence of these games’ efficacy, our analysis frames these games within educational game design theory. This analysis produces a three-tier taxonomy of games: those whose gameplay is not associated with cybersecurity education content (Type 1); those that integrate multiple-choice decisions only (Type 2); and those that integrate cybersecurity objectives into authentic gameplay activity (Type 3). This analysis reveals opportunities for new endeavors to incorporate multiple perspectives and to scaffold learners’ progression from the simpler games to the more complex simulations.
Keywords: computer aided instruction; computer games; security of data; authentic gameplay activity; cybersecurity education content; cybersecurity education game design; cybersecurity fundamentals; cybersecurity objectives; cybersecurity themes; educational game design theory; game three-tier taxonomy; game-based learning; learners progression; multiple-choice decisions; professionals recruitment; professionals retention; Computer crime; Computers; Education; Games; Taxonomy; cybersecurity education; educational games; game analysis; game design (ID#: 15-7722)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272970&isnumber=7272892
Mirkovic, Jelena; Dark, Melissa; Wenliang Du; Vigna, Giovanni; Denning, Tamara, “Evaluating Cybersecurity Education Interventions: Three Case Studies,” in Security & Privacy, IEEE, vol. 13, no. 3, pp. 63–69, May–June 2015. doi:10.1109/MSP.2015.57
Abstract: The authors collaborate with cybersecurity faculty members from different universities to apply a five-step approach in designing an evaluation for education interventions. The goals of this exercise were to show how to design an evaluation for a real intervention from beginning to end, to highlight the common intervention goals and propose suitable evaluation instruments, and to discuss the expected investment of time and effort in preparing and performing the education evaluations.
Keywords: computer science education; educational institutions; security of data; cybersecurity education interventions; education evaluations; universities; Computer security; Education; Performance evaluation; Sociology; control-alt-hack; cybersecurity education; education intervention; evaluation design; iCTF; intervention evaluation; security; seed labs (ID#: 15-7723)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118092&isnumber=7118073
Geoffrey L. Herman, Ronald Dodge; “Creating Assessment Tools for Cybersecurity Education,” (Abstract Only), in SIGCSE’15 Proceedings of the 46th ACM Technical Symposium on Computer Science Education, February 2015, Pages 696–696.
doi:10.1145/2676723.2691863
Abstract: Recent large-scale data breaches such as the credit card scandals of Target and Home Depot have significantly raised public awareness of the importance of the security of personal data and information. These incidents highlight a growing need and urgency to develop the cybersecurity infrastructure of our country and the world. The development of ACM’s Computer Science Curriculum 2013 and the National Initiative for Cybersecurity Education framework further highlight the growing importance of cybersecurity in computing education. Critically, recent studies predict that there will be a significant demand for cybersecurity professionals in the coming years, yet there is a lack of rigorous, evidence-based infrastructure to advise educators on how best to engage, inform, educate, nurture, and retain cybersecurity students, and how best to structure cybersecurity curricula to prepare new professionals for careers in this field. The development of validated assessment tools of student learning provides one means for increasing the rigor with which we make pedagogical and curricular decisions. During this Birds of a Feather session, participants will engage in a structured dialogue to identify what assessment tools are needed to improve cybersecurity education. Further, participants will provide feedback on initial efforts to identify a core set of concepts and skills that will be essential for students’ success in cybersecurity fields.
Keywords: assessment, computer science education, concept inventories, cybersecurity (ID#: 15-7724)
URL: http://doi.acm.org/10.1145/2676723.2691863
Christopher Herr, Dennis Allen; “Video Games as a Training Tool to Prepare the Next Generation of Cyber Warriors,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015,
Pages 23–29. doi:10.1145/2751957.2751958
Abstract: There is a global shortage of more than 1 million skilled cybersecurity professionals needed to address current cybersecurity challenges [5]. Criminal organizations, nation-state adversaries, hacktivists, and numerous other threat actors continuously target business, government, and even critical infrastructure networks. Estimated losses from cyber crime and cyber espionage amount to hundreds of billions annually [4]. The need to build, maintain, and defend computing resources is greater than ever before. A novel approach to closing the cybersecurity workforce gap is to develop cutting-edge cybersecurity video games that (1) grab the attention of young adults, (2) build a solid foundation of information security knowledge and skills, (3) inform players of potential career paths, and (4) establish a passion that drives them through higher education and professional growth. Although some video games and other games do exist, no viable options are available that target high-school-age students and young adults and that both supply a quality gaming experience and foster the gain of key cybersecurity knowledge and skills. Given the Department of Defense’s success with simulations and gaming technology, its sponsorship of a cybersecurity video game could prove extremely valuable in addressing the current and future needs for our next generation of cyber warriors.
Keywords: cybersecurity education, cybersecurity game based learning, cybersecurity games, video games, video gaming (ID#: 15-7725)
URL: http://doi.acm.org/10.1145/2751957.2751958
Diana L. Burley, Barbara Endicott-Popovsky; “Focus Group: Developing a Resilient, Agile Cybersecurity Educational System (RACES),” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 13–14. doi:10.1145/2751957.2756530
Abstract: (not provided)
Keywords: accreditation, cybersecurity, it workforce (ID#: 15-7726)
URL: http://doi.acm.org/10.1145/2751957.2756530
Edward Sobiesk, Jean Blair, Gregory Conti, Michael Lanham, Howard Taylor; “Cyber Education: A Multi-Level, Multi-Discipline Approach,” in SIGITE ’15 Proceedings of the 16th Annual Conference on Information Technology Education, September 2015, Pages 43–47. doi:10.1145/2808006.2808038
Abstract: The purpose of this paper is to contribute to the emerging dialogue on the direction, content, and techniques involved in cyber education. The principal contributions of this work include a discussion of the definition of cyber and a description of a multi-level, multi-discipline approach to cyber education, with the goal of providing all educated individuals a level of cyber education appropriate for their role in society. Our work assumes cyber education includes technical and non-technical content at all levels. Our model formally integrates cyber throughout an institution’s entire curriculum, including within the required general education program, cyber-related electives, cyber threads, cyber minors, cyber-related majors, and cyber enrichment opportunities, collectively providing the foundational knowledge, skills, and abilities needed to succeed in the 21st Century Cyber Domain. To demonstrate one way of instantiating our multi-level, multi-discipline approach, we describe how it is implemented at our institution. Overall, this paper serves as a call for further discussion, debate, and effort on the topic of cyber education, as well as describing our innovative model for cyber pedagogy.
Keywords: cyber, cyber education paradigm, cyber security, multi-discipline cyber education, multi-level cyber education
(ID#: 15-7727)
URL: http://doi.acm.org/10.1145/2808006.2808038
Daniel Manson, Portia Pusey, Mark J. Hufe, James Jones, Daniel Likarish, Jason Pittman, David Tobey; “The Cybersecurity Competition Federation,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 109–112. doi:10.1145/2751957.2751980
Abstract: In a time of global crisis in cybersecurity, competitions and related activities are rapidly emerging to provide fun and engaging ways of developing and assessing cybersecurity knowledge and skills. However, there is no neutral organization that brings them together to promote collective efforts and address common issues. This paper describes the rationale and process for developing the Cybersecurity Competition Federation (CCF) (National Science Foundation Award DUE-134536), which was created to facilitate a community that promotes cybersecurity competitions and related activities. CCF’s vision is to maintain an engaged and thriving ecosystem of cybersecurity competitions and related activities to build career awareness and cybersecurity skills and to address a global shortage of cybersecurity professionals.
Keywords: aptitude, competency model, critical incident, cyber defense competition, game balance, job performance model, ksa, talent management, vignette (ID#: 15-7728)
URL: http://doi.acm.org/10.1145/2751957.2751980
Sandro Fouché, Andrew H. Mangle; “Code Hunt as Platform for Gamification of Cybersecurity Training,” in CHESE 2015 Proceedings of the 1st International Workshop on Code Hunt Workshop on Educational Software Engineering, July 2015,
Pages 9–11. doi:10.1145/2792404.2792406
Abstract: The nation needs more cybersecurity professionals. Beyond just a general shortage, women, African Americans, and Latino Americans are underrepresented in the field. This not only contributes to the scarcity of qualified cybersecurity professionals, but the absence of diversity also leads to a lack of perspective and differing viewpoints. Part of the problem is that cybersecurity suffers from barriers to entry that include expensive training, an exclusionary culture, and the need for costly infrastructure. In order for students to start learning about cybersecurity, access to training, infrastructure, and subject matter experts is imperative. The existing Code Hunt framework, used to help students master programming, could be a springboard to help reduce the challenges facing students interested in cybersecurity. Code Hunt offers gamification, community-supported development, and a cloud infrastructure that provides an on-ramp to immediate learning. Leveraging Code Hunt’s structured gaming model can address these weaknesses and make cybersecurity training more accessible to those without the means or inclination to participate in more traditional cybersecurity competitions.
Keywords: Cybersecurity, Education, Gamification, Software Testing (ID#: 15-7729)
URL: http://doi.acm.org/10.1145/2792404.2792406
Craig A. Stewart, Timothy M. Cockerill, Ian Foster, David Hancock, Nirav Merchant, Edwin Skidmore, Daniel Stanzione, James Taylor, Steven Tuecke, George Turner, Matthew Vaughn, Niall I. Gaffney; “Jetstream: A Self-Provisioned, Scalable Science and Engineering Cloud Environment,” in XSEDE ’15 Proceedings of the 2015 XSEDE Conference: Scientific Advancements Enabled by Enhanced Cyberinfrastructure, July 2015, Article No. 29. doi:10.1145/2792745.2792774
Abstract: Jetstream will be the first production cloud resource supporting general science and engineering research within the XD ecosystem. In this report we describe the motivation for proposing Jetstream, the configuration of the Jetstream system as funded by the NSF, the team that is implementing Jetstream, and the communities we expect to use this new system. Our hope and plan is that Jetstream, which will become available for production use in 2016, will aid thousands of researchers who need modest amounts of computing power interactively. The implementation of Jetstream should increase the size and disciplinary diversity of the US research community that makes use of the resources of the XD ecosystem.
Keywords: atmosphere, big data, cloud computing, long tail of science (ID#: 15-7730)
URL: http://doi.acm.org/10.1145/2792745.2792774
Richard S. Weiss, Stefan Boesen, James F. Sullivan, Michael E. Locasto, Jens Mache, Erik Nilsen; “Teaching Cybersecurity Analysis Skills in the Cloud,” in SIGCSE ’15 Proceedings of the 46th ACM Technical Symposium on Computer Science Education, February 2015, Pages 332–337. doi:10.1145/2676723.2677290
Abstract: This paper reports on the experience of using the EDURange framework, a cloud-based resource for hosting on-demand interactive cybersecurity scenarios. Our framework is designed especially for the needs of teaching faculty. The scenarios we have implemented are each designed specifically to nurture the development of analysis skills in students as a complement to both theoretical security concepts and specific software tools. Our infrastructure has two features that make it unique compared to other cybersecurity educational frameworks. First, EDURange is scalable because it is hosted on a commercial, large-scale cloud environment. Second, EDURange supplies instructors with the ability to dynamically change the parameters and characteristics of exercises so they can be replayed and adapted to multiple classes. Our framework has been used successfully in classes and workshops for students and faculty. We present our experiences building the system, testing it, and using feedback from surveys to improve the system and boost user interest.
Keywords: analysis skills, edurange, hacker curriculum, offensive security (ID#: 15-7731)
URL: http://doi.acm.org/10.1145/2676723.2677290
Indira R. Guzman, Thomas Hilton, Miguel (Mike) O. Villegas, Michelle Kaarst-Brown, Jason James, Ashraf Shirani, Shuyuan Mary Ho, Diane Lending; “Panel: Cybersecurity Workforce Development,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 15–17. doi:10.1145/2751957.2756529
Abstract: According to the National Association of State Chief Information Officers (NASCIO) in the United States, the number one strategic management priority in 2014 is security. It is therefore imperative for managers to have qualified IT security professionals in order to effectively secure the network infrastructure, protect information, diagnose and manage attacks, remediate damage or losses, and prepare for disaster recovery to prevent future security attacks. A single cyber security breach can cost a company hundreds of thousands of dollars. This increase in losses indicates that IT professionals have increased responsibilities in IT security within organizations. In this panel, we will discuss the range of factors that influence the development of the cybersecurity workforce and the role that different stakeholders play in ensuring IT security professionals are well qualified and have the skills necessary to do an effective job of securing an organization’s network infrastructure. In addition, we will share different strategies for addressing the development needs of this increasingly needed cybersecurity workforce.
Keywords: accreditation, cybersecurity workforce, information security professionals, panel, professional certificates (ID#: 15-7732)
URL: http://doi.acm.org/10.1145/2751957.2756529
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Flow Control Integrity 2015 |
Control-flow attacks are pervasive. The research cited in this bibliography looks at control-flow integrity (CFI) in the context of cyber physical systems, the Smart Grid, and a variety of web applications. For the Science of Security community, CFI research has implications for resilience, composability, and governance. The work presented here was published in 2015.
Ryutov, T.; Almajali, A.; Neuman, C., “Modeling Security Policies for Mitigating the Risk of Load Altering Attacks on Smart Grid Systems,” in Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, vol., no., pp. 1–6, 13–13 April 2015. doi:10.1109/MSCPES.2015.7115393
Abstract: While demand response programs achieve energy efficiency and quality objectives, they bring potential security threats into the Smart Grid. An ability to influence load in the system provides the capability for an attacker to cause system failures and impacts the quality and integrity of the power delivered to customers. This paper presents a security mechanism that monitors and controls load according to security policies during normal system operation. The mechanism monitors, detects, and responds to load altering attacks. The authors examined security requirements of Smart Grid stakeholders and constructed a set of load control policies enforced by the mechanism. A proof of concept prototype was implemented and tested using the simulation environment. By enforcing the proposed policies in this prototype, the system is maintained in a safe state in the presence of load drop attacks.
Keywords: power system security; risk management; smart power grids; demand response programs; load altering attacks; load drop attacks; risk mitigation; security policies modeling; smart grid stakeholders; smart grid systems; Load flow control; Load modeling; Power quality; Safety; Security; Servers; Smart grids; cyber-physical; smart grid; security policy; simulation
(ID#: 15-7571)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115393&isnumber=7115373
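The monitor-and-respond load control described in this entry can be caricatured as a simple policy check before actuation. The threshold, function names, and numbers below are illustrative assumptions, not the paper's actual policies.

```python
# Toy load-control policy: reject demand-response commands that would
# change aggregate load faster than a configured safety threshold.
# All numbers and names here are illustrative assumptions.

MAX_STEP_MW = 50.0  # assumed maximum safe load change per control interval


def policy_allows(current_load_mw: float, requested_load_mw: float) -> bool:
    """Permit a load-altering command only if the load step is bounded."""
    return abs(requested_load_mw - current_load_mw) <= MAX_STEP_MW


def apply_command(current_load_mw: float, requested_load_mw: float) -> float:
    # Enforce the policy before actuating; on violation, keep the old state.
    if policy_allows(current_load_mw, requested_load_mw):
        return requested_load_mw
    return current_load_mw  # rejected: possible load drop attack


print(apply_command(500.0, 530.0))  # within bounds, applied
print(apply_command(500.0, 300.0))  # 200 MW drop rejected, state unchanged
```

A real enforcement mechanism would also consider ramp rates over time and per-feeder limits; the point here is only the shape of policy-mediated actuation.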
Hedin, D.; Bello, L.; Sabelfeld, A., “Value-Sensitive Hybrid Information Flow Control for a JavaScript-Like Language,” in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, vol., no., pp. 351–365, 13–17 July 2015. doi:10.1109/CSF.2015.31
Abstract: Secure integration of third-party code is one of the prime challenges for securing today’s web. Recent empirical studies give evidence of pervasive reliance on and excessive trust in third-party JavaScript, with no adequate security mechanism to limit the trust or the extent of its abuse. Information flow control is a promising approach for controlling the behavior of third-party code and enforcing confidentiality and integrity policies. While much progress has been made on static and dynamic approaches to information flow control, only recently their combinations have received attention. Purely static analysis falls short of addressing dynamic language features such as dynamic objects and dynamic code evaluation, while purely dynamic analysis suffers from inability to predict side effects in non-performed executions. This paper develops a value-sensitive hybrid mechanism for tracking information flow in a JavaScript-like language. The mechanism consists of a dynamic monitor empowered to invoke a static component on the fly. This enables us to achieve a sound yet permissive enforcement. We establish formal soundness results with respect to the security policy of non-interference. In addition, we demonstrate permissiveness by proving that we subsume the precision of purely static analysis and by presenting a collection of common programming patterns that indicate that our mechanism has potential to provide more permissiveness than dynamic mechanisms in practice.
Keywords: Java; program diagnostics; security of data; JavaScript-like language; common programming patterns; confidentiality policies; dynamic code evaluation; dynamic language features; dynamic objects; integrity policies; pervasive reliance; purely static analysis; security policy; third-party code; value-sensitive hybrid information flow control; Context; Monitoring; Performance analysis; Reactive power; Runtime; Security; Semantics; information flow; language-based security (ID#: 15-7572)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243744&isnumber=7243713
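The hybrid monitor described in the abstract above can be illustrated with a toy sketch: a dynamic label tracker that, at each branch, raises the labels of variables a static analysis says the untaken arm could have written, covering side effects of non-performed executions. The label model and all names here are invented for illustration; this is not the authors' mechanism.

```python
# Toy hybrid information-flow monitor: labels are sets of levels,
# e.g. {"H"} = secret, set() = public. Illustrative only.

def join(*labels):
    out = set()
    for l in labels:
        out |= l
    return out

class Monitor:
    def __init__(self):
        self.labels = {}          # variable -> label
        self.pc = [set()]         # stack of control-context labels

    def assign(self, var, value_label):
        # explicit flow: value label joined with current control context
        self.labels[var] = join(value_label, self.pc[-1])

    def branch(self, guard_label, writes_in_untaken_branch):
        # dynamic part: push the guard's label onto the pc stack
        self.pc.append(join(guard_label, self.pc[-1]))
        # static part: raise labels of variables the untaken branch
        # could have written, so non-performed side effects are covered
        for var in writes_in_untaken_branch:
            self.labels[var] = join(self.labels.get(var, set()), self.pc[-1])

    def end_branch(self):
        self.pc.pop()

# h is secret; a branch on h writes x in one arm only
m = Monitor()
m.assign("h", {"H"})
m.branch(m.labels["h"], writes_in_untaken_branch=["x"])
m.assign("x", set())            # executed arm: x := public constant
m.end_branch()
assert m.labels["x"] == {"H"}   # x still depends on h
```

A purely dynamic monitor would miss the implicit flow when the writing arm is not executed; consulting the static write set at the branch is what makes the combination sound yet permissive.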
Bhardwaj, C., “Systematic Information Flow Control in mHealth Systems,” in Communication Systems and Networks (COMSNETS), 2015 7th International Conference on, vol., no., pp. 1–6, 6–10 Jan. 2015. doi:10.1109/COMSNETS.2015.7098736
Abstract: This paper argues that the security and integrity requirements of mHealth systems are best addressed by end-to-end information flow control (IFC). The paper extends proposals of decentralized IFC to a distributed smartphone-based mHealth system, identifying the basic threat model and the necessary trusted computing base. We show how the framework proposed can be integrated into an existing communication stack between a phalanx of sensors and an Android smartphone. The central idea of the framework involves systematically and automatically labelling data and metadata collected during medical encounters with security and integrity tags. These mechanisms can then be used for enforcing a wide variety of complex information flow control policies in diverse applications. The chief novelty over existing DIFC approaches is that users are relieved of having to create tags for each class of data and metadata that is collected in the system, thus making the system user-friendly and scalable.
Keywords: Android (operating system); data integrity; human computer interaction; medical information systems; mobile computing; security of data; Android smart phone; communication stack; complex information flow control policies; data class; data labelling; decentralized IFC; distributed smart phone-based m-Health system; end-to-end information flow control; integrity tags; m-health system integrity requirements; m-health system security requirements; medical encounters; meta data collection; scalable system; security tags; sensor phalanx; systematic information flow control; threat model; trusted computing base; user-friendly system (ID#: 15-7573)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098736&isnumber=7098633
Zibordi de Paiva, O.; Ruggiero, W.V., “A Survey on Information Flow Control Mechanisms in Web Applications,” in High Performance Computing & Simulation (HPCS), 2015 International Conference on, vol., no., pp. 211–220, 20–24 July 2015. doi:10.1109/HPCSim.2015.7237042
Abstract: Web applications are nowadays ubiquitous channels that provide access to valuable information. However, web application security remains problematic, with Information Leakage, Cross-Site Scripting and SQL-Injection vulnerabilities - which all present threats to information - standing among the most common ones. On the other hand, Information Flow Control is a mature and well-studied area, providing techniques to ensure the confidentiality and integrity of information. Thus, numerous works were made proposing the use of these techniques to improve web application security. This paper provides a survey on some of these works that propose server-side only mechanisms, which operate in association with standard browsers. It also provides a brief overview of the information flow control techniques themselves. At the end, we draw a comparative scenario between the surveyed works, highlighting the environments for which they were designed and the security guarantees they provide, also suggesting directions in which they may evolve.
Keywords: Internet; SQL; security of data; SQL-injection vulnerability; Web application security; cross-site scripting; information confidentiality; information flow control mechanisms; information integrity; information leakage; server-side only mechanisms; standard browsers; ubiquitous channels; Browsers; Computer architecture; Context; Security; Standards; Web servers; Cross-Site Scripting; Information Flow Control; Information Leakage; SQL Injection; Web Application Security (ID#: 15-7574)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237042&isnumber=7237005
Arthur, W.; Mehne, B.; Das, R.; Austin, T., “Getting in Control of Your Control Flow with Control-Data Isolation,” in Code Generation and Optimization (CGO), 2015 IEEE/ACM International Symposium on, vol., no., pp. 79–90, 7–11 Feb. 2015. doi:10.1109/CGO.2015.7054189
Abstract: Computer security has become a central focus in the information age. Though enormous effort has been expended on ensuring secure computation, software exploitation remains a serious threat. The software attack surface provides many avenues for hijacking; however, most exploits ultimately rely on the successful execution of a control-flow attack. This pervasive diversion of control flow is made possible by the pollution of control flow structure with attacker-injected runtime data. Many control-flow attacks persist because the root of the problem remains: runtime data is allowed to enter the program counter. In this paper, we propose a novel approach: Control-Data Isolation. Our approach provides protection by going to the root of the problem and removing all of the operations that inject runtime data into program control. While previous work relies on CFG edge checking and labeling, these techniques remain vulnerable to attacks such as heap spray, read, or GOT attacks and in some cases suffer high overheads. Rather than addressing control-flow attacks by layering additional complexity, our work takes a subtractive approach; subtracting the primary cause of contemporary control-flow attacks. We demonstrate that control-data isolation can assure the integrity of the programmer’s CFG at runtime, while incurring average performance overheads of less than 7% for a wide range of benchmarks.
Keywords: computer crime; program control structures; CFG integrity; average performance overheads; computer security; contemporary control flow attacks; control-data isolation; hijacking; information age; program control; program counter; secure computation; software exploitation; software vulnerabilities; subtractive approach; Data models; Libraries; Process control; Radiation detectors; Runtime; Security; Software (ID#: 15-7575)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054189&isnumber=7054173
Zhang, Chao; Niknami, M.; Chen, K.Z.; Song, Chengyu; Chen, Zhaofeng; Song, D., “JITScope: Protecting Web Users from Control-Flow Hijacking Attacks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 567–575, 26 April–1 May 2015. doi:10.1109/INFOCOM.2015.7218424
Abstract: Web browsers are one of the most important enduser applications to browse, retrieve, and present Internet resources. Malicious or compromised resources may endanger Web users by hijacking web browsers to execute arbitrary malicious code in the victims’ systems. Unfortunately, the widely-adopted Just-In-Time compilation (JIT) optimization technique, which compiles source code to native code at runtime, significantly increases this risk. By exploiting JIT compiled code, attackers can bypass all currently deployed defenses. In this paper, we systematically investigate threats against JIT compiled code, and the challenges of protecting JIT compiled code. We propose a general defense solution, JITScope, to enforce Control-Flow Integrity (CFI) on both statically compiled and JIT compiled code. Our solution furthermore enforces the W⊕X policy on JIT compiled code, preventing the JIT compiled code from being overwritten by attackers. We show that our prototype implementation of JITScope on the popular Firefox web browser introduces a reasonably low performance overhead, while defeating existing real-world control flow hijacking attacks.
Keywords: Internet; data protection; online front-ends; source code (software); CFI; Firefox Web browser; Internet resources; JIT compiled code; JIT optimization technique; JITScope; W⊕X policy; Web user protection; arbitrary malicious code; control-flow hijacking attacks; control-flow integrity; just-in-time compilation; source code compilation; Browsers; Engines; Instruments; Layout; Runtime; Safety; Security (ID#: 15-7576)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218424&isnumber=7218353
Bichhawat, A., “Post-Dominator Analysis for Precisely Handling Implicit Flows,” in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 2, pp. 787–789, 16–24 May 2015. doi:10.1109/ICSE.2015.250
Abstract: Most web applications today use JavaScript for including third-party scripts, advertisements etc., which pose a major security threat in the form of confidentiality and integrity violations. Dynamic information flow control helps address this issue of information stealing. Most of the approaches over-approximate when unstructured control flow comes into picture, thereby raising a lot of false alarms. We utilize the post-dominator analysis technique to determine the context of the program at a given point and prove that this approach is the most precise technique to handle implicit flows.
Keywords: Java; authoring languages; program diagnostics; security of data; JavaScript; Web applications; confidentiality violations; dynamic information flow control; implicit flow handling; integrity violations; post-dominator analysis technique; security threat; unstructured control flow; Computer languages; Conferences; Context; Lattices; Programmable logic arrays; Security; Software engineering (ID#: 15-7577)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7203071&isnumber=7202933
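The role of post-dominators in the entry above can be made concrete: the immediate post-dominator of a branch is the join point where a dynamic monitor can safely discard the branch's security context. A minimal sketch, computing post-dominator sets with the classic iterative dominator algorithm run on the reversed CFG (the CFG and node names are made up for illustration):

```python
# Post-dominator sets for a tiny CFG, computed as dominators on the
# reversed graph. Illustrative sketch only, not the paper's algorithm.

def postdominators(succ, exit_node):
    nodes = set(succ)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            # a node is post-dominated by itself plus everything that
            # post-dominates all of its successors
            new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

# CFG: b branches into arms t/f that rejoin at j, then exit e
succ = {"b": ["t", "f"], "t": ["j"], "f": ["j"], "j": ["e"], "e": []}
pdom = postdominators(succ, "e")
# the branch's security context can be popped at its nearest
# post-dominator, here j
assert pdom["b"] - {"b"} == {"j", "e"}
```

For unstructured control flow (break, exceptions, goto-like jumps) this join point is exactly what syntactic scoping fails to identify, which is why over-approximation raises false alarms.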
Davi, L.; Hanreich, M.; Paul, D.; Sadeghi, A.-R.; Koeberl, P.; Sullivan, D.; Arias, O.; Jin, Y., “HAFIX: Hardware-Assisted Flow Integrity eXtension,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2744847
Abstract: Code-reuse attacks like return-oriented programming (ROP) pose a severe threat to modern software on diverse processor architectures. Designing practical and secure defenses against code-reuse attacks is highly challenging and currently subject to intense research. However, no secure and practical system-level solutions exist so far, since a large number of proposed defenses have been successfully bypassed. To tackle this attack, we present HAFIX (Hardware-Assisted Flow Integrity Extension), a defense against code-reuse attacks exploiting backward edges (returns). HAFIX provides fine-grained and practical protection, and serves as an enabling technology for future control-flow integrity instantiations. This paper presents the implementation and evaluation of HAFIX for the Intel® Siskiyou Peak and SPARC embedded system architectures, and demonstrates its security and efficiency in code-reuse protection while incurring only 2% performance overhead.
Keywords: data protection; software reusability; HAFIX; Intel Siskiyou Peak; ROP; SPARC embedded system architectures; backward edges; code-reuse attacks; code-reuse protection; control-flow integrity instantiations; hardware-assisted flow integrity extension; processor architectures; return-oriented programming; Benchmark testing; Computer architecture; Hardware; Pipelines; Program processors; Random access memory; Registers (ID#: 15-7578)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167258&isnumber=7167177
Evans, I.; Fingeret, S.; Gonzalez, J.; Otgonbaatar, U.; Tang, T.; Shrobe, H.; Sidiroglou-Douskos, S.; Rinard, M.; Okhravi, H., “Missing the Point(er): On the Effectiveness of Code Pointer Integrity,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 781–796, 17–21 May 2015. doi:10.1109/SP.2015.53
Abstract: Memory corruption attacks continue to be a major vector of attack for compromising modern systems. Numerous defenses have been proposed against memory corruption attacks, but they all have their limitations and weaknesses. Stronger defenses such as complete memory safety for legacy languages (C/C++) incur a large overhead, while weaker ones such as practical control flow integrity have been shown to be ineffective. A recent technique called code pointer integrity (CPI) promises to balance security and performance by focusing memory safety on code pointers thus preventing most control-hijacking attacks while maintaining low overhead. CPI protects access to code pointers by storing them in a safe region that is protected by instruction level isolation. On x86-32, this isolation is enforced by hardware, on x86-64 and ARM, isolation is enforced by information hiding. We show that, for architectures that do not support segmentation in which CPI relies on information hiding, CPI’s safe region can be leaked and then maliciously modified by using data pointer overwrites. We implement a proof-of-concept exploit against Nginx and successfully bypass CPI implementations that rely on information hiding in 6 seconds with 13 observed crashes. We also present an attack that generates no crashes and is able to bypass CPI in 98 hours. Our attack demonstrates the importance of adequately protecting secrets in security mechanisms and the dangers of relying on difficulty of guessing without guaranteeing the absence of memory leaks.
Keywords: data protection; security of data; ARM; C-C++; CPI safe region; code pointer integrity effectiveness; code pointer protection; control flow integrity; control-hijacking attacks; data pointer overwrites; information hiding; instruction level isolation; legacy languages; memory corruption attacks; memory safety; security mechanisms; time 98 hour; Computer crashes; Delays; Libraries; Safety; Security (ID#: 15-7579)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163060&isnumber=7163005
Andriesse, D.; Bos, H.; Slowinska, A., “Parallax: Implicit Code Integrity Verification Using Return-Oriented Programming,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 125–135, 22–25 June 2015. doi:10.1109/DSN.2015.12
Abstract: Parallax is a novel self-contained code integrity verification approach, that protects instructions by overlapping Return-Oriented Programming (ROP) gadgets with them. Our technique implicitly verifies integrity by translating selected code (verification code) into ROP code which uses gadgets scattered over the binary. Tampering with the protected instructions destroys the gadgets they contain, so that the verification code fails, thereby preventing the adversary from using the modified binary. Unlike prior solutions, Parallax does not rely on code checksumming, so it is not vulnerable to instruction cache modification attacks which affect checksumming techniques. Further, unlike previous algorithms which withstand such attacks, Parallax does not compute hashes of the execution state, and can thus protect code with non-deterministic state. Parallax limits performance overhead to the verification code, while the protected code executes at its normal speed. This allows us to protect performance-critical code, and confine the slowdown to other code regions. Our experiments show that Parallax can protect up to 90% of code bytes, including most control flow instructions, with a performance overhead of under 4%.
Keywords: object-oriented programming; program verification; software performance evaluation; Parallax; ROP code; ROP gadgets; code checksumming technique; control flow instructions; implicit code integrity verification; instruction cache modification attacks; nondeterministic state; performance-critical code protection; return-oriented programming; self-contained code integrity verification approach; verification code; Debugging; Detectors; Programming; Registers; Runtime; Semantics; Software; Tamperproofing; code verification; reverse engineering (ID#: 15-7580)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266844&isnumber=7266818
Li, W.; Zhang, W.; Gu, D.; Cao, Y.; Tao, Z.; Zhou, Z.; Liu, Y.; Liu, Z., “Impossible Differential Fault Analysis on the LED Lightweight Cryptosystem in the Vehicular Ad-hoc Networks,” in Dependable and Secure Computing, IEEE Transactions on, vol. 13, no. 1, pp. 84–92, Jan.–Feb. 2016. doi:10.1109/TDSC.2015.2449849
Abstract: With the advancement and deployment of leading-edge telecommunication technologies for sensing and collecting traffic related information, the vehicular ad-hoc networks (VANETs) have emerged as a new application scenario that is envisioned to revolutionize the human driving experiences and traffic flow control systems. To avoid any possible malicious attack and resource abuse, employing lightweight cryptosystems is widely recognized as one of the most effective approaches for the VANETs to achieve confidentiality, integrity and authentication. As a typical substitution-permutation network lightweight cryptosystem, LED supports 64-bit and 128-bit secret keys, which are flexible to provide security for the RFID and other highly-constrained devices in the VANETs. Since its introduction, some research of fault analysis has been devoted to attacking the last three rounds of LED. It is an open problem to know whether provoking faults at a former round of LED allows recovering the secret key. In this paper, we give an answer to this problem by showing a novel impossible differential fault analysis on one round earlier of all LED keysize variants. Mathematical analysis and simulating experiments show that the attack could recover the 64-bit and 128-bit secret keys of LED by introducing 48 faults and 96 faults in average, respectively. The result in this study describes that LED is vulnerable to a half byte impossible differential fault analysis. It will be beneficial to the analysis of the same type of other iterated lightweight cryptosystems in the VANETs.
Keywords: Ciphers; Circuit faults; Encryption; Light emitting diodes; Schedules; LED; RFID; VANET; vehicular ad-hoc networks; impossible differential fault analysis; lightweight cryptosystems (ID#: 15-7581)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134781&isnumber=4358699
Jamshidifar, A.A.; Jovcic, D., “3-Level Cascaded Voltage Source Converters Controller with Dispatcher Droop Feedback for Direct Current Transmission Grids,” in Generation, Transmission & Distribution, IET, vol. 9, no. 6, pp. 571–579, 20 Apr. 2015. doi:10.1049/iet-gtd.2014.0348
Abstract: The future direct current (DC) grids will require additional control functions on voltage source converters (VSC) in order to ensure stability and integrity of DC grids under wide range of disturbances. This study proposes a 3-level cascaded control topology for all the VSC and DC/DC converters in DC grids. The inner control level regulates local current which prevents converter overload. The middle control level uses fast proportional integral feedback control of local DC voltage on each terminal which is essential for the grid stability. The hard limits (suggested ±5%) on voltage reference will ensure that DC voltage at all terminals is kept within narrow band under all contingencies. At the highest level, each station follows power reference which is received from the dispatcher. It is proposed to locate voltage droop power reference adjustment at a central dispatcher, to maintain average DC voltage in the grid and to ensure optimal power flow in the grid. This slow control function has minimal impact on stability. Performance of the proposed control is tested on PSCAD/EMTDC model of the CIGRE B4 DC grid test system. A number of severe outages are simulated and both steady-state variables and transient responses are observed and compared against conventional droop control method. The comparison verifies superior performance of the proposed control topology.
Keywords: DC-DC power convertors; HVDC power transmission; PI control; electric current control; feedback; load flow; power grids; power system stability; voltage control; 3-level cascaded control topology; 3-level cascaded voltage source converter controller; CIGRE B4 DC grid test system; DC grids; DC-DC converters; PSCAD-EMTDC model; VSC converters; central dispatcher; control function; control functions; converter overload; direct current transmission grids; dispatcher droop feedback; droop control method; fast proportional integral feedback control; grid stability; local DC voltage control; local current control optimal power flow; steady-state variables; transient responses; voltage droop power reference adjustment (ID#: 15-7582)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086379&isnumber=7086363
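The middle control level described in the abstract above, fast proportional integral feedback on local DC voltage with a hard ±5% limit on the voltage reference, can be sketched as follows. The gains, time step, and per-unit values are illustrative assumptions, not figures from the paper.

```python
# One step of the middle-level controller: clamp the dispatcher's
# droop-adjusted voltage reference to a hard +/-5% band around nominal,
# then apply PI feedback on the local DC voltage. Illustrative only.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

V_NOM = 1.0                     # per-unit nominal DC voltage (assumed)
KP, KI, DT = 2.0, 5.0, 0.01     # illustrative gains and time step

def pi_step(v_meas, v_ref, integ):
    v_ref = clamp(v_ref, 0.95 * V_NOM, 1.05 * V_NOM)  # hard +/-5% limit
    err = v_ref - v_meas
    integ += err * DT
    return KP * err + KI * integ, integ

# a dispatcher request outside the band is clamped before the loop acts
out, integ = pi_step(v_meas=1.0, v_ref=1.2, integ=0.0)
assert abs(out - (KP * 0.05 + KI * 0.05 * DT)) < 1e-9
```

The clamp is what guarantees the abstract's claim that terminal voltage references stay within a narrow band under all contingencies, regardless of what the slow dispatcher-level droop adjustment requests.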
Sedghi, H.; Jonckheere, E., “Statistical Structure Learning to Ensure Data Integrity in Smart Grid,” in Smart Grid, IEEE Transactions on, vol. 6, no. 4, pp. 1924–1933, July 2015. doi:10.1109/TSG.2015.2403329
Abstract: Robust control and management of the grid relies on accurate data. Both phasor measurement units and remote terminal units are prone to false data injection attacks. Thus, it is crucial to have a mechanism for fast and accurate detection of tampered data—both for preventing attacks that may lead to blackouts, and for routine monitoring and control of current and future grids. We propose a decentralized false data injection detection scheme based on the Markov graph of the bus phase angles. We utilize the conditional covariance test CMIT to learn the structure of the grid. Using the dc power flow model, we show that, under normal circumstances, the Markov graph of the voltage angles is consistent with the power grid graph. Therefore, a discrepancy between the calculated Markov graph and learned structure should trigger the alarm. Our method can detect the most recent stealthy deception attack on the power grid that assumes knowledge of the bus-branch model of the system and is capable of deceiving the state estimator; hence damaging power network control, monitoring, demand response, and pricing scheme. Specifically, under the stealthy deception attack, the Markov graph of phase angles changes. In addition to detecting a state of attack, our method can detect the set of attacked nodes. To the best of our knowledge, our remedy is the first to comprehensively detect this sophisticated attack and it does not need additional hardware. Moreover, it is successful no matter the size of the attacked subset. Simulation of various power networks confirms our claims.
Keywords: Markov processes; control engineering computing; data integrity; graph theory; learning (artificial intelligence); phasor measurement; power engineering computing; power system control; power system management; power system security; robust control; security of data; smart power grids; statistical testing; CMIT conditional covariance test; Markov graph; bus phase angles; bus-branch model; decentralized false data injection detection scheme; demand response; false data injection attacks; grid management; phasor measurement units; power grid graph; power network control; power networks; pricing scheme; remote terminal units; robust control; routine monitoring; smart grid; state estimator; statistical structure learning; stealthy deception attack; tampered data detection; Measurement uncertainty; Monitoring; Phasor measurement units; Power grids; Random variables; Vectors; Bus phase angles; conditional covariance test; false data injection detection; structure learning (ID#: 15-7583)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058419&isnumber=7128463
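The detection idea in the entry above can be made concrete with a toy example: learn the Markov graph of bus phase angles from samples and compare it with the known grid topology; a discrepancy triggers the alarm. Here structure learning is done by thresholding the empirical precision matrix, a simple stand-in for the paper's CMIT test, and the 4-bus chain grid and all numbers are invented.

```python
import numpy as np

# Under the DC power-flow model the Markov graph of bus phase angles
# should coincide with the grid graph; a mismatch between the learned
# structure and the topology signals false data injection.

def learned_edges(samples, thresh=0.2):
    # nonzero off-diagonals of the precision matrix ~ Markov graph edges
    prec = np.linalg.inv(np.cov(samples, rowvar=False))
    n = prec.shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(prec[i, j]) > thresh}

rng = np.random.default_rng(0)
# 4-bus chain grid 0-1-2-3 encoded as a precision (inverse covariance)
prec_true = np.array([[ 2., -1.,  0.,  0.],
                      [-1.,  2., -1.,  0.],
                      [ 0., -1.,  2., -1.],
                      [ 0.,  0., -1.,  2.]])
angles = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec_true), 5000)
grid_edges = {(0, 1), (1, 2), (2, 3)}
assert learned_edges(angles) == grid_edges     # normal operation: no alarm

# a stealthy injection couples buses 0 and 3, changing the Markov graph
attacked = angles.copy()
attacked[:, 3] += 0.8 * attacked[:, 0]
assert learned_edges(attacked) != grid_edges   # discrepancy -> alarm
```

The point the abstract makes survives even in this toy: the attacker may fool the state estimator, but coupling injected at one subset of buses perturbs conditional independences everywhere it touches, so the learned graph no longer matches the physical one.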
Zhong, Wenbin; Chang, Wenlong; Rubio, Luis; Luo, Xichun, “Reconfigurable Software Architecture for a Hybrid Micro Machine Tool,” in Automation and Computing (ICAC), 2015 21st International Conference on, vol., no., pp. 1–4, 11–12 Sept. 2015. doi:10.1109/IConAC.2015.7313994
Abstract: Hybrid micro machine tools are increasingly in demand for manufacturing microproducts made of hard-to-machine materials, such as ceramic air bearings, bio-implants and power electronics substrates. These machines realize hybrid machining processes which combine one or two non-conventional machining techniques, such as EDM, ECM and laser machining, with conventional machining techniques such as turning, grinding and milling on one machine bed. Hybrid machine tool developers tend to mix and match components from multiple vendors for the best value and performance. System integrity is usually a second priority at the initial design phase, which generally leads to a very complex and inflexible system. This paper proposes a reconfigurable control software architecture for a hybrid micro machine tool, which combines laser-assisted machining and 5-axis micro-milling as well as incorporating a material handling system and advanced on-machine sensors. The architecture uses a finite state machine (FSM) for hardware control and data flow. The FSM simplifies system integration and allows a flexible architecture that can be easily ported to similar applications. Furthermore, component-based technology is employed to encapsulate changes in different modules to realize “plug-and-play”. The benefits of using the software architecture include reduced lead time and lower cost of development.
Keywords: component-based technology; finite state machine; hybrid micro machine tool; reconfigurable software architecture (ID#: 15-7584)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313994&isnumber=7313638
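The FSM-based control described in the abstract above can be sketched as a small table-driven state machine. The states, events and transition table are invented for illustration and do not come from the paper.

```python
# Table-driven FSM for machine-tool control flow: the transition table
# is data, so reconfiguring the machine means editing the table rather
# than the control logic. States and events are illustrative.

TRANSITIONS = {
    ("idle",      "load_part"):  "loading",
    ("loading",   "part_ready"): "machining",
    ("machining", "done"):       "unloading",
    ("machining", "fault"):      "error",
    ("unloading", "part_out"):   "idle",
}

def step(state, event):
    # undefined (state, event) pairs are rejected rather than silently
    # ignored, which keeps integration of third-party modules predictable
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"illegal event {event!r} in state {state!r}")
    return nxt

s = "idle"
for e in ["load_part", "part_ready", "done", "part_out"]:
    s = step(s, e)
assert s == "idle"   # one full part cycle returns to idle
```

Because the table is plain data, a "plug-and-play" module can contribute its own states and transitions without touching the dispatcher, which is the portability benefit the abstract claims.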
de Amorim, A.A.; Dénès, M.; Giannarakis, N.; Hritcu, C.; Pierce, B.C.; Spector-Zabusky, A.; Tolmach, A., “Micro-Policies: Formally Verified, Tag-Based Security Monitors,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 813–830, 17–21 May 2015. doi:10.1109/SP.2015.55
Abstract: Recent advances in hardware design have demonstrated mechanisms allowing a wide range of low-level security policies (or micro-policies) to be expressed using rules on metadata tags. We propose a methodology for defining and reasoning about such tag-based reference monitors in terms of a high-level “symbolic machine,” and we use this methodology to define and formally verify micro-policies for dynamic sealing, compartmentalization, control-flow integrity, and memory safety. In addition, we show how to use the tagging mechanism to protect its own integrity. For each micro-policy, we prove by refinement that the symbolic machine instantiated with the policy’s rules embodies a high-level specification characterizing a useful security property. Last, we show how the symbolic machine itself can be implemented in terms of a hardware rule cache and a software controller.
Keywords: cache storage; inference mechanisms; meta data; security of data; formally verified security monitors; hardware design; hardware rule cache; metadata tags; micro-policies; reasoning; software controller; tag-based reference monitors; tag-based security monitors; Concrete; Hardware; Monitoring; Registers; Safety; Transfer functions (ID#: 15-7585)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163062&isnumber=7163005
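The rule-cache mechanism in the entry above can be sketched in miniature: a transfer function maps the tags on an operation's inputs to an allow/deny decision and a result tag, and decisions are memoized the way a hardware rule cache would be, with misses handled in software. The taint policy here is made up for illustration and is not one of the paper's verified micro-policies.

```python
# Toy tag-based monitor with a rule cache. On a cache hit the decision
# is returned immediately (the hardware fast path); on a miss a software
# handler computes the decision and installs it in the cache.

rule_cache = {}

def transfer(op, pc_tag, arg_tags):
    key = (op, pc_tag, tuple(arg_tags))
    if key in rule_cache:                 # fast path: cached rule
        return rule_cache[key]
    # software miss handler implementing an invented taint policy:
    # taint propagates through operations, and tainted values must
    # not reach a store
    if op == "store" and "taint" in arg_tags:
        result = (False, None)            # policy violation: trap
    else:
        result = (True, "taint" if "taint" in arg_tags else "clean")
    rule_cache[key] = result
    return result

ok, tag = transfer("add", "clean", ["taint", "clean"])
assert (ok, tag) == (True, "taint")       # taint propagates to the result
ok, _ = transfer("store", "clean", ["taint"])
assert ok is False                        # violation is trapped
```

Swapping in a different miss handler changes the enforced micro-policy without touching the cache machinery, which is the separation the paper's symbolic machine formalizes.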
Garg, G.; Garg, R., “Detecting Anomalies Efficiently in SDN Using Adaptive Mechanism,” in Advanced Computing & Communication Technologies (ACCT), 2015 Fifth International Conference on, vol., no., pp. 367–370, 21–22 Feb. 2015. doi:10.1109/ACCT.2015.98
Abstract: Monitoring and measurement of network traffic flows in SDN is a key requirement for maintaining the integrity of data in the network. It plays a vital role in the management tasks of the SDN controller for controlling traffic. Anomaly detection is considered one of the important issues in traffic monitoring: the more efficiently we detect anomalies, the easier it becomes to manage the traffic. However, we have to consider the workload, response time and overhead on the network while applying network monitoring policies, so that the network performs with similar efficiency. To reduce the overhead, it is necessary to analyze a certain portion of the traffic instead of analyzing each and every packet in the network. This paper presents an adaptive mechanism for dynamically updating the policies for aggregation of flow entries and anomaly detection, so that monitoring overhead can be reduced and anomalies can be detected with greater accuracy. In previous work, rules for expansion and contraction of aggregation policies according to adaptive behavior were defined. This paper represents work towards reducing the complexity of the dynamic algorithm for updating the flow-counting rule policies for anomaly detection.
Keywords: computer network security; software defined networking; telecommunication traffic; SDN; adaptive mechanism; anomaly detection; dynamic algorithm complexity reduction; flow counting rules; flow entry aggregation; network traffic monitoring; overhead monitoring; overhead reduction; Aggregates; Algorithm design and analysis; Complexity theory; Contracts; Heuristic algorithms; Monitoring; Telecommunication traffic; Anomaly detection; Network management; Network traffic monitoring; flow-counting; traffic-aggregation (ID#: 15-7586)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079109&isnumber=7079031
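The expansion step of such adaptive aggregation can be sketched as follows: a coarse prefix-counting rule whose packet count exceeds a threshold is split into its two child prefixes, so a potential anomaly can be localized with only a few extra flow entries. The threshold and the prefix representation are illustrative assumptions, not details from the paper.

```python
# Adaptive refinement of flow-counting rules. A rule is an IPv4 prefix
# (prefix_int, length); heavy rules are expanded into their children.

SPLIT_THRESHOLD = 100          # packets/interval before a rule is refined

def refine(rules, counts):
    """Return the next interval's rule set: rules whose count exceeded
    the threshold are replaced by their two child prefixes; light rules
    are kept as-is, bounding the total number of flow entries."""
    out = set()
    for p, l in rules:
        if counts.get((p, l), 0) > SPLIT_THRESHOLD and l < 32:
            out.add((p, l + 1))                  # left child  (next bit 0)
            out.add((p | 1 << (31 - l), l + 1))  # right child (next bit 1)
        else:
            out.add((p, l))
    return out

# start by monitoring everything with the default rule 0.0.0.0/0
rules = {(0, 0)}
rules = refine(rules, {(0, 0): 500})   # heavy: split into the two /1 rules
assert rules == {(0, 1), (1 << 31, 1)}
```

The complementary contraction step, merging sibling rules whose combined count is low, bounds the rule table between intervals; the paper's contribution concerns reducing the complexity of exactly this update loop.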
Yaghini, P.M.; Eghbal, A.; Khayambashi, M.; Bagherzadeh, N., “Coupling Mitigation in 3-D Multiple-Stacked Devices,” in Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 23, no. 12, pp. 2931–2944, Dec. 2015. doi:10.1109/TVLSI.2014.2379263
Abstract: A 3-D multiple-stacked IC has been proposed to support energy efficiency for data center operations as dynamic RAM (DRAM) scaling improves annually. 3-D multiple-stacked IC is a single package containing multiple dies, stacked together, using through-silicon via (TSV) technology. Despite the advantages of 3-D design, fault occurrence rate increases with feature-size reduction of logic devices, which gets worse for 3-D stacked designs. TSV coupling is one of the main reliability issues for 3-D multiple-stacked IC data TSVs. It has large disruptive effects on signal integrity and transmission delay. In this paper, we first characterize the inductance parasitics in contemporary TSVs, and then we analyze and present a classification for inductive coupling cases. Next, we devise a coding algorithm to mitigate the TSV-to-TSV inductive coupling. The coding method controls the current flow direction in TSVs by adjusting the data bit streams at run time to minimize the inductive coupling effects. After performing formal analyses on the efficiency scalability of devised algorithm, an enhanced approach supporting larger bus sizes is proposed. Our experimental results show that the proposed coding algorithm yields significant improvements, while its hardware-implemented encoder results in tangible latency, power consumption, and area.
Keywords: Capacitance; Couplings; Encoding; Inductance; Reliability; Through-silicon vias; 3-D; 3-D multiple-stacked IC; coupling; reliability; signal integrity (SI); through-silicon via (TSV) (ID#: 15-7587)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7001101&isnumber=4359553
Rekha, P.M.; Dakshayini, M., “Dynamic Network Configuration and Virtual Management Protocol for Open Switch in Cloud Environment,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 143–148, 12–13 June 2015. doi:10.1109/IADCC.2015.7154687
Abstract: Cloud data centers have to accommodate many users with isolated and independent networks in a distributed environment to support multi-tenant networking and integrity. User applications are stored separately in virtual networks. To support huge volumes of network traffic, data center networks require software to turn physically connected devices into virtual networks. Software defined networking is a new paradigm that makes networks easily programmable, helping to control and manage virtual network devices. With software defined networking, flow decisions are made based upon real-time analysis of network consumption statistics. Managing these virtual networks is the real challenge for network administrators. In this paper, we propose a novel network management approach between the controller and virtual switches and provide QoS for virtual LANs in a distributed cloud environment. The approach provides a protocol that can be deployed in cloud data center network environments using the OpenFlow architecture in switches. Our approach delivers two distinct features: dynamic network configuration and a virtual management protocol between the controller and Open vSwitch. This technique is very useful for cloud network administrators to quickly provide better network services to multi-tenant users in cloud data centers.
Keywords: cloud computing; computer centres; protocols; software defined networking; switching networks; virtualisation; OpenFlow architecture; QoS; cloud data center network environments; cloud network administrators; data center networks; distributed cloud environment; dynamic network configuration; independent networks; multitenant network; network consumption statistics; network management approach; open switch; real-time analysis; software defined networking; user applications; virtual LAN; virtual management protocol; virtual network devices; virtual switches; Ports (Computers); Protocols; Quality of service; Software defined networking; Switches; Virtual machining; OpenvSwitch; Virtual management protocol; dynamic network configuration; virtual networking traffic (ID#: 15-7588)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154687&isnumber=7154658
Carpenter, D.R.; Willingham, J., “Objectionable Currents Associated with Shock, Fire and Destruction of Equipment,” in Electrical Safety Workshop (ESW), 2015 IEEE IAS, pp. 1–11, 26–30 Jan. 2015. doi:10.1109/ESW.2015.7094952
Abstract: This paper identifies objectionable currents arising from improper wiring methods which result in fire, shock and the destruction of equipment. This information will assist those responsible for electrical safety, reliability and production. It also presents an understanding of the sources and misapplications that are often associated with objectionable currents. The information contained in this document should be useful in three ways: 1) to show how to control objectionable currents, which are responsible for the destruction of sensitive electronic equipment, shock hazards and fire hazards within a premise wiring system, based on testing models discovered through experiments; 2) to determine the reliability of existing accepted wiring methods, where experiments and tests were conducted to prove or disprove existing best practices, codes, standards and theorems; and 3) as a tutorial on properly applying theory, codes and standards, plus the proven reasons for why, where and how best practices and standards are applied.
Keywords: electric shocks; electronic equipment testing; fires; hazards; wiring; electrical production; electrical reliability; electrical safety; equipment destruction; equipment fire; equipment shock; fire hazards; improper wiring methods; objectionable currents; sensitive electronic equipment; shock hazards; Bonding; Conductors; Current measurement; Fasteners; Grounding; Metals; Wiring; Equipment Ground, Protective Ground or Grounding Conductor — the conductor required to facilitate an overcurrent protection device when a ground fault occurs. Adapted from NFPA 70 section 100 Definitions; Neutral or Grounded Circuit Conductor — a system or circuit conductor that is intentionally grounded. Adapted from NFPA 70 section 100 Definitions; Objectionable Current — the term objectionable current and stray current are used interchangeably in most nomenclature. Objectionable currents are considered to be objectionable when currents are flowing in a conductive path which is unintended and undesirable. This does not include electrical noise such as differential, transverse or common mode noise. Premise Wiring System — wiring on the secondary side of service equipment. Adapted from NFPA 70 section 100 Definitions; Stray Current — the term stray current is similar to the term objectionable current because it is reference to currents which flow in unintended paths. Stray currents include electrical noise. (ID#: 15-7589)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7094952&isnumber=7094862
Sathya, R.; Thangarajan, R., “Efficient Anomaly Detection and Mitigation in Software Defined Networking Environment,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 479–484, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124952
Abstract: A computer network or data communication network is a telecommunication network that allows computers to exchange data. Computer networks are typically built from a large number of network devices such as routers, switches and numerous types of middleboxes, with many complex protocols implemented on them. They need to accomplish very complex tasks with access to very limited tools. As a result, network management and performance tuning are quite challenging. Software-Defined Networking (SDN) is an emerging architecture that purports to be adaptable, cost-effective, dynamic and manageable, making it suitable for the high-bandwidth, changing nature of today's applications. SDN architectures decouple network control and forwarding functions, making network control directly programmable and abstracting the underlying infrastructure from applications and network services. Network security is a prominent feature of a network, ensuring accountability, confidentiality, integrity, and protection against many external and internal threats. An Intrusion Detection System (IDS) is a type of security software designed to automatically alert administrators when someone or something is trying to compromise an information system through malicious activities or security policy violations. Security violations in an SDN environment need to be identified to protect the system from attack. The proposed work aims to detect attacks on an SDN environment; detecting anomalies in an SDN environment will be more manageable and efficient.
Keywords: computer network management; computer network security; software defined networking; IDS; SDN architectures; anomaly detection; anomaly mitigation; complex protocols; computer networks; data communication network; external threats; forwarding functions; internal threats; intrusion detection system; malicious activities; network accountability; network confidentiality; network control; network control functions; network devices; network integrity; network management; network performance tuning; network protection; network security; network services; security policy violations; security software; software defined networking environment; telecommunication network; Classification algorithms; Computer architecture; Computer networks; Control systems; Entropy; Feature extraction; Protocols; Entropy based detection; Feature Selection; Flow Table; Intrusion Detection System; Software Defined Networking (ID#: 15-7590)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124952&isnumber=7124722
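The keywords above point to entropy-based detection over flow-table statistics. The paper's exact features are not given in the abstract, so the sketch below uses a hypothetical feature (destination addresses from a window of flow statistics) to show the core idea: a sharp drop in destination-address entropy, as happens when traffic converges on a single victim host, flags an anomaly.

```python
import math
from collections import Counter

def normalized_entropy(items):
    """Shannon entropy of a sample, normalized to [0, 1]."""
    counts = Counter(items)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

def is_anomalous(dst_ips, threshold=0.5):
    """Flag a flow-stats window whose destination entropy collapses,
    e.g. when most packets target one host during a flooding attack."""
    return normalized_entropy(dst_ips) < threshold
```

In an SDN deployment this check would run in the controller over periodically collected flow counters; the threshold is an assumption and would normally be calibrated against baseline traffic.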
Di Hu; Yongxue Yu; Ferrario, Antonio; Bayet, Olivier; Lin Shen; Nimmagadda, Ravi; Bonardi, Felice; Matus, Francis, “System Power Noise Analysis Using Modulated CPM,” in Electromagnetic Compatibility and Signal Integrity, 2015 IEEE Symposium on, vol., no., pp. 265–270, 15–21 March 2015. doi:10.1109/EMCSI.2015.7107697
Abstract: As the semiconductor industry advances to ever smaller technology nodes, the power distribution network (PDN) is becoming an essential design factor to ensure system performance and reliability. Time domain simulations typically utilize the chip power model (CPM), generated by Ansys RedHawk, as the current load. The typical CPM only includes current consumption over a few clock cycles, which captures the high-frequency components (several hundred MHz) but loses the mid to low frequencies. This paper describes a modulated CPM (MCPM) design and signoff process for the PDN. The first step is frequency domain analysis of the PDN to identify the die-package resonance frequency. Then a chip gate-level simulation is performed over an extended period of time to generate the VPD (Value Change Dump plus) file, with realistic low to mid frequency current components. This information is then used to modulate the CPM as the current load for the system-level time domain noise simulations. This PI analysis flow was validated using a set of three test cases, with reasonable simulation-measurement correlation achieved. This analysis flow enables more effective power/ground plane layout optimization and capacitor optimization in a timely manner.
Keywords: capacitors; chip scale packaging; circuit optimisation; distribution networks; frequency-domain analysis; integrated circuit layout; power integrated circuits; semiconductor industry; time-domain analysis; Ansys RedHawk; PDN; VPD; capacitor optimization; chip gate level simulation; chip power model; clock cycles; current consumption; die-package resonance frequency; frequency current components; frequency domain analysis; modulated CPM; power distribution network; power-ground plane layout optimization; signoff process; system power noise analysis; time domain noise simulations; time domain simulations; value change dump plus; Impedance; Load modeling; Noise; Noise measurement; Time-domain analysis; Voltage control; Voltage measurement; MCPM; PCB; correlation; measurement; package; power integrity; simulation (ID#: 15-7591)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107697&isnumber=7107640
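The flow described above modulates a short, high-frequency CPM current trace with a slow activity envelope extracted from gate-level simulation. The toy sketch below (synthetic waveforms, not the authors' data) shows how that tiling-plus-modulation step might be composed: the short cycle is repeated across the long window, each repetition scaled by the low-frequency envelope, producing a load current that carries both frequency bands.

```python
import math

def modulated_cpm(cpm_cycle, envelope):
    """Tile the short CPM current trace across the long window and
    scale each repetition by the low-frequency activity envelope."""
    out = []
    for scale in envelope:
        out.extend(scale * sample for sample in cpm_cycle)
    return out

# Synthetic stand-ins: a few clock cycles of CPM current, and a slow
# activity envelope of the kind a gate-level VPD trace would provide.
cpm = [1.0 + 0.5 * math.sin(2 * math.pi * n / 8) for n in range(8)]
env = [0.5 + 0.5 * math.sin(2 * math.pi * k / 64) for k in range(64)]
load = modulated_cpm(cpm, env)
```

The resulting `load` waveform is what would be fed to the system-level time-domain PDN simulation in place of the bare few-cycle CPM.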
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Middleware Security 2015 |
Middleware facilitates distributed processing and is of significant interest to the security world with the development of cloud and mobile applications. It is important to the Science of Security community relative to resilience, policy-based governance, and composability. The articles cited here were presented or published in 2015.
Talpur, S.R.; Abdalla, S.; Kechadi, T., “Towards Middleware Security Framework for Next Generation Data Centers Connectivity,” in Science and Information Conference (SAI), 2015, vol., no., pp. 1277–1283, 28–30 July 2015. doi:10.1109/SAI.2015.7237308
Abstract: Data Center as a Service (DCaaS) offers clients an alternative to operating their own physical data center, and the business community expects these data centers to be fully automated and to run smoothly. Geographically distributed data centers and their connectivity play a major role in next generation data centers. In order to deploy reliable connections between distributed data centers, SDN-based security and logical firewalls are attractive and desirable. We present a middleware security framework for software defined data center interconnectivity. The proposed security framework will be based on learning processes, which will reduce complexity and manage a very large number of secure connections in real-world data centers. In this paper we focus on two main objectives: (1) proposing simple yet scalable techniques for security and analysis, and (2) implementing and evaluating these techniques on real-world data centers.
Keywords: cloud computing; computer centres; firewalls; middleware; security of data; software defined networking; Data Center as a Service; SDN based security; geographically distributed data centers; logical firewalls; middleware security framework; next generation data centers connectivity; outsourced physical data center; real-world data centers; software defined data centers interconnectivity; Distributed databases; Optical switches; Routing; Security; Servers; Software; DCI (Data Center Inter-connectivity); DCaaS; Distributed Firewall; OpenFlow; SDDC; SDN; Virtual Networking (ID#: 15-7747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237308&isnumber=7237120
Dayal, A.; Tbaileh, A.; Yi Deng; Shukla, S., “Distributed VSCADA: An Integrated Heterogeneous Framework for Power System Utility Security Modeling and Simulation,” in Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, vol., no., pp. 1–6, 13–13 April 2015. doi:10.1109/MSCPES.2015.7115408
Abstract: The economic machinery of the United States is reliant on complex large-scale cyber-physical systems, which include electric power grids, oil and gas systems, transportation systems, etc. Protecting these systems and their control from security threats, and improving their robustness and resilience, are important goals. Since all these systems have Supervisory Control and Data Acquisition (SCADA) in their control centers, a number of test beds have been developed at various laboratories, on which people are trained to operate and protect these critical systems. In this paper, we describe a virtualized distributed test bed that we developed for modeling and simulating SCADA applications and for carrying out related security research. The test bed is virtualized by integrating various heterogeneous simulation components. It can be reconfigured to simulate the SCADA of a power system, a transportation system or any other critical system, provided a back-end domain-specific simulator for that system is attached to it. In this paper, we describe how we created a scalable architecture capable of simulating larger infrastructures and integrated communication models to simulate different network protocols. We also developed a series of middleware packages that integrate various simulation platforms into our test bed using the Python scripting language. To validate the usability of the test bed, we briefly describe how a power system SCADA scenario can be modeled and simulated in it.
Keywords: SCADA systems; authoring languages; control engineering computing; middleware; power system security; power system simulation; Python scripting language; back-end domain specific simulator; complex large-scale cyber-physical systems; distributed VSCADA; economic machinery; heterogeneous simulation components; integrated heterogeneous framework; middleware packages; network protocols; power system utility security modeling; power system utility security simulation platform; supervisory control and data acquisition; system protection; transportation system; virtualized distributed test bed; Databases; Load modeling; Power systems; Protocols; SCADA systems; Servers; Software; Cyber Physical Systems; Cyber-Security; Distributed Systems; NetworkSimulation; SCADA (ID#: 15-7748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115408&isnumber=7115373
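The abstract mentions Python middleware packages that glue heterogeneous simulators into one test bed. The project's actual interfaces are not shown in the abstract, so the following is a minimal hypothetical message-bus sketch of the kind such glue code might use: one component publishes measurements, another (e.g. a SCADA front-end) subscribes to them.

```python
from collections import defaultdict

class TestBedBus:
    """Minimal publish/subscribe glue between simulation components
    (e.g. a power-flow solver publishing measurements that a SCADA
    front-end consumes). Hypothetical interface, not the VSCADA API."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback for every message on `topic`."""
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        """Deliver `message` to all subscribers of `topic`."""
        for cb in self._subs[topic]:
            cb(message)

bus = TestBedBus()
readings = []
bus.subscribe("measurements/bus14", readings.append)
bus.publish("measurements/bus14", {"voltage_pu": 1.02})
```

Decoupling components behind a topic bus like this is what lets the test bed swap the back-end domain simulator (power, transportation, etc.) without changing the SCADA side.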
Heimgaertner, F.; Hoefling, M.; Vieira, B.; Poll, E.; Menth, M., “A Security Architecture for the Publish/Subscribe C-DAX Middleware,” in Communication Workshop (ICCW), 2015 IEEE International Conference on, vol., no., pp. 2616–2621, 8–12 June 2015. doi:10.1109/ICCW.2015.7247573
Abstract: The limited scalability, reliability, and security of today’s utility communication infrastructures are main obstacles for the deployment of smart grid applications. The C-DAX project aims at providing a cyber-secure publish/subscribe middleware tailored to the needs of smart grids. C-DAX provides end-to-end security, and scalable and resilient communication among participants in a smart grid. This work presents the C-DAX security architecture, and proposes different key distribution mechanisms. Security properties are defined for control plane and data plane communication, and their underlying mechanisms are explained. The presented work is partially implemented in the C-DAX prototype and will be deployed in a field trial.
Keywords: middleware; power engineering computing; power system security; security of data; smart power grids; software architecture; C-DAX project; control plane communication; cyber-secure publish/subscribe middleware; data plane communication; end-to-end security; key distribution mechanisms; publish/subscribe C-DAX middleware; reliability; resilient communication; scalability; scalable communication; security architecture; security properties; smart grid applications; utility communication infrastructures; Authentication; Encryption; Middleware; Public key; Smart grids (ID#: 15-7749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247573&isnumber=7247062
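C-DAX distributes keys so that publish/subscribe participants get end-to-end security per topic. The concrete C-DAX mechanisms are not reproduced in the abstract; the sketch below uses a shared per-topic key with HMAC (a stand-in, not the C-DAX design) to show the shape of topic-keyed message authentication between publisher and subscriber.

```python
import hmac
import hashlib

def sign(topic_key: bytes, payload: bytes) -> bytes:
    """Publisher side: tag the payload with the shared topic key."""
    return hmac.new(topic_key, payload, hashlib.sha256).digest()

def verify(topic_key: bytes, payload: bytes, tag: bytes) -> bool:
    """Subscriber side: constant-time check of the tag."""
    return hmac.compare_digest(sign(topic_key, payload), tag)

# Hypothetical key as obtained from a key-distribution service.
key = b"per-topic key (hypothetical)"
msg = b'{"feeder": 7, "voltage_pu": 0.98}'
tag = sign(key, msg)
```

In a real deployment the key-distribution mechanism (one of the design points the paper compares) determines who holds `key` and for how long; the broker never needs it, which is what makes the protection end-to-end.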
Kypus, L.; Vojtech, L.; Kvarda, L., “Qualitative and Security Parameters Inside Middleware Centric Heterogeneous RFID/IoT Networks, On-Tag Approach,” in Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, vol., no., pp. 21–25, 9–11 July 2015. doi:10.1109/TSP.2015.7296217
Abstract: The work presented in this paper started as preliminary research and analysis and ended as testing of radio frequency identification (RFID) middlewares. The intention was to get better insight into their architecture and functionality with respect to their impact on overall quality of service (QoS). The main part of the paper focuses on the lack of QoS awareness due to the missing classification of data originating from tags at the very beginning of the delivery process. The evaluation method follows up on existing research in the area of QoS for RFID, combining it with a new proposal based on the ISO 25010 standard regarding quality requirements and evaluation, and system and software quality models. The idea is to enhance the application identification area in the user memory bank with encoded QoS flags and security attributes. The proof of concept of on-tag specified classes and attributes is able to manage and intentionally influence application and data processing behavior.
Keywords: middleware; quality of service; radiofrequency identification; software quality; telecommunication computing; IoT networks; QoS awareness; middleware centric heterogeneous RFID network; on-tag approach; quality requirements; radio frequency identification middlewares; software quality models; standard ISO 25010; Ecosystems; Middleware; Protocols; Quality of service; Radiofrequency identification; Security; Standards; Application identification; IoT; QoS flags; RFID; Security attributes (ID#: 15-7750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296217&isnumber=7296206
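The proposal encodes QoS flags and security attributes into the tag's user memory bank. The abstract does not specify the on-tag layout, so the sketch below packs a purely hypothetical one (3-bit QoS class, 1-bit encryption flag, 4-bit priority) into a single byte to illustrate how such classification data could travel with the tag from the very start of the delivery process.

```python
def pack_tag_flags(qos_class: int, encrypted: bool, priority: int) -> int:
    """Hypothetical on-tag layout: bits 7-5 QoS class, bit 4 encryption
    flag, bits 3-0 priority. The returned byte would be written to the
    application-identification area of the user memory bank."""
    assert 0 <= qos_class < 8 and 0 <= priority < 16
    return (qos_class << 5) | (int(encrypted) << 4) | priority

def unpack_tag_flags(byte: int):
    """Recover (qos_class, encrypted, priority) from the packed byte."""
    return byte >> 5, bool(byte & 0x10), byte & 0x0F
```

Middleware reading the tag can then classify and prioritize the read event before it enters the delivery pipeline, which is the QoS-awareness gap the paper identifies.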
DongHyuk Lee; Namje Park; DooHo Choi, “Inter-Vessel Traffic Service Data Exchange Format Protocol Security Enhancement of User Authentication Scheme in Mobile VTS Middleware Platform,” in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, vol., no., pp. 527–529, 19–21 Aug. 2015. doi:10.1109/APNOMS.2015.7275405
Abstract: The IVEF protocol developed by IALA focuses on the technical implementation, but it does not describe information encryption for IVEF protocol data protection. Vessel traffic information is highly sensitive, so absolute reliability of the information client and data security are necessary. This paper suggests an authentication protocol to increase the security of VTS systems using a main certification server and IVEF.
Keywords: cryptographic protocols; marine communication; middleware; mobile communication; IALA; IVEF protocol data protection; VTS systems; authentication protocol; data security; information client; information encryption; inter-vessel traffic service data exchange format protocol security enhancement; main certification server; mobile VTS middleware platform; vessel traffic information; Authentication; Cryptography; Integrated circuits; Protocols; Servers; Smart phones; IVEF; Protocol; VTS (ID#: 15-7751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275405&isnumber=7275336
Ouedraogo, W.F.; Biennier, F.; Merle, P., “Optimizing Service Protection with Model Driven Security@run.time,” in Service-Oriented System Engineering (SOSE), 2015 IEEE Symposium on, vol., no., pp. 50–58, March 30 2015–April 3 2015. doi:10.1109/SOSE.2015.50
Abstract: Enterprises are more and more involved in collaborative business. This leads them to open and outsource all or part of their information system (IS) to create collaborative processes by composing business services picked from each partner's IS and to take advantage of cloud computing. Outsourcing business services in a dynamic collaboration context can bring loss of control over the IS, and new security risks can occur, leading to inconsistent protection that allows competitors to access unauthorized information. To address this issue, systematic security service invocations may be added without paying attention to the business context, leading to costly over-protection. Instead, an adaptive security service model deployment is required to provide business services with consistent protection by taking into account the collaboration context (business service data criticality, partners involved in the collaboration, etc.) and the cloud deployment and execution environment. In this paper, we propose an adaptive security model based on MDS@run.time, the marriage of the Model Driven Security (MDS) and Models@run.time approaches, allowing the appropriate security components to be selected at runtime. The MDS approach is used to generate security policies, which are interpreted at runtime to load the appropriate security mechanisms depending on the context (taking advantage of the Models@run.time approach), ensuring end-to-end business process protection. A proof-of-concept prototype is built on top of the OW2 FraSCAti middleware, validating the efficiency of our proposition. Our experiments and simulations show that MDS@run.time improves system efficiency as the over-protection risk rate increases.
Keywords: cloud computing; groupware; information systems; middleware; outsourcing; risk management; security of data; software performance evaluation; MDS@run.time; Models@run.time; OW2 FraSCAti middleware; adaptive security service model deployment; business service consistent protection; business service data criticity; business services outsourcing; cloud deployment; collaborative business; dynamic collaboration context; execution environment; information system; model driven security@run.time; over-protection risk rate; security policy generation; security risks; service protection optimization; system efficiency improvement; Authentication; Business; Collaboration; Context; Context modeling; Middleware (ID#: 15-7752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133513&isnumber=7133490
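MDS@run.time interprets generated security policies at run time and loads only the mechanisms the current context requires, instead of invoking every security service on every call. The rule format below is hypothetical (the paper's policy syntax is not given in the abstract), but the sketch illustrates that context-driven selection step.

```python
def select_mechanisms(context, policy_rules):
    """Return the security components a call actually needs, given the
    runtime context (data criticality, deployment, partners, ...).
    Rule format is hypothetical, not the MDS@run.time syntax."""
    selected = []
    for rule in policy_rules:
        if all(context.get(k) == v for k, v in rule["when"].items()):
            selected.extend(rule["apply"])
    return selected

rules = [
    {"when": {"criticality": "high"}, "apply": ["encrypt", "sign"]},
    {"when": {"deployment": "public-cloud"}, "apply": ["authenticate"]},
]
ctx = {"criticality": "high", "deployment": "public-cloud"}
```

A low-criticality call on a private deployment would match no rules and skip all mechanisms, which is exactly the over-protection cost the paper aims to avoid.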
Silva, A.; Rosa, N., “FIrM: Functional Middleware with Support to Multi-tenancy,” in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, vol., no., pp. 650–657, 24–27 March 2015. doi:10.1109/AINA.2015.249
Abstract: The use of middleware systems to support multi-tenancy applications in cloud computing environments can help to decrease application costs by reducing the hardware infrastructure and the number of software licenses required to run the software, and by facilitating its maintenance. However, the design and implementation of middleware systems that support multi-tenancy is complex, due to challenges such as hardware sharing, security, scalability, per-tenant configuration and tenant isolation. Furthermore, developers generally implement middleware systems in general-purpose object-oriented languages, without the benefit of a language that allows a higher level of abstraction and concision in writing concurrent and parallel systems. In this paper, we present FIrM (Functional Middleware), a cloud computing middleware implemented in Haskell, which allows multi-tenant-aware remote procedure calls. In order to evaluate FIrM, we carry out a performance evaluation that shows the impact of the multi-tenancy mechanisms on the performance of applications.
Keywords: cloud computing; cost reduction; functional programming; middleware; object-oriented languages; software maintenance; software performance evaluation; FIrM; Functional Middleware; Haskell; application cost reduction; cloud computing environments; configuration-per-tenant; general-purpose object-oriented languages; hardware infrastructure; hardware sharing; multitenancy applications; multitenant-aware remote procedure calls; performance evaluation; scalability; security; software license; tenant isolation; Conferences; Cloud computing; Functional programming; Middleware; Multi-tenancy (ID#: 15-7753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098034&isnumber=7097928
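FIrM itself is written in Haskell; to keep a single language across the examples on this page, here is a Python sketch (hypothetical names, not FIrM's API) of the tenant-isolation idea behind multi-tenant-aware RPC: every remote call carries a tenant id, and the dispatcher resolves state per tenant so one tenant can never observe another's data.

```python
class TenantAwareRPC:
    """Each remote call is routed with its tenant id; per-tenant state
    lives in separate dictionaries, so calls from one tenant cannot
    read or write another tenant's entries."""
    def __init__(self):
        self._state = {}

    def call(self, tenant_id, method, *args):
        """Dispatch a (toy) remote call against the tenant's own store."""
        store = self._state.setdefault(tenant_id, {})
        if method == "put":
            key, value = args
            store[key] = value
            return value
        if method == "get":
            return store.get(args[0])
        raise ValueError(f"unknown method: {method}")

rpc = TenantAwareRPC()
rpc.call("tenant-a", "put", "quota", 10)
```

The performance question the paper studies is precisely the overhead this per-call tenant resolution and isolation adds to each remote procedure call.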
Papadopoulos, G., “Challenges in the Design and Implementation of Wireless Sensor Networks: A Holistic Approach-Development and Planning Tools, Middleware, Power Efficiency, Interoperability,” in Embedded Computing (MECO), 2015 4th Mediterranean Conference on, vol., no., pp. 1–3, 14–18 June 2015. doi:10.1109/MECO.2015.7181857
Abstract: Wireless Sensor Networks (WSNs) constitute a networking area with promising impact on the environment, health, security, industrial applications and more. Each of these presents different requirements regarding system performance and QoS, and involves a variety of mechanisms such as routing and MAC protocols, algorithms, scheduling policies, security and OS, all residing over the hardware, the sensors, the actuators and the radio Tx/Rx. Furthermore, they encompass special characteristics, such as constrained energy, CPU and memory resources, and multi-hop communication, raising the level of specialist knowledge required. Although WSNs are nearing the stage of maturity and widespread use, their sustainability hinges upon the implementation of some features of paramount importance: low power consumption to achieve long operational lifetime for battery-powered unattended WSN nodes; joint optimization of connectivity and energy efficiency leading to best-effort utilization of constrained radios and minimum energy cost; self-calibration and self-healing to recover from the failures and errors to which WSNs are prone; efficient data aggregation lessening the traffic load in constrained WSNs; programmable and reconfigurable stations allowing long life-cycle development; system security enabling protection of data and system operation; short development time making the time-to-market process more efficient; and simple installation and maintenance procedures for wider acceptance. Despite considerable research and important advances in WSNs, large-scale application of the technology is still hindered by technical, complexity and cost impediments. Ongoing R&D is addressing these shortcomings by focusing on energy harvesting, middleware, network intelligence, standardization, network reliability, adaptability and scalability.
However, for efficient WSN development, deployment, testing, and maintenance, a holistic unified approach is necessary which will address the above WSN challenges by developing an integrated platform for smart environments with built-in user friendliness, practicality and efficiency. This platform will enable the user to evaluate his design by identifying critical features and application requirements, to verify by adopting design indicators and to ensure ease of development and long life cycle by incorporating flexibility, expandability and reusability. These design requirements can be accomplished to a significant extent via an integration tool that provides a multiple level framework of functionality composition and adaptation for a complex WSN environment consisting of heterogeneous platform technologies, establishing a software infrastructure which couples the different views and engineering disciplines involved in the development of such a complex system, by means of the accurate definition of all necessary rules and the design of the ‘glue-logic’ which will guarantee the correctness of composition of the various building blocks. Furthermore, to attain an enhanced efficiency, the design/development tool must facilitate consistency control as well as evaluate the selections made by the user and, based on specific criteria, provide feedback on errors concerning consistency and compatibility as well as warnings on potentially less optimal user selections. Finally, the WSN planning tool will provide answers to fundamental issues such as the number of nodes needed to meet overall system objectives, the deployment of these nodes to optimize network performance and the adjustment of network topology and sensor node placement in case of changes in data sources and network malfunctioning.
Keywords: computer network reliability; computer network security; data protection; energy conservation; energy harvesting; middleware; open systems; optimisation; quality of service; sensor placement; telecommunication network planning; telecommunication network topology; telecommunication power management; telecommunication traffic; time to market; wireless sensor networks; QoS; WSN reliability; constrained radio best-effort utilization; data aggregation; data security enabling protection; design-development tool; energy efficiency; energy harvesting; failure recovery; heterogeneous platform technology; holistic unified approach; interoperability; network intelligence; network topology adjustment; power consumption; power efficiency; sensor node placement; time-to-market process; traffic load; wireless sensor network planning tools; Electrical engineering; Embedded computing; Europe; Security; Wireless sensor networks (ID#: 15-7754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181857&isnumber=7181853
Billure, R.; Tayur, V.M.; Mahesh, V., “Internet of Things — A Study on the Security Challenges,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 247–252, 12–13 June 2015. doi:10.1109/IADCC.2015.7154707
Abstract: The vision of the Internet of Things (IoT) is to enable devices to collaborate with each other over the Internet. Multiple devices collaborating with each other have opened up opportunities in a multitude of areas, and have presented a unique set of challenges in scaling the Internet, techniques for identifying devices, power-efficient algorithms and communication protocols. Always-connected devices have access to private, sensitive information, and any breach of them is a huge security risk. The IoT environment is composed of hardware, software and middleware components, making it a complex system to manage and secure. The objective of this paper is to present the security challenges in IoT and recent developments in addressing them, through a comprehensive review of the literature.
Keywords: Internet of Things; data privacy; middleware; security of data; IoT; hardware component; information privacy; security risk; software component; Computers; Jamming; Lead; Middleware; Radiofrequency identification; Reliability; Security; security in IOT (ID#: 15-7755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154707&isnumber=7154658
Singh, J.; Pasquier, T.F.J.-M.; Bacon, J.; Eyers, D., “Integrating Messaging Middleware and Information Flow Control,” in Cloud Engineering (IC2E), 2015 IEEE International Conference on, vol., no., pp. 54–59, 9–13 March 2015. doi:10.1109/IC2E.2015.13
Abstract: Security is an ongoing challenge in cloud computing. Currently, cloud consumers have few mechanisms for managing their data within the cloud provider’s infrastructure. Information Flow Control (IFC) involves attaching labels to data, to govern its flow throughout a system. We have worked on kernel-level IFC enforcement to protect data flows within a virtual machine (VM). This paper makes the case for, and demonstrates the feasibility of an IFC-enabled messaging middleware, to enforce IFC within and across applications, containers, VMs, and hosts. We detail how such middleware can integrate with local (kernel) enforcement mechanisms, and highlight the benefits of separating data management policy from application/service-logic.
Keywords: cloud computing; data protection; middleware; security of data; virtual machines; VM; application logic; cloud consumers; cloud provider infrastructure; data flow protection; data management policy; information flow control; kernel enforcement mechanisms; kernel-level IFC enforcement; local enforcement mechanisms; messaging middleware integration; service-logic; virtual machine; Cloud computing; Context; Kernel; Runtime; Security; Servers; Information Flow Control; distributed systems; policy; security (ID#: 15-7756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092899&isnumber=7092808
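The entry above describes Information Flow Control as attaching labels to data to govern its flow. As background for readers unfamiliar with the idea, the following is a minimal sketch of the classic label-based flow check; it is illustrative only, and the names are hypothetical rather than taken from the paper's implementation:

```python
# Illustrative sketch of label-based Information Flow Control (IFC).
# Each entity carries a set of secrecy tags and a set of integrity tags.
# A flow from source to destination is safe only if secrecy can only
# grow and integrity can only shrink along the flow.

def flow_allowed(src_secrecy, src_integrity, dst_secrecy, dst_integrity):
    """Classic safe-flow rule over tag sets (subset comparisons)."""
    return src_secrecy <= dst_secrecy and dst_integrity <= src_integrity

# A message labelled {"medical"} may flow to a service that also holds
# the "medical" secrecy tag...
assert flow_allowed({"medical"}, set(), {"medical", "billing"}, set())
# ...but not to a service lacking that tag.
assert not flow_allowed({"medical"}, set(), {"billing"}, set())
```

An IFC-enabled messaging middleware, as the paper proposes, would evaluate a check of this shape on every publish/deliver step rather than only inside one kernel.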
Shi-Wei Zhao; Ze-Wen Cao; Wen-Sen Liu, “OSIA: Open Source Intelligence Analysis System Based on Cloud Computing and Domestic Platform,” in Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, vol., no., pp. 371–375, 24–26 April 2015. doi:10.1109/ICISCE.2015.89
Abstract: Information safety is significant for state security, especially for intelligence services. An OSIA (open source intelligence analysis) system based on cloud computing and a domestic platform is designed and implemented in this paper. For the sake of the security and utility of OSIA, all of the middleware and the OS involved are compatible with domestic software. The OSIA system concentrates on analyzing open-source textual intelligence and adopts a self-designed distributed crawler system, so that a closed circle is formed from intelligence acquisition to analysis and push service. This paper also illustrates some typical anti-terrorist applications, such as “organizational member discovery” based on the Stanford parser and a clustering algorithm, and “member relation exhibition” based on a parallelized PageRank algorithm. The experimental results show that the OSIA system is suitable for large-scale textual intelligence analysis.
Keywords: cloud computing; data mining; grammars; middleware; parallel algorithms; public domain software; security of data; text analysis; OS; OSIA system; Stanford parser; antiterrorist; cluster algorithm; domestic platform; domestic software; information safety; intelligence acquisition; intelligence service; large scale textual intelligence analysis; member relation exhibition; middleware; open source intelligence analysis system; open source text intelligence; organizational member discovery; paralleled PageRank algorithm; push service; self-designed distributed crawler system; Algorithm design and analysis; Artificial intelligence; Crawlers; Operating systems; Security; Servers; Text mining; domestic platform; intelligence analysis system; text mining (ID#: 15-7757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120629&isnumber=7120439
Hancock, M.B.; Varela, C.A., “Augmenting Performance for Distributed Cloud Storage,” in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, vol., no., pp. 1189–1192, 4–7 May 2015. doi:10.1109/CCGrid.2015.124
Abstract: The devices people use to capture multimedia have changed over the years with the rise of smart phones. Smart phones are readily available, easy to use, and capture multimedia at high quality. While consumers capture all of this media, device storage capacities are not growing at the same pace, so people look towards cloud storage solutions. The typical consumer stores files with a single provider, but wants a solution that is quick to access, reliable, and secure. Using multiple providers can reduce cost and improve overall performance. We present a middleware framework called Distributed Indexed Storage in the Cloud (DISC) to improve all aspects a user expects of a cloud provider. The process of uploading and downloading is essentially transparent to the user: uploads and downloads proceed in parallel by distributing subsets of the file across the multiple cloud providers that the system deems fit based on policies. Reliability is another important feature of DISC. To improve reliability, we propose a solution that replicates the same subset of the file across different providers; when one provider is unresponsive, the data can be pulled from another provider holding the same subset. Security is of great importance when dealing with consumers’ data, and we inherently gain security when improving reliability: since the file is distributed in subsets, no single provider holds the full file. In our experiments, performance improvements are observed when delivering and retrieving files compared to the standard approach. The results are promising, saving upwards of eight seconds in processing time, and are expected to improve as more cloud providers are added.
Keywords: cloud computing; middleware; multimedia systems; smart phones; storage management; DISC; augmenting performance; distributed cloud storage; distributed indexed storage in the cloud; middleware framework; multimedia; Bandwidth; Cloud computing; Instruction sets; Multimedia communication; Reliability; Security; Throughput; cloud services (ID#: 15-7758)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152618&isnumber=7152455
Casoni, M.; Grazia, C.A.; Klapez, M.; Patriciello, N., “Towards Emergency Networks Security with Per-Flow Queue Rate Management,” in Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, vol., no., pp. 493–498, 23–27 March 2015. doi:10.1109/PERCOMW.2015.7134087
Abstract: When statistical multiplexing is used to provide connectivity to a number of client hosts through a high-delay link, the original TCP, as well as TCP variants designed to improve performance on those links, often provides poor performance and sub-optimal QoS properties. To guarantee intra-protocol fairness, inter-protocol friendliness, low queue utilization and optimal throughput in mission-critical scenarios, the Congestion Control Middleware Layer (C2ML) has been proposed as a tool for centralized and collaborative resource management. However, C2ML offers only very limited security guarantees. Because emergencies may be natural or man-made, in the latter case there may be interest in cutting legitimate users off from the communication networks that support disaster recovery operations. In this paper we present Queue Rate Management (QRM), an Active Queue Management scheme able to provide protection from Resource Exhaustion Attacks in scenarios where access to the shared link is controlled by C2ML; the proposed algorithm checks whether a node is exceeding its allowed rate, and consequently decides whether to keep or drop packets coming from that node. We mathematically prove that with QRM the gateway queue size can never exceed the Bandwidth-Delay Product of the channel. Furthermore, we use the ns-3 simulator to compare QRM with CoDel and RED, showing how QRM provides better performance in terms of both throughput and QoS guarantees when employed with C2ML.
Keywords: business continuity; computer network management; computer network performance evaluation; computer network security; queueing theory; statistical multiplexing; telecommunication congestion control; transport protocols; C2ML; CoDel; QRM; RED; active queue management scheme; bandwidth-delay product; centralized resource management; collaborative resource management; congestion control middleware layer; disaster recovery operations; emergency network security; high-delay link; interprotocol friendliness; intraprotocol fairness; mission-critical scenarios; ns-3 simulator; per-flow queue rate management; queue rate management; resource exhaustion attacks; statistical multiplexing; suboptimal QoS properties; Bandwidth; Delays; Emergency services; IP networks; Logic gates; Queueing analysis; Throughput; AQM; Congestion control; Emergency Networks; Middleware; Queueing Delay; Satellite (ID#: 15-7759)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134087&isnumber=7133953
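The core decision QRM makes, checking whether a node exceeds its allowed rate and dropping excess packets, is in the general family of per-flow rate policing. The sketch below shows a generic token-bucket version of that decision; it is a hedged illustration of the technique, not the QRM algorithm specified in the paper:

```python
class TokenBucket:
    """Per-flow rate check: admit a packet only if the flow has budget.
    Generic token-bucket sketch, not the paper's QRM algorithm."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps        # allowed rate in bytes per second
        self.burst = burst_bytes    # maximum accumulated budget
        self.tokens = burst_bytes   # current budget, starts full
        self.last = 0.0             # timestamp of last update

    def admit(self, packet_bytes, now):
        # Refill budget in proportion to elapsed time, capped at burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True             # keep the packet
        return False                # drop: flow exceeds its allowed rate

bucket = TokenBucket(rate_bps=1000, burst_bytes=1500)
assert bucket.admit(1500, now=0.0)      # first packet fits the initial burst
assert not bucket.admit(1500, now=0.5)  # only 500 bytes refilled: drop
assert bucket.admit(1500, now=2.0)      # budget recovered after 1.5 s more
```

A gateway applying such a check per node, with the allowed rate supplied by a coordination layer like C2ML, bounds each flow's contribution to the shared queue, which is the intuition behind QRM's queue-size guarantee.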
Memon, S.; Riedel, M.; Koeritz, C.; Grimshaw, A., “Interoperable Job Execution and Data Access through UNICORE and the Global Federated File System,” in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, vol., no., pp. 269–274, 25–29 May 2015. doi:10.1109/MIPRO.2015.7160278
Abstract: Computing middlewares play a vital role in abstracting the complexities of backend resources by providing seamless access to heterogeneous execution management services. Scientific communities take advantage of such technologies to focus on science rather than dealing with the technical intricacies of accessing resources. Multi-disciplinary communities often bring dynamic requirements which are not trivial to realize; specifically, attaining massively parallel data processing on supercomputing resources requires access to large data sets from widely distributed and dynamic sources located across organizational boundaries. To support this scenario, we present a combination that integrates the UNICORE middleware and the Global Federated File System. Furthermore, the paper gives an architectural and implementation perspective of the UNICORE extension and its interaction with the Global Federated File System space through computing, data and security standards.
Keywords: file organisation; information retrieval; middleware; parallel processing; UNICORE middleware; backend resource complexity abstracting; data access; global federated file system; heterogeneous execution management services; interoperable job execution; multidisciplinary community; organizational boundary; parallel data processing; security standards; supercomputing resources; Communities; File systems; Security; Servers; Standards; Web services (ID#: 15-7760)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160278&isnumber=7160221
Scandurra, P.; Psaila, G.; Capilla, R.; Mirandola, R., “Challenges and Assessment in Migrating IT Legacy Applications to the Cloud,” in Maintenance and Evolution of Service-Oriented and Cloud-Based Environments (MESOCA), 2015 IEEE 9th International Symposium on the, vol., no., pp. 7–14, 2–2 Oct. 2015. doi:10.1109/MESOCA.2015.7328120
Abstract: The incessant trend of software engineers redesigning legacy systems with a service-centric engineering approach brings new challenges for software architects and developers. Today, engineering and deploying software as a service requires specific Internet protocols, middleware and languages that often complicate the interoperability of software at all levels. Moreover, cloud computing demands stringent quality requirements, such as security, scalability, and interoperability among others, to provide services and data across networks more efficiently. As software engineers must face the problem of redesigning and redeploying systems as services, we explore in this paper the challenges found during the migration of an existing system to a cloud solution, based on a set of quality requirements that includes the vendor lock-in factor. We also present a set of assessment activities and guidelines to support migration to the Cloud by adopting SOA and Cloud modeling standards and tools.
Keywords: Business; Cloud computing; Interoperability; Scalability; Service-oriented architecture; Software as a service; Cloud computing; cloud migration; cloud modeling; interoperability; portability; service-centric engineering; service-oriented architecture; service-oriented computing; vendor Lock-in (ID#: 15-7761)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328120&isnumber=7328114
Bonino, D.; Alizo, M.T.D.; Alapetite, A.; Gilbert, T.; Axling, M.; Udsen, H.; Soto, J.A.C.; Spirito, M., “ALMANAC: Internet of Things for Smart Cities,” in Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, vol., no., pp. 309–316, 24–26 Aug. 2015. doi:10.1109/FiCloud.2015.32
Abstract: Smart cities advocate future environments where sensor pervasiveness, data delivery and exchange, and information mash-up enable better support of every aspect of (social) life in human settlements. As this vision matures, evolves and is shaped against several application scenarios and adoption perspectives, a common need for scalable, pervasive, flexible and replicable infrastructures emerges. Such a need is currently fostering new design efforts to guarantee performance, reuse and interoperability while avoiding the knowledge silos typical of early efforts on similar topics, e.g., automation in buildings and homes. This paper introduces a federated smart city platform (SCP) developed in the context of the ALMANAC FP7 EU project and discusses lessons learned during the first experimental application of the platform to a smart waste management scenario in a medium-sized European city. The ALMANAC SCP aims to integrate the Internet of Things (IoT), capillary networks and metro access networks to offer smart services to the citizens, and thus enable Smart City processes. The key element of the SCP is a middleware supporting semantic interoperability of heterogeneous resources, devices, services and data management. The platform is built upon a dynamic federation of private and public networks, while supporting end-to-end security and privacy. Furthermore, it also enables the integration of services that, although natively external to the platform itself, enrich the set of data and information used by the supported Smart City applications.
Keywords: Internet of Things; data privacy; middleware; open systems; smart cities; waste management; ALMANAC FP7 EU project; European city; Internet of Things; capillary networks; data management; end-to-end privacy; end-to-end security; heterogeneous devices; heterogeneous resources; heterogeneous services; metro access networks; middleware; private networks; public networks; semantic interoperability; sensor pervasiveness; smart city platform; smart waste management scenario; Cities and towns; Context; Data integration; Metadata; Semantics; Smart cities; federation; internet of things; platform; smart city (ID#: 15-7762)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300833&isnumber=7300539
Hoefling, M.; Heimgaertner, F.; Menth, M.; Katsaros, K.V.; Romano, P.; Zanni, L.; Kamel, G., “Enabling Resilient Smart Grid Communication over the Information-centric C-DAX Middleware,” in Networked Systems (NetSys), 2015 International Conference and Workshops on, vol., no., pp. 1–8, 9–12 March 2015. doi:10.1109/NetSys.2015.7089080
Abstract: The limited scalability, reliability, and security of today’s utility communication infrastructures are the main obstacles to the deployment of smart grid applications. The C-DAX project aims at providing and investigating a communication middleware for smart grids to address these problems, applying the information-centric networking and publish/subscribe paradigm. We briefly describe the C-DAX architecture, and extend it with a flexible resilience concept based on resilient data forwarding and data redundancy. Different levels of resilience support are defined, and their underlying mechanisms are described. Experiments show that the resilience mechanisms perform quickly and reliably.
Keywords: middleware; power engineering computing; smart power grids; communication middleware; data redundancy; flexible resilience concept; information-centric C-DAX middleware; information-centric networking; publish/subscribe paradigm; resilient data forwarding; resilient smart grid communication; smart grids; utility communication infrastructures; Delays; Monitoring; Reliability; Resilience; Security; Subscriptions; Synchronization (ID#: 15-7763)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7089080&isnumber=7089054
Bruno José Olivieri de Souza; Endler, M., “Coordinating Movement Within Swarms of UAVs Through Mobile Networks,” in Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, vol., no., pp. 154–159, 23–27 March 2015. doi:10.1109/PERCOMW.2015.7134011
Abstract: Unmanned Aerial Vehicles (UAVs) have many civilian and military applications, such as search and rescue missions, cartography and terrain exploration, industrial plant control, surveillance, public security, firefighting, and others. Swarms of UAVs may further increase the effectiveness of these tasks, since they enable larger coverage, more accurate or redundant sensed data, fault tolerance, etc. Swarms of aerial robots require real-time coordination, which is just a specific case of M2M collaboration. But one of the biggest challenges of UAV swarming is that this real-time coordination has to happen in a wide-area setting where it is expensive, or even impossible, to set up a dedicated wireless infrastructure for this purpose. Instead, one has to resort to conventional 3G/4G wireless networks, where communication latencies are in the range of 50–150 ms. In this paper we tackle the problem of UAV swarm formation and maintenance in areas covered by such mobile networks, and propose a bandwidth-efficient multi-robot coordination algorithm for these settings. The coordination algorithm was implemented on top of our mobile middleware SDDL, uses its group-cast communication capability, and was tested with simulated UAVs.
Keywords: 3G mobile communication; 4G mobile communication; autonomous aerial vehicles; control engineering computing; middleware; mobile computing; multi-robot systems; 3G-4G wireless networks; M2M collaboration; SDDL; UAV swarming; aerial robots; bandwidth-efficient multirobot coordination algorithm; cartography; civilians; communication latencies; dedicated wireless infrastructure; fault tolerance; firefight; group-cast communication capability; industrial plant control; military applications; mobile middleware; mobile networks; movement coordination; public security; redundant sensed data; search and rescue missions; surveillance; terrain exploration; unmanned aerial vehicles; wide-area setting; Collaboration; Middleware; Mobile communication; Mobile computing; Monitoring; Protocols; Smart phones; UAVs; machine-to-machine collaboration; mobile networks; movement coordination; pervasive system; swarms of mobile robots (ID#: 15-7764)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134011&isnumber=7133953
Indalecio, G.; Gómez-Folgar, F.; García-Loureiro, A.J., “Comparison of State-of-the-Art Distributed Computing Frameworks with the GWM,” in Electron Devices (CDE), 2015 10th Spanish Conference on, vol., no., pp. 1–4, 11–13 Feb. 2015. doi:10.1109/CDE.2015.7087480
Abstract: We have analysed the landscape of heterogeneous computing solutions in order to understand and explain the position of our application, the General Workload Manager, in that landscape. We have classified several applications in the following groups: Grid middleware, Grid powered applications, Cloud computing, and modern lightweight solutions. We have successfully analysed the characteristics of those groups and found similar characteristics in our application, which allows for a better comprehension of both the landscape of existing solutions and the General Workload Manager.
Keywords: cloud computing; grid computing; middleware; resource allocation; distributed computing framework; general workload manager; grid middleware; grid powered application; Cloud computing; Computational modeling; Computers; Electron devices; Libraries; Security (ID#: 15-7765)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087480&isnumber=7087435
Jarmakiewicz, J.; Podlasek, T., “Design and Implementation of Multilevel Security Subsystem Based on XACML and WEB Services,” in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, vol., no., pp. 1–8, 18–19 May 2015. doi:10.1109/ICMCIS.2015.7158686
Abstract: Controlled sharing of confidential information in a military environment, especially as part of joint and coalition forces, is an important means to achieve network-centricity goals. Over the last few years, a technology for building the Service-Oriented Architecture has been developed. The Service-Oriented Architecture maps the concept of distributed service-oriented processing and is a good application framework for integrating heterogeneous military systems. However, these systems may process confidential data divided into hierarchical classification levels, which raises the question: can a Service-Oriented Architecture serve as a middleware layer to integrate such systems? The paper presents selected cases of information systems cooperating in a federation of systems. We developed the functional mechanisms according to the XACML architecture and proposed the necessary attributes for users and data, which enabled control of information exchange and authorization of users to access sensitive information resources. The developed MLS implementations were tested for interoperability in the consortium and in a domestic test environment. In June 2012, both implementations were successfully tested in an international environment during interoperability testing with foreign partners (Germany) and the NC3A agency in the NATO Secret network during the CWIX 2012 exercises.
Keywords: Web services; XML; authorisation; information systems; open systems; program testing; service-oriented architecture; NATO Secret network; SOA; Web services; XACML architecture; eXtensible Access Control Markup Language; information exchange; information systems cooperation; interoperability testing; multilevel security subsystem; service-oriented architecture; systems federation; Authentication; Databases; Sensitivity; Servers; Service-oriented architecture; C4I Systems; Common Operating Picture; Information sharing; Multi Level Security; SOA; WEB Services; XACML (ID#: 15-7766)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158686&isnumber=7158667
Lu Songtao; Qi Ming, “The Sender Controlled Security Model for Message Service,” in Autonomous Decentralized Systems (ISADS), 2015 IEEE Twelfth International Symposium on, vol., no., pp. 187–191, 25–27 March 2015. doi:10.1109/ISADS.2015.46
Abstract: The publish/subscribe pattern is a reliable way of providing message service. Compared with the store-and-forward mode and the Web Service request/response mode, asynchronous message delivery offers better flexibility and scalability: the message is pushed by the message server to the subscriber, so the message consumer can receive messages without issuing requests. Access control is managed by the message server in the traditional message service model, but this is not suitable for the complex SWIM (System Wide Information Management). SWIM is a very large and complex system; the message sender and the message server are probably not controlled by the same department, so it is difficult to guarantee the fairness and security of access control in the message service. In order to improve the credibility of secure information transmission between different departments, this paper proposes a message service security model based on sender control, using the JMS publish/subscribe model as an example.
Keywords: authorisation; information management; message passing; middleware; JMS publish/subscribe model; SWIM; access control; asynchronous message delivery; information security transmission; message server; message service security model; sender controlled security model; system wide information management; Access control; Authentication; Encryption; Message service; Servers; XML; SWIM; message sender access control; message service; publish subscribe (ID#: 15-7767)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098257&isnumber=7098213
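The sender-controlled model described above moves the access decision from the broker's own policy to metadata supplied by the publisher. The following toy sketch illustrates that idea only; the class and names are hypothetical and do not reflect the paper's JMS-based design:

```python
class SenderControlledTopic:
    """Toy pub/sub topic in which each message carries the sender's own
    access list; the broker forwards it only to subscribers the sender
    authorized. Illustrative sketch, not the paper's implementation."""

    def __init__(self):
        self.subscribers = {}   # subscriber id -> inbox (list of messages)

    def subscribe(self, sub_id):
        self.subscribers[sub_id] = []

    def publish(self, body, allowed):
        # The sender, not the broker, decides who may receive the message.
        for sub_id, inbox in self.subscribers.items():
            if sub_id in allowed:
                inbox.append(body)

topic = SenderControlledTopic()
topic.subscribe("atc-center")
topic.subscribe("third-party")
topic.publish("flight plan update", allowed={"atc-center"})
assert topic.subscribers["atc-center"] == ["flight plan update"]
assert topic.subscribers["third-party"] == []
```

The point of the model is that even a broker operated by another department cannot widen delivery beyond the sender's list, which is the fairness property the abstract is concerned with.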
Allan Delon Barbosa Araújo; Paulo Caetano da Silva, “PERSEC — Middleware for Multiple Encryption in Web Services,” in Information Technology - New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 609–614, 13–15 April 2015. doi:10.1109/ITNG.2015.101
Abstract: Web services represent a way to share data and are a common solution for interoperability between heterogeneous systems. However, because they run on a public infrastructure subject to attacks, addressing security has become indispensable and challenging. To ensure the security of these applications, safety specifications such as XML Signature, XML Encryption and WS-Security are generally used. However, applying these specifications may degrade system performance, especially the specification responsible for encryption, XML Encryption. This study, besides investigating the causes of this problem, proposes a solution designed to reduce the impact of this degradation through the combined use of different cryptographic algorithms to encrypt SOAP messages.
Keywords: Web services; cryptography; middleware; open systems; PERSEC; SOAP message encryption; WS-security specification; Web services; XML encryption; XML signature; cryptographic algorithms; heterogeneous systems; interoperability; public infrastructure; safety specifications; Encryption; Safety; Simple object access protocol; XML; SOAP; XML schema; XML security specifications; cryptographic algorithms; performance (ID#: 15-7768)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113540&isnumber=7113432
Bhatnagar, R.; Patel, J., “Scady: A Scalable & Dynamic Toolkit for Enhanced Performance in Grid Computing,” in Pervasive Computing (ICPC), 2015 International Conference on, vol., no., pp. 1–5, 8–10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7087085
Abstract: Grid computing has resulted in faster execution of applications, and the demand for application execution speed increases day by day. To achieve this, suitable middleware is needed, configured to the specific requirements of the application. An application can execute faster only when there is no node or network failure; moreover, if a node fails at run time, there should be an alternative arrangement to reconfigure another node to complete the task assigned to the failed node. Thus, a middleware is needed that can be configured according to the requirements of the application; such a middleware would help increase application performance. Most currently available middleware does not facilitate dynamic configuration, which has become one of the reasons grid adoption and implementation fail in many organizations. This paper analyzes the present GUI grid, Alchemi, with its components and challenges. It also proposes a toolkit, Scady, which solves the challenges of Alchemi and provides higher performance. Two experiments were conducted, and their result analysis shows the performance of the proposed toolkit.
Keywords: graphical user interfaces; grid computing; middleware; performance evaluation; Alchemi; GUI grid; Scady; dynamic toolkit; enhanced grid computing performance; network failure; scalable toolkit; Computers; Graphical user interfaces; Grid computing; High performance computing; Middleware; Peer-to-peer computing; Security; Executors; Managers; Node failure; Reallocation; Session (ID#: 15-7769)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087085&isnumber=7086957
Catuogno, L.; Turchi, S., “The Dark Side of the Interconnection: Security and Privacy in the Web of Things,” in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, vol., no., pp. 205–212, 8–10 July 2015. doi:10.1109/IMIS.2015.86
Abstract: The Web of Things (WoT) promises to dramatically boost the potentiality of interconnecting smart and physical devices over the Internet as it not only enhances ergonomics and productivity of the Internet of Things (IoT), but it also introduces new capabilities for device interoperation and data aggregation and analysis. These advances pose the challenge of preserving data security and privacy (S&P), as well as the reliability of the overall infrastructure. Deploying existing S&P solutions and technologies in the WoT is not straightforward because of its potential vastness, its intrinsic inhomogeneity and the wide variety of involved entities and interests. In such scenario, every choice comes from a non-trivial trade-off among different aspects including security, availability and legal issues. In this paper, we investigate the nature of this trade-off, pointing out the different kinds of S&P issues and surveying some of the available solutions. In addition, we discuss the major issues raised while securing an existing WoT infrastructure.
Keywords: Internet of Things; data analysis; data privacy; security of data; telecommunication network reliability; IoT; S&P; Web of Things; WoT; data aggregation; data security and privacy; device interoperation; Access control; Intelligent sensors; Internet of things; Middleware; Privacy; Security; Web of Things (ID#: 15-7770)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284949&isnumber=7284886
Kodym, O.; Benes, F.; Svub, J., “EPC Application Framework in the Context of Internet of Things,” in Carpathian Control Conference (ICCC), 2015 16th International, vol., no., pp. 214–219, 27–30 May 2015. doi:10.1109/CarpathianCC.2015.7145076
Abstract: Implementing the Internet of Things philosophy on existing communication networks requires new types of services and interoperability. One of the desired innovations is communication between the existing IP world and the new-generation network: not just networks of smart devices, which may not always have IP connectivity, but also other RFID-labeled objects and sensors. To serve high-quality applications that need more specific parameters of these objects, such as location, serial number, and distinctive, unique identifiers, the existing network and system infrastructure can be extended with new information and naming services. Their purpose is not only to assign a unique identifier to an object, but also to let users access other information associated with the selected object. The technology that enables the data processing, filtering and storage is defined in the Electronic Product Code Application Framework (EPCAF) as RFID middleware and EPCIS. One implementation of these standards is the open-source solution Fosstrak. We experimented with the Fosstrak system, which was developed at the Massachusetts Institute of Technology (MIT) as an academic initiative; we now aim to prove its benefits in a business environment. The project also addresses the connection and linking of EPCIS-class systems through ONS systems.
Keywords: IP networks; Internet of Things; filtering theory; middleware; open systems; product codes; radiofrequency identification; storage management; EPC application framework; EPCAF; EPCIS class; Fosstrak system; IP connectivity; IP world; MIT; Massachusetts Institute of Technology; ONS system; RFID middleware; RFID-labeled object; academic initiative; business environment; communication network; data processing; electronic product code application framework; filtering; high-quality application; information service; interoperability; naming service; new generation network; open source solution Fosstrak; smart device; storage; system infrastructure; Artificial neural networks; Interoperability; Product codes; Standards; Technological innovation; Testing; Fosstrak; IPv6; IoT (Internet of Things); ONS (Object name services); RFID security (ID#: 15-7771)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145076&isnumber=7145033
Maia, M.E.F.; Andrade, R.M.C., “System Support for Self-Adaptive Cyber-Physical Systems,” in Distributed Computing in Sensor Systems (DCOSS), 2015 International Conference on, vol., no., pp. 214–215, 10–12 June 2015. doi:10.1109/DCOSS.2015.33
Abstract: As the number of interacting devices and the complexity of cyber-physical systems increases, self-adaptation is a natural solution to address challenges faced by software developers. To provide a systematic and unified solution to support the development and execution of cyber-physical systems, this doctoral thesis proposes the creation of an environment that offers mechanisms to facilitate the technology-independent communication and uncoupled interoperable coordination between interacting entities of the system, as well as the flexible and adaptable execution of the functionalities specified for each application. The outcome is a set of modules to help developers to face the challenges of cyber-physical systems.
Keywords: security of data; adaptable execution; doctoral thesis; flexible execution; interacting devices; self-adaptive cyber-physical systems; software developers; system support; technology-independent communication; uncoupled interoperable coordination; Actuators; Computer architecture; Context; Medical services; Middleware; Cyber-Physical Systems; Middleware; Self-Adaptation (ID#: 15-7772)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165045&isnumber=7164869
Luzuriaga, J.E.; Perez, M.; Boronat, P.; Cano, J.C.; Calafate, C.; Manzoni, P., “A Comparative Evaluation of AMQP and MQTT Protocols over Unstable and Mobile Networks,” in Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, vol., no., pp. 931–936, 9–12 Jan. 2015. doi:10.1109/CCNC.2015.7158101
Abstract: Message oriented middleware (MOM) refers to the software infrastructure supporting sending and receiving messages between distributed systems. AMQP and MQTT are the two most relevant protocols in this context. They are extensively used for exchanging messages since they provide an abstraction of the different participating system entities, alleviating their coordination and simplifying the communication programming details. These protocols, however, have not been thoroughly tested in the context of mobile or dynamic networks like vehicular networks. In this paper we present an experimental evaluation of both protocols in such scenarios, characterizing their behavior in terms of message loss, latency, jitter and saturation boundary values. Based on the results obtained, we provide criteria of applicability of these protocols, and we assess their performance and viability. This evaluation is of interest for the upcoming applications of MOM, especially to systems related to the Internet of Things.
Keywords: Internet of Things; jitter; middleware; mobile radio; protocols; queueing theory; radiotelemetry; AMQP protocol; MOM; MQTT protocol; advanced message queuing protocol; distributed system; message oriented middleware; message queuing telemetry transport; mobile network; saturation boundary value; Jitter; Method of moments; Mobile communication; Mobile computing; Production; Protocols; Security (ID#: 15-7773)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158101&isnumber=7157933
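The metrics Luzuriaga et al. evaluate (message loss, latency, jitter, saturation) can be computed from simple send/receive timestamp logs. The sketch below is an illustrative reconstruction, not the authors' tooling: the function name and the log format are assumptions, and the jitter update follows the RFC 3550 running-estimate formula rather than whatever estimator the paper used.

```python
def link_metrics(sent, received):
    """Compute loss rate, mean latency, and jitter from timestamp logs.

    sent: dict msg_id -> send time (seconds); received: dict msg_id -> receive time.
    Messages missing from `received` count as lost. Jitter uses the
    RFC 3550 running estimate: J += (|D| - J) / 16.
    """
    delivered = [m for m in sent if m in received]
    loss = 1.0 - len(delivered) / len(sent)
    latencies = [received[m] - sent[m] for m in delivered]
    mean_latency = sum(latencies) / len(latencies)
    jitter = 0.0
    for prev, cur in zip(latencies, latencies[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return loss, mean_latency, jitter
```

For example, four messages sent one second apart with one never delivered yields a 25% loss rate, and the latency spread of the delivered messages drives the jitter estimate.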
Quantum Computing Security 2015 |
While quantum computing is still in its early stage of development, large-scale quantum computers promise to be able to solve certain problems much more quickly than any classical computer using the best currently known algorithms. Quantum algorithms, such as Simon’s algorithm, run faster than any possible probabilistic classical algorithm. For the Science of Security, the speed, capacity, and flexibility of qubits over digital processing offer still greater promise and relate to the hard problems of resilience, predictive metrics, and composability. Quantum computing is also a hard problem of interest to cryptography. The research work presented here was published in 2015.
Krawec, W.O., “Security Proof of a Semi-Quantum Key Distribution Protocol,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 686–690, 14–19 June 2015. doi:10.1109/ISIT.2015.7282542
Abstract: Semi-quantum key distribution protocols are designed to allow two users to establish a secure secret key when one of the two users is limited to performing certain “classical” operations. Several such protocols have been developed recently; however, due to their reliance on a two-way quantum communication channel (and thus the attacker’s opportunity to interact with the qubit twice), their security analysis is difficult, and little is known about how secure they are compared to their fully quantum counterparts. In this paper we prove the unconditional security of a particular semi-quantum protocol and derive an expression for its key rate in the asymptotic scenario.
Keywords: cryptographic protocols; quantum cryptography; telecommunication channels; telecommunication security; key rate; qubit; secure secret key; security analysis; security proof; semi quantum key distribution protocols; two-way quantum communication channel; Atmospheric measurements; Entropy; Error analysis; Noise; Particle measurements; Protocols; Security; Cryptography; Quantum Computing (ID#: 15-7834)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282542&isnumber=7282397
DiVincenzo, D.P., “The Memory Problem of Quantum Information Processing,” in Proceedings of the IEEE, vol. 103, no. 8, pp. 1417–1425, Aug. 2015. doi:10.1109/JPROC.2015.2432125
Abstract: In quantum information processing, the fundamental rules of information representation are different than in the classical setting. The fundamental unretrievability of some forms of information from quantum memory enables unique capabilities that enhance privacy and security. Unique correlations between quantum bits, referred to as quantum entanglement, enable fundamentally faster algorithms for important computational problems. Quantum bits are very delicate and require extraordinarily low noise levels in order to be stored successfully. However, the long-term storage of quantum information is not hopeless: relatively new discoveries about quantum entanglement show that effective use of redundancy should make the quantum memory problem solvable. Laboratory capabilities are just starting to make it possible to test these ideas, and a clear concept of the architectural solutions to scalable quantum computing is emerging.
Keywords: quantum computing; quantum entanglement; redundancy; information representation rules; laboratory capabilities; noise levels; privacy enhancement; quantum bits; quantum information processing; quantum information storage; quantum memory problem; security enhancement; Information processing; Information representation; Memory management; Photonics; Quantum computing; Quantum entanglement; Reliability; Information representation; information technology; (ID#: 15-7835)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7137628&isnumber=7158995
Mogos, G., “Software Implementation of Bechmann-Pasquinucci and Peres Protocol for Qutrits,” in Networks, Computers and Communications (ISNCC), 2015 International Symposium on, vol., no., pp. 1–5, 13–15 May 2015. doi:10.1109/ISNCC.2015.7238589
Abstract: The main goals of cryptography are for a sender and a receiver to be able to communicate in a way that is unintelligible to third parties, and for the authentication of messages to prove that they were not altered in transit. Both goals can be accomplished with provable security if sender and receiver are in possession of a shared secret key. This paper presents a software prototype of the Bechmann-Pasquinucci and Peres protocol for qutrits in two cases: with and without a cyber-attack (the intercept-resend attack). The presence of the enemy is determined by calculating the errors obtained at the end of transmission through the quantum channel. The Quantum Trit Error Rate (QTER) method for detecting the enemy can be applied to most key distribution systems, each system having its own acceptable error rate.
Keywords: cryptographic protocols; error statistics; message authentication; private key cryptography; quantum cryptography; Bechmann-Pasquinucci protocol; Peres protocol; QTER; cryptography; cyber-attack; intercept-resend attack; majority key distribution systems; message authentication; quantum channel; quantum trit error rate; qutrits; secret key; software-prototype; Error analysis; Protocols; Receivers; quantum computing (ID#: 15-7836)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7238589&isnumber=7238567
Clupek, V.; Malina, L.; Zeman, V., “Secure Digital Archiving in Post-Quantum Era,” in Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, vol., no., pp. 622–626, 9–11 July 2015. doi:10.1109/TSP.2015.7296338
Abstract: This article introduces a solution for secure digital archiving in the post-quantum era. The basic tool in secure digital archiving of electronic documents is the signature scheme, used in the creation of certificates and timestamps. This article addresses the security of signature schemes in the post-quantum era and introduces post-quantum signature schemes that will resist attacks by both conventional and quantum computers. Conventional signature schemes, based on factorization or the discrete logarithm problem, will be easy to break once Shor’s algorithm can be run on a quantum computer. The main contribution of this article is a proposed solution for secure digital archiving using secure post-quantum signature schemes.
Keywords: digital signatures; document handling; information retrieval systems; quantum computing; Shor algorithm; attack resistance; certificates; discrete logarithm problem; electronic document; factorization; quantum computer; secure digital archiving; secure post-quantum signature scheme; timestamps; Computers; Digital signatures; Lattices; Public key; Quantum computing; Digital Signature; Post-Quantum Cryptography; Secure Digital Archiving; Security (ID#: 15-7837)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296338&isnumber=7296206
Sasaki, H.; Matsumoto, R.; Uyematsu, T., “Key Rate of the B92 Quantum Key Distribution Protocol with Finite Qubits,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 696–699, 14–19 June 2015. doi:10.1109/ISIT.2015.7282544
Abstract: The key rate of the B92 quantum key distribution protocol with a finite number of qubits had not been reported prior to this research. We compute it using the security analysis framework proposed by Scarani and Renner in 2008.
Keywords: quantum cryptography; B92 quantum key distribution protocol; finite qubits; key rate; security analysis framework; Channel estimation; Convex functions; Estimation; Minimization; Protocols; Quantum computing; Security; B92; quantum key distribution (ID#: 15-7838)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282544&isnumber=7282397
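The finite-key B92 rate derived in the paper is considerably more involved, but the flavor of asymptotic key-rate bounds is easy to illustrate with the familiar Shor-Preskill-style bound r = 1 - 2h(Q) for BB84, which drops to zero near the well-known 11% QBER threshold. This is a hedged illustration of a related bound, not the paper's formula:

```python
import math

def binary_entropy(p):
    """Binary Shannon entropy h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def asymptotic_key_rate(qber):
    """Shor-Preskill asymptotic secret-key rate lower bound for BB84: r = 1 - 2*h(Q)."""
    return 1.0 - 2.0 * binary_entropy(qber)
```

Evaluating the bound at Q = 0.11 versus Q = 0.12 shows the sign change that gives rise to the ~11% tolerable-error threshold quoted for BB84.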
Xiaoqing Tan; Siting Cheng; Jin Li; Zhihong Feng, “Quantum Key Distribution Protocol Using Quantum Fourier Transform,” in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, vol., no., pp. 96–101, 24–27 March 2015. doi:10.1109/WAINA.2015.8
Abstract: A quantum key distribution protocol is proposed based on the discrete quantum Fourier transform. In our protocol, we perform a Fourier transform on each particle of the sequence to encode the qubits and insert sufficient decoy photons into the sequence to prevent eavesdropping. Furthermore, we prove the security of this protocol by showing its immunity to the intercept-measurement attack, the intercept-resend attack, and the entanglement-measurement attack. We then analyse the efficiency of the protocol: at about 25%, it is higher than that of many other protocols. The proposed protocol has the further advantage of being completely compatible with quantum computation and easier to realize in distributed quantum secure computation.
Keywords: cryptographic protocols; discrete Fourier transforms; quantum cryptography; discrete quantum Fourier transform; distributed quantum secure computation; eavesdropping; immunization; intercept-measurement attack; intercept-resend attack; quantum key distribution protocol; Atmospheric measurements; Fourier transforms; Particle measurements; Photonics; Protocols; Quantum computing; Security; Intercept-resend attack; Quantum Fourier transform; Quantum key distribution; Unitary operation (ID#: 15-7839)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096154&isnumber=7096097
Bos, J.W.; Costello, C.; Naehrig, M.; Stebila, D., “Post-Quantum Key Exchange for the TLS Protocol from the Ring Learning with Errors Problem,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 553–570, 17–21 May 2015. doi:10.1109/SP.2015.40
Abstract: Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. We demonstrate the practicality of post-quantum key exchange by constructing cipher suites for the Transport Layer Security (TLS) protocol that provide key exchange based on the ring learning with errors (R-LWE) problem, and we accompany these cipher suites with a rigorous proof of security. Our approach ties lattice-based key exchange together with traditional authentication using RSA or elliptic curve digital signatures: the post-quantum key exchange provides forward secrecy against future quantum attackers, while authentication can be provided using RSA keys that are issued by today’s commercial certificate authorities, smoothing the path to adoption. Our cryptographically secure implementation, aimed at the 128-bit security level, reveals that the performance price when switching from non-quantum-safe key exchange is not too high. With our R-LWE cipher suites integrated into the OpenSSL library and using the Apache web server on a 2-core desktop computer, we could serve 506 RLWE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KiB payload. Compared to elliptic curve Diffie-Hellman, this means a handshake size increase of 8 KiB and a reduction in throughput of only 21%. This demonstrates that provably secure post-quantum key exchange can already be considered practical.
Keywords: cryptographic protocols; digital signatures; public key cryptography; quantum cryptography; 2-core desktop computer; Apache Web server; R-LWE cipher suites; RLWE-ECDSA-AES128-GCM-SHA256 HTTPS; RSA keys; TLS protocol; authentication; commercial certificate authority; elliptic curve Diffie-Hellman; elliptic curve digital signatures; handshake size; lattice-based cryptographic primitives; lattice-based key exchange; nonquantum-safe key exchange; open SSL library; post-quantum key exchange; quantum attackers; quantum computers; ring learning with error problem; security level; transport layer security protocol; Authentication; Computers; Cryptography; Lattices; Protocols; Quantum computing; Transport Layer Security (TLS); key exchange; learning with errors; post-quantum (ID#: 15-7840)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163047&isnumber=7163005
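The hardness assumption behind these cipher suites can be illustrated with a toy (non-ring) LWE instance: samples (a, b) with b = ⟨a, s⟩ + e mod q hide the secret s behind small noise e. The function name and parameters below are illustrative toys at trivially small sizes, nothing like the ring structure or cryptographic dimensions used in the paper:

```python
import random

def lwe_samples(s, q, n_samples, noise, rng):
    """Generate toy LWE pairs (a, b) with b = <a, s> + e (mod q).

    s: secret vector of integers mod q; noise: list of admissible small
    error values e; rng: a random.Random instance for reproducibility.
    """
    n = len(s)
    out = []
    for _ in range(n_samples):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.choice(noise)
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        out.append((a, b))
    return out
```

With noise = [0] the samples are plain linear equations and s is recoverable by Gaussian elimination; adding even tiny errors is what makes recovering s (conjecturally) hard, including for quantum algorithms.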
Baokang Zhao; Ziling Wei; Bo Liu; Su Jinshu; You Ilsun, “Providing Adaptive Quality of Security in Quantum Networks,” in Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, vol., no., pp. 440–445, 19–20 Aug. 2015. doi: (not provided)
Abstract: Recently, several Quantum Key Distribution (QKD) networks, such as Tokyo QKD and SECOQC, have been built to evaluate quantum-based OTP (One Time Pad) secure communication. As an ideal unconditionally secure technique, OTP requires a key rate equal to the information rate. However, compared with high-speed information traffic (Gbps), the key generation rate of QKD is very low (Kbps). Therefore, in practical QKD networks, it is difficult to support numerous applications and multiple users simultaneously. To address this issue, we argue that it is more practical to provide quality of security instead of OTP in quantum networks. We further propose ASM, an Adaptive Security Selection Mechanism for quantum networks based on the Analytic Hierarchy Process (AHP). In ASM, services can select an appropriate encryption algorithm that satisfies the proper security level and performance metrics under the limit of the key generation rate. We also implement ASM on our RT-QKD platform and evaluate its performance. Experimental results demonstrate that ASM can select the optimal algorithm to meet the requirements of security and performance at an acceptable cost.
Keywords: Algorithm design and analysis; Analytic hierarchy process; Encryption; Information rates; Quantum computing; Real-time systems; Analytic Hierarchy Process; Quality of security; Quantum Key Distribution (ID#: 15-7841)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332609&isnumber=7332527
Pilaram, H.; Eghlidos, T., “An Efficient Lattice Based Multi-stage Secret Sharing Scheme,” in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, May 2015. doi:10.1109/TDSC.2015.2432800
Abstract: In this paper, we construct a lattice-based (t, n) threshold multi-stage secret sharing (MSSS) scheme according to Ajtai’s construction for one-way functions. In an MSSS scheme, the authorized subsets of participants can recover a subset of secrets at each stage while the other secrets remain undisclosed. In this paper, each secret is a vector from a t-dimensional lattice, and the basis of each lattice is kept private. A t-subset of n participants can recover the secret(s) using their assigned shares. Using a lattice-based one-way function, even after some secrets are revealed, the computational security of the unrecovered secrets is maintained against quantum computers. The scheme is multi-use in the sense that to share a new set of secrets, it is sufficient to renew some public information, so a new share distribution is not required. Furthermore, the scheme is verifiable, meaning that the participants can use public information to verify the shares received from the dealer and the secrets recovered by the combiner.
Keywords: Computers; Lattices; Public key; Quantum computing; Resistance; Lattice Based Cryptography; Multi-stage secret sharing; Multi-use secret sharing; Verifiability (ID#: 15-7842)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7110574&isnumber=4358699
Abushgra, A.; Elleithy, K., “Initiated Decoy States in Quantum Key Distribution Protocol by 3 Ways Channel,” in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, vol., no., pp. 1–5, 1–1 May 2015. doi:10.1109/LISAT.2015.7160178
Abstract: After decades of research, computer scientists have in recent years come close to substantive results proving the usability of quantum key distribution (QKD). Several QKD protocols and schemes have surfaced since the last century. Some of these protocols introduced new algorithms and have so far been proven secure; others are modifications of earlier protocols. This paper creates a new QKD scheme for communication between two parties that gives them a high level of security against well-known attacks while reducing their dependency on both classical communication and the classical channel.
Keywords: cryptographic protocols; quantum cryptography; 3 way channel; QKD protocols; quantum key distribution protocol; security protocols; Authentication; Photonics; Protocols; Quantum computing; EPR pair; Entanglement state; QKD attacks; Quantum key distribution (ID#: 15-7843)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160178&isnumber=7160171
M.Y. Abubakar; Low Tang Jung; Oi Mean Foong, “Two Channel Quantum Security Modelling Focusing on Quantum Key Distribution Technique,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7293032
Abstract: The work presented in this paper proposes to solve the existing issue of initial qubit (primary key) loss due to an eavesdropper’s attack, which raises the quantum bit error rate (QBER) and may leak enough information to the eavesdropper during secret key sharing in network communication. We intend to greatly reduce the QBER to a reasonable percentage that makes the key sharing communication more secure and effective. We use dual quantum channels instead of the traditional single quantum channel; the dual channels give an upper hand in reducing the chance of errors caused by an eavesdropper. Simulations were conducted varying the noise factor, which is a measure of the presence of an eavesdropper, and the results were compared between our proposed method and the single quantum channel model. Our method shows that almost half of the QBER is removed during the secret key sharing session.
Keywords: error statistics; private key cryptography; quantum cryptography; QBER; eavesdropper; initial qubit; network communication; quantum bit error rate; quantum key distribution technique; secret key sharing session; two channel quantum security modelling; Computers; Photonics; Protocols; Quantum computing; Quantum cryptography; Receivers (ID#: 15-7844)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293032&isnumber=7292885
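The effect the authors measure, QBER inflation caused by an eavesdropper, can be reproduced with a textbook intercept-resend simulation on a single BB84-style channel. This models the classical single-channel baseline, not the paper's dual-channel scheme; the standard analysis predicts a full intercept-resend attack pushes the sifted-key QBER toward 25%:

```python
import random

def intercept_resend_qber(n_qubits, rng):
    """Estimate sifted-key QBER under a full intercept-resend attack (BB84-style).

    Basis 0/1 stands for rectilinear/diagonal; a measurement in the wrong
    basis yields a uniformly random bit.
    """
    errors = sifted = 0
    for _ in range(n_qubits):
        a_bit, a_basis = rng.getrandbits(1), rng.getrandbits(1)
        # Eve measures in a random basis: correct bit iff bases match
        e_basis = rng.getrandbits(1)
        e_bit = a_bit if e_basis == a_basis else rng.getrandbits(1)
        # Bob measures Eve's resent qubit in his own random basis
        b_basis = rng.getrandbits(1)
        b_bit = e_bit if b_basis == e_basis else rng.getrandbits(1)
        if b_basis == a_basis:  # sifting: keep only matching Alice/Bob bases
            sifted += 1
            errors += (b_bit != a_bit)
    return errors / sifted
```

Running the simulation with a few tens of thousands of qubits gives an estimate clustered around the theoretical 25% QBER for this attack.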
Vidya, K.; Abinaya, A., “Secure Data Access Control for Multi-Authority Quantum Based Cloud Storage,” in Computing and Communications Technologies (ICCCT), 2015 International Conference on, vol., no., pp. 387–391, 26–27 Feb. 2015. doi:10.1109/ICCCT2.2015.7292781
Abstract: An efficient way of ensuring security in the cloud is to provide secure data access control on untrusted cloud servers. Hence, to improve security, a new system can be introduced, a Quantum Security Scheme, which invokes quantum gates for encryption. Quantum cryptography has been developing rapidly owing to the efficient service it provides through key generation and key distribution. Quantum Ciphertext-Policy Attribute-Based Encryption (QCP-ABE) is a promising technique for data access control on encrypted data. The scheme also achieves mutual authentication among the authorities involved in the system, as well as both forward and backward security.
Keywords: authorisation; cloud computing; message authentication; quantum cryptography; quantum gates; storage management; QCP-ABE; backward security; cloud security; data access control security; forward security; key distribution; key generation; multiauthority quantum based cloud storage; mutual authentication; quantum ciphertext-policy attribute based encryption; quantum cryptography; quantum gates; quantum security scheme; untrusted cloud server; Cloud computing; Encryption; Logic gates; Quantum computing; Servers; Attribute based encryption; QCP-ABE; Quantum cryptography; data access control (ID#: 15-7845)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292781&isnumber=7292708
Verbauwhede, I.; Balasch, J.; Roy, S.S.; Van Herrewege, A., “24.1 Circuit Challenges from Cryptography,” in Solid-State Circuits Conference (ISSCC) — Digest of Technical Papers, 2015 IEEE International, vol., no., pp. 1–2, 22–26 Feb. 2015. doi:10.1109/ISSCC.2015.7063109
Abstract: Implementing cryptography and security in integrated circuits is in some ways similar to applications in other fields. We have to worry about comparable optimization goals: area, power, energy, throughput and/or latency. Moore’s law helps to attain these goals. However, it also gives attackers more computational power to break cryptographic algorithms. On top of this, quantum computers may soon become a reality, so novel, very computationally demanding “post-quantum” cryptographic algorithms need implementation. Finally, there is a third dimension to the problem: implementations have to be resistant against physical attacks, and countermeasures increase the cost. This paper demonstrates with actual data how these conflicting challenges are being addressed.
Keywords: optimisation; quantum computing; quantum cryptography; Moore law; circuit challenges; cryptographic algorithms; cryptography; integrated circuit security; optimization goals; quantum computers; CMOS integrated circuits; Cryptography; Field programmable gate arrays; Polynomials; Random access memory; Resistance (ID#: 15-7846)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7063109&isnumber=7062838
Brumen, B.; Taneski, V., “Moore’s Curse on Textual Passwords,” in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, vol., no., pp. 1360–1365, 25–29 May 2015. doi:10.1109/MIPRO.2015.7160486
Abstract: Passwords are still the predominant way of authentication in information systems, and are mostly the user’s responsibility. Users conceive, use, re-use, abuse and forget passwords. In the absence of strict password policies and with minimal user training, passwords tend to be short, easy to remember, connected to the user’s personal or professional life, and consequently easy to break. An additional problem with passwords is their aging: Moore’s law increases the computing power available to crack passwords, and those deemed secure today may easily be broken in the near future. The aim of this paper is to study various scenarios of the effect Moore’s law has on passwords and their security. In addition, advancements in other fields, e.g., quantum computing and the Internet of Things, are taken into account. We analyzed various password types and the lengths required to withstand an off-line brute-force attack. The analysis was performed under various scenarios and combinations thereof: Moore’s law will continue to hold for years to come with varying parameters, quantum computing will become feasible, improvements in hash table computations will speed up the cracking process, and others. Results: the paper shows the minimum password length in characters for each password type under various scenarios. Even the most optimistic scenario shows that the minimum required password length today should be 11 randomly drawn characters, rendering most passwords inappropriate due to their poor memorability. Current textual passwords are cursed by Moore’s law and other advancements in the field. Soon, classical textual passwords will need to be replaced by other mechanisms, which, fortunately, are already emerging.
Keywords: message authentication; Internet of Things; Moore curse; authentication; information systems; offline brute-force attack; password types; personal life; professional life; quantum computing; textual passwords; user responsibility; Computational modeling; Hardware; Presses; Psychology; Security; US Department of Transportation (ID#: 15-7847)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160486&isnumber=7160221
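The arithmetic behind such scenarios is straightforward to sketch: pick a guessing rate today, grow it with a Moore's-law doubling period, and find the smallest password length whose keyspace still exceeds the attacker's budget. The function and its default parameters below are illustrative assumptions, not the paper's model; with a 62-character alphanumeric alphabet and a hypothetical 10^12 guesses/s attacker given a one-year budget, this toy calculation also lands at 11 characters today.

```python
import math

def min_password_length(alphabet_size, guesses_per_sec_now, years_ahead,
                        doubling_years=2.0, attack_seconds=365 * 24 * 3600):
    """Smallest length L such that alphabet_size**L exceeds the number of
    guesses an attacker can make in `attack_seconds`, after the guessing
    rate has doubled every `doubling_years` years for `years_ahead` years."""
    future_rate = guesses_per_sec_now * 2 ** (years_ahead / doubling_years)
    guesses_available = future_rate * attack_seconds
    return math.ceil(math.log(guesses_available, alphabet_size))
```

Projecting the same attacker ten years ahead (a 32-fold speedup under a two-year doubling period) adds a character to the requirement, which is the aging effect the paper studies.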
Omer K. Jasim Mohammad; S. Abbas; El-Horbaty, E.-S.M.; Salem, A.-B.M., “Securing Cloud Computing Environment Using a New Trend of Cryptography,” in Cloud Computing (ICCC), 2015 International Conference on, vol., no., pp. 1–8, 26–29 April 2015. doi:10.1109/CLOUDCOMP.2015.7149654
Abstract: Cloud computing is internet-based computing in which shared resources, software, and information are provided to consumers on demand. It guarantees a way to share distributed resources and services that belong to different organizations. In order to build a secure cloud environment, data security and cryptography must be assured when sharing data across the distributed environment. This paper provides a more flexible and secure communication environment by deploying a new cryptographic service. The service combines Quantum Key Distribution (QKD) with an enhanced version of the Advanced Encryption Standard (AES). Moreover, it solves the key distribution and key management problems in the cloud environment through its two implemented modes, on-line and off-line.
Keywords: cloud computing; quantum cryptography; AES; Internet-based computing; QKD; advanced encryption standard; cloud computing environment security; consumer on-demand; cryptographic service; cryptography; data security; distributed environment; information sharing; key management problems; off-line modes; on-line modes; quantum key distribution; resource sharing; software sharing; Algorithm design and analysis; Cloud computing; Computational modeling; Encryption (ID#: 15-7848)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149654&isnumber=7149613
Aysu, A.; Schaumont, P., “Precomputation Methods for Hash-based Signatures on Energy-Harvesting Platforms,” in Computers, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, November 2015. doi:10.1109/TC.2015.2500570
Abstract: Energy-harvesting techniques can be combined with wireless embedded sensors to obtain battery-free platforms with an extended lifetime. Although energy-harvesting offers a continuous supply of energy, the delivery rate is typically limited to a few Joules per day. This is a severe constraint on the achievable computing throughput of the embedded sensor node and on the achievable latency of applications running on those nodes. In this paper, we address these constraints with precomputation. The idea is to reduce the amount of computation required in response to application inputs by partitioning the algorithm into an offline part, computed before the inputs are available, and an online part, computed in response to the actual input. We show that this technique works well on hash-based cryptographic signatures, which require a complex key generation for each new message to be signed. By precomputing the key material and storing it as run-time coupons in non-volatile memory, there is a drastic reduction of the run-time energy needed for a signature, and a drastic reduction of the run-time latency to generate it. For a Winternitz hash-based scheme at the 84-bit quantum security level on an MSP430 microcontroller, we measured a run-time energy reduction by a factor of 11.9 and a run-time latency reduction by a factor of 23.5.
Keywords: Energy consumption; Optimization; Public key; Sensors; Supercapacitors; Yttrium; Energy Harvesting Platforms; Hashbased Signatures; Precomputation (ID#: 15-7849)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328726&isnumber=4358213
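The offline/online split the authors exploit can be illustrated with a Lamport one-time signature, a simpler relative of the Winternitz scheme they use: all of the expensive random key generation happens offline as a "coupon", while signing online merely selects one precomputed secret per digest bit. This is a hedged sketch of the general idea, not the paper's implementation:

```python
import hashlib
import os

def H(data):
    """SHA-256 digest, used both to hash messages and to commit to secrets."""
    return hashlib.sha256(data).digest()

def precompute_coupon():
    """Offline phase: generate one-time key material before any message exists.

    256 pairs of random secrets (one pair per digest bit), plus their hashes
    as the public key. On an energy-harvesting node this is the part that
    would be stored in non-volatile memory as a coupon.
    """
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def sign(sk, message):
    """Online phase: cheap; reveal the precomputed secret for each digest bit."""
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, message, sig):
    """Check that each revealed secret hashes to the committed public value."""
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))
```

Security rests only on the hash function, which is why such schemes are considered post-quantum; each coupon must be used for exactly one message.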
Mingwu Zhang; Yudi Zhang; Yixin Su; Qiong Huang; Yi Mu, “Attribute-Based Hash Proof System Under Learning-with-Errors Assumption in Obfuscator-Free and Leakage-Resilient Environments,” in Systems Journal, IEEE, vol. PP, no. 99, pp. 1–9, July 2015. doi:10.1109/JSYST.2015.2435518
Abstract: Node attributes such as MAC and IP addresses, and even GPS position, can be considered exclusive identities in distributed networks such as cloud computing platforms, wireless body area networks, and the Internet of Things. Nodes can exchange or transmit important information in these networks. However, with the openness and exposure of nodes in the networks, the communications between nodes face many security issues. In particular, sensitive information may be leaked to attackers in the presence of side-channel attacks, memory leakages, and timing attacks. In this paper, we present a new notion of attribute-based hash proof system (AB-HPS) in the bounded key-leakage model, designed to resist possible quantum attackers. The notion of AB-HPS is attractive and powerful, and can be considered an implicit proof of membership for languages. We give a construction of AB-HPS in lattices and prove the indistinguishability of valid and invalid ciphertexts, as well as leakage smoothness, under the decisional learning-with-errors assumption. We also provide a general leakage-resilient attribute-based encryption construction using AB-HPS as the primitive, without an indistinguishability obfuscator. Finally, we discuss some extensions that improve the schemes with a larger message space, a larger alphabet for attributes, and arbitrary access structures for policies, and we give a performance evaluation in both theoretical analysis and practical computation.
Keywords: Encryption; Games; Lattices; Random variables; Zinc; Attribute-based encryption (ABE); hash proof system (HPS); lattice-based cryptosystem; leakage resilience; learning with errors (ID#: 15-7850)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145396&isnumber=4357939
Verma, K.K.; Kumar, P.; Tomar, A., “Analysis of Moving Object Detection and Tracking in Video Surveillance System,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1758–1762,
11–13 March 2015. doi: (not provided)
Abstract: In real-world applications, video security is becoming increasingly important due to unwanted events occurring in our surroundings. Moving object detection is a challenging task in low-resolution video, under variable lighting conditions, and in crowded areas, because of the limitations of pattern recognition techniques, which lose many important details in the visual appearance of the moving object. In this paper we present a review of unusual event detection in video surveillance systems. Video surveillance systems may be used to enhance security in organizations, academic institutions, and many other areas.
Keywords: object detection; object tracking; security of data; video surveillance; moving object detection; moving object tracking; pattern recognition techniques; unusual event detection; video security; video surveillance system; visual appearance; Cameras; Event detection; Object detection; Object tracking; Video surveillance; Unusual event detection; Video surveillance; low resolution video; moving object detection; variable lighting conditions (ID#: 15-7851)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100549&isnumber=7100186
Chahar, U.S.; Chatterjee, K., “A Novel Differential Phase Shift Quantum Key Distribution Scheme for Secure Communication,” in Computing and Communications Technologies (ICCCT), 2015 International Conference on, vol., no.,
pp. 156–159, 26–27 Feb. 2015. doi:10.1109/ICCCT2.2015.7292737
Abstract: Quantum key distribution is used for secure communication between two parties through the generation of a secret key. Differential Phase Shift Quantum Key Distribution (DPS-QKD) is a new and unique QKD protocol that differs from traditional ones, providing simplicity and practicality. This paper presents a Delay Selected DPS-QKD scheme that uses a weak coherent pulse train and features a simple configuration and efficient use of the time domain. All detected photons contribute to the secure key bits, resulting in higher key creation efficiency.
Keywords: cryptographic protocols; differential phase shift keying; quantum cryptography; telecommunication security; time-domain analysis; QKD protocol; coherent pulse train; delay selected DPS-QKD scheme; differential phase shift quantum key distribution scheme; secret key generation; secure communication; secure key bits; time domain analysis; Delays; Detectors; Differential phase shift keying; Photonics; Protocols; Security; Differential Phase Shift; Differential phase shift keying protocol; Quantum Key Distribution (ID#: 15-7852)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292737&isnumber=7292708
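The key-generation principle behind DPS-QKD (each pulse carries a random phase of 0 or π, and a photon detection at slot t reveals whether the phases of pulses t-1 and t agree, yielding one shared key bit) can be illustrated with a toy, non-physical simulation. The detection model and function names below are our own simplification, not the authors' Delay Selected scheme:

```python
import random

def dps_toy_key(num_pulses, detect_prob=0.1, seed=0):
    """Toy DPS-QKD model: Alice modulates each weak pulse with a random
    phase (0 -> bit 0, pi -> bit 1); Bob's interferometer compares adjacent
    pulses, so each (rare) detection at slot t yields the shared key bit
    phase[t] XOR phase[t-1]."""
    rng = random.Random(seed)
    phases = [rng.randrange(2) for _ in range(num_pulses)]
    alice_key, bob_key = [], []
    for t in range(1, num_pulses):
        if rng.random() < detect_prob:        # a photon is detected at slot t
            bit = phases[t] ^ phases[t - 1]   # phase difference of adjacent pulses
            alice_key.append(bit)             # Alice knows it from her modulation record
            bob_key.append(bit)               # Bob knows it from which detector clicked
    return alice_key, bob_key
```

Because every detection event contributes a key bit, the key rate scales with the detection probability rather than with a basis-sifting factor, which is the efficiency property the abstract highlights.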
Hussain, W.; Hussain, F.K.; Hussain, O.K., “Transmitting Scalable Video Streaming over Wireless Ad Hoc Networks,” in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, vol., no., pp. 201–206,
24–27 March 2015. doi:10.1109/AINA.2015.186
Abstract: Due to the rapid increase in the use of social networking websites and applications, the need to stream video over wireless networks has increased. There are a number of considerations when transmitting streaming video between the nodes connected through wireless networks, such as throughput, the size of the multimedia file, response time, delay, scalability and loss of data. The scalability of ad-hoc networks needs to be analyzed by considering various aspects, such as self-organization, security, routing flexibility, availability of bandwidth, data distribution, Quality of Service, throughput, response time and efficiency. In this paper, we discuss the existing approaches to multimedia routing and transmission over wireless ad-hoc networks by considering scalability. The study draws several conclusions and makes recommendations for future directions.
Keywords: ad hoc networks; quality of service; routing protocols; social networking (online); telecommunication security; video streaming; ad-hoc routing protocols; bandwidth availability; data distribution; data loss; delay; flexibility; multimedia file size; quality-of-service; response time; scalable video streaming transmission; security; self-organization; social networking Websites; throughput; wireless ad-hoc networks; Ad hoc networks; Mobile computing; Routing; Routing protocols; Scalability; Streaming media; Ad-hoc routing protocols; Scalability (ID#: 15-7853)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097971&isnumber=7097928
Daojing He; Chan, S.; Guizani, M., “Mobile Application Security: Malware Threats and Defenses,” in Wireless Communications, IEEE, vol. 22, no. 1, pp. 138–144, February 2015. doi:10.1109/MWC.2015.7054729
Abstract: Due to the quantum leap in functionality, traditional mobile phones are being upgraded to smartphones at a tremendous rate. One of the most attractive features of smartphones is the availability of a large number of apps for users to download and install. However, it also means hackers can easily distribute malware to smartphones and launch various attacks. This issue should be addressed by both preventive approaches and effective detection techniques. This article first discusses why smartphones are vulnerable to security attacks. It then presents the malicious behavior and threats of malware, and reviews existing malware prevention and detection techniques. Beyond further research in these directions, it points out the efforts required from app developers, app store administrators, and users to defend against such malware.
Keywords: computer crime; invasive software; mobile computing; smart phones; telecommunication security; app store administrators; hackers; malicious behavior; malware defenses; malware detection techniques; malware prevention techniques; malware threats; mobile application security; mobile phones; preventive approaches; security attacks; smartphones; Computer hacking; Electronic mail; Mobile communication; Smart phones; Spyware (ID#: 15-7854)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054729&isnumber=7054706
Rührmair, U.; Martinez-Hurtado, J.L.; Xiaolin Xu; Kraeh, C.; Hilgers, C.; Kononchuk, D.; Finley, J.J.; Burleson, W.P., “Virtual Proofs of Reality and Their Physical Implementation,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no.,
pp. 70–85, 17–21 May 2015. doi:10.1109/SP.2015.12
Abstract: We discuss the question of how physical statements can be proven over digital communication channels between two parties (a “prover” and a “verifier”) residing in two separate local systems. Examples include: (i) “a certain object in the prover’s system has temperature X°C”, (ii) “two certain objects in the prover’s system are positioned at distance X”, or (iii) “a certain object in the prover’s system has been irreversibly altered or destroyed”. As illustrated by these examples, our treatment goes beyond classical security sensors in considering more general physical statements. Another distinctive aspect is the underlying security model: We neither assume secret keys in the prover’s system, nor do we suppose classical sensor hardware in his system which is tamper-resistant and trusted by the verifier. Without an established name, we call this new type of security protocol a “virtual proof” of reality or simply a “virtual proof” (VP). In order to illustrate our novel concept, we give example VPs based on temperature sensitive integrated circuits, disordered optical scattering media, and quantum systems. The corresponding protocols prove the temperature, relative position, or destruction/modification of certain physical objects in the prover’s system to the verifier. These objects (so-called “witness objects”) are prepared by the verifier and handed over to the prover prior to the VP. Furthermore, we verify the practical validity of our method for all our optical and circuit-based VPs in detailed proof-of-concept experiments. Our work touches upon, and partly extends, several established concepts in cryptography and security, including physical unclonable functions, quantum cryptography, interactive proof systems, and, most recently, physical zero-knowledge proofs. We also discuss potential advancements of our method, for example “public virtual proofs” that function without exchanging witness objects between the verifier and the prover.
Keywords: cryptographic protocols; private key cryptography; quantum cryptography; trusted computing; circuit-based VP; digital communication channels; disordered optical scattering media; interactive proof systems; optical-based VP; physical implementation; physical statements; physical unclonable functions; physical zero-knowledge proofs; proof-of-concept experiments; prover system; public virtual proofs; quantum systems; secret keys; security protocol; temperature sensitive integrated circuits; virtual proof of reality; witness objects; Cryptography; Protocols; Temperature distribution; Temperature measurement; Temperature sensors; Interactive Proof Systems; Keyless Security Sensors; Physical Cryptography; Physical Unclonable Functions (PUFs); Physical Zero-Knowledge Proofs; Quantum Cryptography; Virtual Proofs (VPs) of Reality (ID#: 15-7855)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163019&isnumber=7163005
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Safe Coding Guidelines 2015 |
Coding standards encourage programmers to follow a set of uniform rules and guidelines determined by the requirements of the project and organization, rather than by the programmer’s personal familiarity or preference. Developers and software designers apply these coding standards during software development to create secure systems. The development of secure coding standards is a work in progress by security researchers, language experts, and software developers. The articles cited here cover topics related to the Science of Security hard problems of resilience, metrics, human factors, and policy-based governance. They were presented in 2015.
Sodanil, M.; Porrawatpreyakorn, N.; Quirchmayr, G.; Tjoa, A.M., “A Knowledge Transfer Framework for Secure Coding Practices,” in Computer Science and Software Engineering (JCSSE), 2015 12th International Joint Conference on, vol., no.,
pp. 120–125, 22–24 July 2015. doi:10.1109/JCSSE.2015.7219782
Abstract: Building a secure software product requires an understanding of security principles and secure coding guidelines for the programming languages used, in order to develop safe, reliable, and secure systems in the software development process. Effective knowledge transfer is therefore essential to a successful secure software development project. This paper proposes a knowledge transfer framework for secure coding practices, with guidance for the development of secure software products, and shows how the framework could be applied in the telecommunication industry. A set of knowledge transfer activities aligned with secure coding is specified. Finally, implementing such a knowledge transfer framework for secure coding practices could mitigate at least the most common mistakes in software development processes.
Keywords: programming languages; security of data; software engineering; knowledge transfer activities; knowledge transfer framework; secure coding practices; secure software development project; secure software product; software development processes; telecommunication industry; Communications technology; Encoding; Knowledge transfer; Privacy; Security; Software; Standards; Knowledge Transfer Framework; Secure Coding Practices; Security Vulnerable; Software Engineering (ID#: 15-7806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219782&isnumber=7219755
Liu, S.; Qu, Q.; Chen, L.; Ni, L.M., “SMC: A Practical Schema for Privacy-Preserved Data Sharing over Distributed Data Streams,” in Big Data, IEEE Transactions on, vol. 1, no. 2, pp. 68–81, June 1 2015.
doi:10.1109/TBDATA.2015.2498156
Abstract: Data collection must be safe and efficient, considering both data privacy and system performance. In this paper, we study a new problem: distributed data sharing with privacy-preserving requirements. Given a data demander requesting data from multiple distributed data providers, the objective is to enable the data demander to access the distributed data without learning the private data of any individual provider. The problem is challenged by two questions: how to transmit the data safely and accurately, and how to handle data streams efficiently. As a first study, we propose a practical method, Shadow Coding, to preserve privacy in data transmission and ensure recovery in data collection, which achieves privacy-preserving computation in a data-recoverable, efficient, and scalable way. We also provide practical techniques to make Shadow Coding efficient and safe for data streams. An extensive experimental study on a large-scale real-life dataset offers insight into the performance of our schema. The proposed schema has also been implemented as a pilot system in a city to collect distributed mobile phone data.
Keywords: Base stations; Big data; Data privacy; Distributed databases; Encoding; Mobile handsets; Distributed data streams; data mining; distributed data sharing; privacy preserving; shadow coding (ID#: 15-7807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321000&isnumber=7153538
Prabhakar, B.; Reddy, D.K., “Analysis of Video Coding Standards Using PSNR and Bit Rate Saving,” in Signal Processing and Communication Engineering Systems (SPACES), 2015 International Conference on, vol., no., pp. 306–308, 2–3 Jan. 2015. doi:10.1109/SPACES.2015.7058271
Abstract: This paper deals with the performance comparison of several video coding standards by means of peak signal-to-noise ratio and subjective testing, using rate distortion (RD) curves and average bit-rate savings. A common procedure is applied to the video coding standards H.265/High Efficiency Video Coding (HEVC), H.264/MPEG4-Advanced Video Coding (AVC), MPEG4V2, MPEG4, and the Google video codecs VP8 and VP9, and Peak Signal-to-Noise Ratios (PSNR) are estimated at different bit-rates. An average bit-rate reduction of about 50% is achieved compared to earlier video coding standards. The results illustrate that H.265/HEVC achieves a high peak signal-to-noise ratio at low bit rates, reflecting higher coding efficiency compared with earlier video coding standards.
Keywords: code standards; error statistics; rate distortion theory; video codecs; video coding; AVC; Google video codec; HEVC; PSNR; advance video coding; bit rate saving; high coding efficiency; peak signal-to-noise ratio; rate distortion curves; video coding standards; Bit rate; Encoding; MPEG 4 Standard; Transform coding; Video coding; H.264/MPEG4-AVC; H.265/HEVC; MPEG4; MPEG4V2; RD-curves; VP8; VP9; bit-rate (ID#: 15-7808)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058271&isnumber=7058196
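PSNR, the objective metric used throughout such codec comparisons, is derived from the mean squared error between a reference frame and its reconstruction. A minimal sketch (our own helper, not the paper's test harness):

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and its
    reconstruction (arrays of the same shape, 8-bit sample range by default)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: lossless reconstruction
    return 10.0 * np.log10(max_val ** 2 / mse)
```

An average bit-rate saving is then read off by comparing the bit rates at which two codecs reach the same PSNR on their RD curves.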
Jokinen, E.; Lecomte, J.; Schinkel-Bielefeld, N.; Bäckström, T., “Intelligibility Evaluation of Speech Coding Standards in Severe Background Noise and Packet Loss Conditions,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no., pp. 5152–5156, 19–24 April 2015. doi:10.1109/ICASSP.2015.7178953
Abstract: Speech intelligibility is an important aspect of speech transmission but often when speech coding standards are compared only the quality is evaluated using perceptual tests. In this study, the performance of three wideband speech coding standards, adaptive multi-rate wideband (AMR-WB), G.718, and enhanced voice services (EVS), is evaluated in a subjective intelligibility test. The test covers different packet loss conditions as well as a near-end background noise condition. Additionally, an objective quality evaluation in different packet loss conditions is conducted. All of the test conditions extend beyond the specification range to evaluate the attainable performance of the codecs in extreme conditions. The results of the subjective tests show that both EVS and G.718 are better in terms of intelligibility than AMR-WB. EVS attains the same performance as G.718 with lower algorithmic delay.
Keywords: speech coding; AMR-WB; EVS; adaptive multirate wideband; background noise; enhanced voice services; intelligibility evaluation; near end background noise condition; packet loss conditions; speech coding standards; speech intelligibility; speech transmission; Codecs; Noise; Packet loss; Speech; Speech coding; Standards; G.718; Speech intelligibility; adaptive multi-rate wideband; packet loss concealment (ID#: 15-7809)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178953&isnumber=7177909
Panichella, S.; Arnaoudova, V.; Di Penta, M.; Antoniol, G., “Would Static Analysis Tools Help Developers with Code Reviews?,” in Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, vol., no., pp. 161–170, 2–6 March 2015. doi:10.1109/SANER.2015.7081826
Abstract: Code reviews have been conducted for decades in software projects, with the aim of improving code quality from many different points of view. During code reviews, developers are supported by checklists, coding standards and, possibly, various kinds of static analysis tools. This paper investigates whether warnings highlighted by static analysis tools are addressed during code reviews and whether certain kinds of warnings tend to be removed more than others. Results of a study conducted by mining the Gerrit repository of six Java open source projects indicate that the density of warnings varies only slightly after each review. The overall percentage of warnings removed during reviews is slightly higher than what previous studies found for the overall project evolution history. However, when looking (quantitatively and qualitatively) at specific categories of warnings, we found that during code reviews developers focus on certain kinds of problems. For such categories of warnings the removal percentage tends to be very high, often above 50% and sometimes up to 100%. Examples are warnings in the imports, regular expressions, and type resolution categories. In conclusion, while broad warning detection might produce far too many false positives, enforcing the removal of certain warnings prior to patch submission could reduce the effort required during the code review process.
Keywords: Java; project management; software management; software quality; software tools; Gerrit repository; Java open source projects; broad warning detection; code quality; code review process; code reviews developers; coding standards; patch submission; project evolution history; software projects; static analysis tools; Context; Data mining; Encoding; History; Software; Standards; Code Review; Empirical Study; Mining Software Repositories; Static Analysis (ID#: 15-7810)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081826&isnumber=7081802
Takhma, Y.; Rachid, T.; Harroud, H.; Abid, M.R.; Assem, N., “Third-Party Source Code Compliance Using Early Static Code Analysis,” in Collaboration Technologies and Systems (CTS), 2015 International Conference on, vol., no., pp. 132–139, 1–5 June 2015. doi:10.1109/CTS.2015.7210413
Abstract: This paper presents a generic tool for static code analysis for the MyIC Phone developer community. Its major aim is to verify, early in the development cycle, the compliance of third-party software with the MyIC Phone platform coding standards, ensuring successful deployment through the MyIC Phone App Store. Built as an extendable Eclipse plug-in, our tool facilitates the collaborative software acceptance tests imposed by the target platform provider. Our approach to code compliance is based on static code analysis, which consists in constructing an abstract model of the source code of the application under analysis. The abstract model is then traversed to find potential non-compliances based on rules defined by the platform provider, which are distributed as XML files and loaded by the developer into the Eclipse environment upon project instantiation. The results of the analysis are presented in a tree view with the offending code lines highlighted so they can be easily accessed by the developer. Statistics on conformity with the rules are calculated and displayed in a pie chart for the developer's consideration.
Keywords: XML; conformance testing; program diagnostics; program testing; software quality; software tools; source code (software); Eclipse environment; MyIC Phone App Store; MyIC Phone developer community; MyIC Phone platform coding standards; XML files; abstract model; collaborative software acceptance tests; early static code analysis; extendable Eclipse plug-in; line code; target platform provider; third-party software compliance; third-party source code compliance; tree view; Algorithm design and analysis; Analytical models; Decision support systems; Encoding; Software quality; Standards; Collaborative acceptance tests; Software compliance; Software conformity; Software quality; Software verification; Static code analysis (ID#: 15-7811)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210413&isnumber=7210375
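The rule-distribution mechanism described in that abstract (compliance rules shipped as XML files and matched against the source under analysis) can be sketched generically. The rule schema, rule IDs, and patterns below are invented for illustration; the actual MyIC Phone rules and the plug-in's abstract-model traversal are not public, so this sketch matches raw source lines instead:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical rule file in the spirit of provider-distributed XML rules.
RULES_XML = """
<rules>
  <rule id="NC-001" severity="error" message="Use of Thread is forbidden on this platform">
    <pattern>\\bnew\\s+Thread\\b</pattern>
  </rule>
  <rule id="NC-002" severity="warning" message="System.out logging is non-compliant">
    <pattern>System\\.out\\.print</pattern>
  </rule>
</rules>
"""

def load_rules(xml_text):
    """Parse compliance rules (id, severity, message, regex pattern) from XML."""
    rules = []
    for node in ET.fromstring(xml_text).findall("rule"):
        rules.append({
            "id": node.get("id"),
            "severity": node.get("severity"),
            "message": node.get("message"),
            "regex": re.compile(node.findtext("pattern")),
        })
    return rules

def check_source(source, rules):
    """Return (line number, rule id) for every non-compliant source line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule in rules:
            if rule["regex"].search(line):
                findings.append((lineno, rule["id"]))
    return findings
```

Loading the rules at project instantiation, as the tool does, lets the platform provider update its standards without shipping a new analyzer.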
Luheng Jia; Chi-ying Tsui; Oscar C. Au; Amin Zheng, “A Fast Variable Block Size Motion Estimation Algorithm with Refined Search Range for a Two-Layer Data Reuse Scheme,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 1206–1209, 24–27 May 2015. doi:10.1109/ISCAS.2015.7168856
Abstract: Motion estimation (ME) serves as a key tool in a variety of video coding standards. With the increasing need for higher-resolution video formats, limited memory bandwidth becomes a bottleneck for ME implementation. The huge data loading from external memory to the on-chip memory and the frequent data fetching from the on-chip memory to the ME engine are two major problems. To reduce both off-chip and on-chip memory bandwidth, we propose a two-layer data reuse scheme. On the macroblock (MB) layer, an advanced Level C data reuse scheme is presented. It employs two cooperating on-chip caches that load data in a novel local-snake scanning manner. On the block layer, we propose a fast variable block size motion estimation with a refined search window (RSW-VBSME). A new approach for hardware implementation of VBSME is then employed based on the fast algorithm. Instead of obtaining the SADs of all the modes at the same time, the ME for different block sizes is performed separately, enabling higher data reusability within an MB. The two-layer data reuse scheme achieves a more than 90% reduction of off-chip memory bandwidth with a slight increase in on-chip memory size. Moreover, the on-chip memory bandwidth is also greatly reduced compared with other reuse methods with different VBSME implementations.
Keywords: cache storage; microprocessor chips; motion estimation; video coding; RSW-VBSME; advanced Level C data reuse scheme; block layer; cooperating on-chip caches; external memory; fast variable block size motion estimation algorithm; frequent data fetching; huge data loading; limited memory bandwidth; local-snake scanning manner; macroblock layer; off-chip memory bandwidth reduction; on-chip memory bandwidth reduction; refined search range; refined search window; two-layer data reuse scheme; video coding standards; video format; Bandwidth; Loading; Manganese; Motion estimation; Strips; System-on-chip; Video coding; VBSME; data reuse; memory bandwidth (ID#: 15-7812)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168856&isnumber=7168553
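The quantity being reused in such schemes is the sum of absolute differences (SAD) computed for every candidate position in a search window. A plain full-search reference (our own naming, no data reuse) shows where the bandwidth pressure comes from: every candidate window re-reads overlapping reference pixels.

```python
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block.astype(np.int32) - candidate.astype(np.int32)).sum())

def full_search(cur_block, ref_frame, top, left, radius):
    """Exhaustive search over a (2*radius+1)^2 window in the reference frame;
    returns the motion vector (dy, dx) with minimum SAD and its cost."""
    h, w = cur_block.shape
    best_mv, best_cost = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            # skip candidates that fall outside the reference frame
            if 0 <= y and 0 <= x and y + h <= ref_frame.shape[0] and x + w <= ref_frame.shape[1]:
                cost = sad(cur_block, ref_frame[y:y + h, x:x + w])
                if cost < best_cost:
                    best_mv, best_cost = (dy, dx), cost
    return best_mv, best_cost
```

Adjacent candidate windows differ by a single pixel column or row, which is exactly the overlap that Level C schemes keep in on-chip caches instead of refetching.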
Taeyoung Na; Sangkwon Na; Kiwon Yoo, “A Probabilistic-Based CU Size Pre-Determination Method for Parallel Processing of HEVC Encoders,” in Consumer Electronics (ICCE), 2015 IEEE International Conference on, vol., no., pp. 327–330, 9–12 Jan. 2015. doi:10.1109/ICCE.2015.7066432
Abstract: The advent of the state-of-the-art video coding standard High Efficiency Video Coding (HEVC) is expected to bring great changes to the fields of broadcasting, storage, and communications. HEVC achieves higher coding gains than previous video coding standards in terms of rate-distortion (R-D) performance through various improved coding tools. This leads to heavy computational complexity and cost for HEVC encoders, imposing strong restrictions especially on hardware encoders, which are preferred for real-time applications and services. In particular, the quad-tree based coding unit (CU) structures with various sizes are known to contribute to the high coding gains of HEVC. However, R-D cost calculation for mode decision with all CU sizes normally cannot be performed in hardware HEVC encoders operating in real time. To overcome this, a CU size pre-determination method based on a probabilistic decision model, suited to hardware HEVC encoder implementations, is proposed in this paper. All available CU sizes are checked before inter prediction, and unnecessary CU sizes are excluded from inter prediction according to the decision model. Inter prediction with the reduced number of CU sizes can then be performed in parallel with pipelined structures. The experimental results show that the proposed method effectively determines the necessary CU sizes with negligible coding loss in BD-BR: 1.57% for the LD (low-delay) coding structure and 1.08% for the RA (random access) coding structure, respectively.
Keywords: probability; quadtrees; video coding; HEVC encoders; computational complexity; high efficiency video coding; parallel processing; probabilistic decision model; probabilistic-based CU size pre-determination method; quad-tree based coding unit structures; rate-distortion performance; Complexity theory; Conferences; Consumer electronics; Encoding; Probabilistic logic; Standards; Video coding (ID#: 15-7813)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066432&isnumber=7066289
Siwei Ma; Tiejun Huang; Wen Gao, “The Second Generation IEEE 1857 Video Coding Standard,” in Signal and Information Processing (ChinaSIP), 2015 IEEE China Summit and International Conference on, vol., no., pp. 171–175, 12–15 July 2015. doi:10.1109/ChinaSIP.2015.7230385
Abstract: A new-generation video coding standard developed by the IEEE 1857 working group will be published as IEEE 1857.4. It builds on the first-generation IEEE 1857 video coding standard, IEEE Std 1857-2013 (IEEE 1857.1), and targets doubling its coding efficiency. This paper provides an overview of the forthcoming IEEE 1857.4 video coding standard, including the background and the key coding tools used in IEEE 1857.4. Performance comparisons between IEEE 1857.4 and state-of-the-art coding standards are also provided.
Keywords: video coding; IEEE 1857 video coding standard; IEEE 1857.1; IEEE 1857.4; IEEE std. 1857-2013; Encoding; Filtering; Redundancy; Standards; Surveillance; Transforms; Video coding; AVS2; IEEE 1857 (ID#: 15-7814)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230385&isnumber=7230339
Layman, L.; Seaman, C.; Falessi, D.; Diep, M., “Ask the Engineers: Exploring Repertory Grids and Personal Constructs for Software Data Analysis,” in Cooperative and Human Aspects of Software Engineering (CHASE), 2015 IEEE/ACM 8th International Workshop on, vol., no., pp. 81–84, 18–18 May 2015. doi:10.1109/CHASE.2015.25
Abstract: Maturity in software projects is often equated with data-driven predictability. However, data collection is expensive and measuring all variables that may correlate with project outcome is neither practical nor feasible. In contrast, a project engineer can identify a handful of factors that he or she believes influence the success of a project. The challenge is to quantify engineers’ insights in a way that is useful for data analysis. In this exploratory study, we investigate the repertory grid technique for this purpose. The repertory grid technique is an interview-based procedure for eliciting “constructs” (e.g., Adhering to coding standards) that individuals believe influence a worldly phenomenon (e.g., What makes a high-quality software project) by comparing example elements from their past (e.g., Projects they have worked on). We investigate the relationship between objective metrics of project performance and repertory grid constructs elicited from eight software engineers. Our results show correlations between the engineers’ subjective constructs and the objective project outcome measures. This suggests that repertory grids may be of benefit in developing models of project outcomes, particularly when project data is limited.
Keywords: data analysis; project management; software development management; data collection; interview-based procedure; personal constructs; project outcome measures; project performance; project success; repertory grid technique; software data analysis; software project maturity; subjective constructs; Atmospheric measurements; Companies; Interviews; Particle measurements; Productivity; Software; practitioners; repertory grids; software data analytics (ID#: 15-7815)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166093&isnumber=7166073
Hang Chen; Rong Xie; Liang Zhang, “Gradient Based Fast Mode and Depth Decision for High Efficiency Intra Frame Video Coding,” in Broadband Multimedia Systems and Broadcasting (BMSB), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 17–19 June 2015. doi:10.1109/BMSB.2015.7177230
Abstract: Intra frame coding plays an important role in video coding. To achieve better performance, high-efficiency coding standards exploit new techniques to improve intra-coding. A flexible partition structure and multiple prediction modes are applied to achieve accurate intra prediction. Different block sizes and prediction modes are traversed to find the best coding unit and the most accurate prediction direction. All of these processes add considerable computational complexity. To optimize the intra coding process, a gradient-based algorithm is proposed in this paper to make fast mode and depth decisions. A Sobel operator is applied to obtain the gradient information of pixels, and its statistical properties are studied to find the most probable prediction mode for further selection. Furthermore, texture information guides coding unit partitioning to avoid unnecessary calculation. To reduce encoding complexity, we start from the smallest coding unit and build a bottom-up partition process. Experiments are implemented on the high-efficiency coding standard AVS2, and about 48% of encoding time is saved on average with negligible coding performance loss.
Keywords: computational complexity; encoding; statistical analysis; video coding; Sobel operator; bottom-up partition process; computational complexity; depth decision; encoding complexity; flexible partition structure; gradient based fast mode; gradient pixel information; high efficiency coding standard AVS2; high efficiency intra frame video coding; multiple prediction modes; statistical properties; Algorithm design and analysis; Complexity theory; Encoding; Indexes; Partitioning algorithms; Standards; Video coding; Depth decision; Gradient; Intra prediction; Mode decision (ID#: 15-7816)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177230&isnumber=7177182
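The gradient-driven mode decision described in the abstract can be illustrated generically. The sketch below (not the paper's implementation; the function name and the 18-bin histogram are illustrative choices) applies the standard 3x3 Sobel kernels to a luma block and returns the dominant gradient angle, which a fast encoder could map to the most probable intra prediction direction:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

def dominant_gradient_direction(block):
    """Return the dominant gradient angle (degrees, in [0, 180)) of a block.

    A strong, consistent gradient suggests a directional texture, which a
    fast mode decision can use to narrow the candidate intra modes.
    """
    h, w = block.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            window = block[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * SOBEL_X)
            gy[i, j] = np.sum(window * SOBEL_Y)
    # Histogram the gradient angles weighted by magnitude and pick the peak.
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, edges = np.histogram(ang, bins=18, range=(0, 180), weights=mag)
    peak = np.argmax(hist)
    return (edges[peak] + edges[peak + 1]) / 2.0
```

For a block with a sharp vertical edge, the weighted histogram concentrates in the bin around 0 degrees, i.e. a horizontal gradient, pointing the encoder toward vertical prediction.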
Mandal, D.K.; Mody, M.; Mehendale, M.; Yadav, N.; Chaitanya, G.; Goswami, P.; Sanghvi, H.; Nandan, N., “Accelerating H.264/HEVC Video Slice Processing Using Application Specific Instruction Set Processor,” in Consumer Electronics (ICCE), 2015 IEEE International Conference on, vol., no., pp. 408–411, 9–12 Jan. 2015. doi:10.1109/ICCE.2015.7066465
Abstract: Video coding standards (e.g. H.264, HEVC) use the slice, consisting of a header and payload video data, as an independent coding unit for low latency encode-decode and better transmission error resiliency. In typical video streams, decoding the slice header is simple enough to be done on standard embedded RISC processor architectures. However, universal decoding scenarios require handling worst case slice header complexity that grows to an unmanageable level, well beyond the capacity of most embedded RISC processors. Hardwiring the slice processing control logic is potentially helpful, but it reduces the flexibility to tune the decoder for error conditions, an important differentiator for the end user. The paper presents a programmable approach to accelerate slice header decoding using an Application Specific Instruction Set Processor (ASIP). Purpose-built instructions, implemented as extensions to a RISC processor (ARP32), accelerate slice processing by 30% for typical cases, reaching up to 70% for slices with worst case decoding complexity. The approach enables real time universal video decoding for all slice-complexity scenarios without sacrificing the flexibility and adaptability to customize and differentiate the codec solution via software programmability.
Keywords: instruction sets; reduced instruction set computing; video codecs; video coding; video streaming; ARP32; ASIP; H.264-HEVC video slice processing; RISC processor; application specific instruction set processor; codec solution; programmable approach; real time universal video decoding; slice header decoding; slice-complexity-scenarios; software programmability; worst case decoding complexity; Decoding; Engines; Real-time systems; Software; Standards; Streaming media; Video coding; ASIP; Custom instructions; H.264; HEVC; Slice; Universal Decoder (ID#: 15-7817)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066465&isnumber=7066289
Takada, R.; Orihashi, S.; Matsuo, Y.; Katto, J., “Improvement of 8K UHDTV Picture Quality for H.265/HEVC by Global Zoom Estimation,” in Consumer Electronics (ICCE), 2015 IEEE International Conference on, vol., no., pp. 58–59, 9–12 Jan. 2015. doi:10.1109/ICCE.2015.7066317
Abstract: Block-based Motion Estimation (ME) has been widely used in various video coding standards to remove temporal redundancy. However, block-based ME has the limitation that it can only compensate for parallel translation, and various methods have been proposed for other motions such as zooming. In recent years, 8K UHDTV (7,680 × 4,320 pixels) has been developed. Since 8K video exhibits large zooming motion that is difficult to predict by block matching, it is important to improve zoom motion estimation. In this paper, to handle zooming in 8K video sequences, we propose a method for improving picture quality by global zoom estimation based on analysis of the motion vectors extracted by block matching.
Keywords: estimation theory; high definition television; image matching; image sequences; motion estimation; video coding; 8K UHDTV picture quality; H.265-HEVC; ME; block based motion estimation; block matching; global zoom estimation; motion vector analysis; parallel translation; video coding standards; video sequences; zoom motion estimation; Encoding; Estimation; Motion estimation; Motion segmentation; Proposals; Vectors; Video coding (ID#: 15-7818)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066317&isnumber=7066289
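The idea of recovering a global zoom from a block-matching motion vector field can be sketched in a few lines. Under a pure zoom with factor s about a center c, each block at position p moves by v = (s − 1)(p − c), so s is recoverable by least squares. The function below is an illustrative model along those lines, not the authors' method:

```python
import numpy as np

def estimate_zoom_factor(positions, vectors, center):
    """Least-squares fit of a global zoom factor from block motion vectors.

    Assumes a pure zoom about `center`: a block at position p moves by
    v = (s - 1) * (p - center).  `positions` and `vectors` are (N, 2)
    arrays of block centers and their block-matching motion vectors.
    """
    r = positions - center          # radial offsets from the zoom center
    num = np.sum(r * vectors)       # sum over all blocks of r . v
    den = np.sum(r * r)             # sum over all blocks of |r|^2
    return 1.0 + num / den
```

A real estimator would also reject outlier vectors (local object motion) before the fit, e.g. with RANSAC, since only the dominant camera zoom should drive the global model.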
Papadopoulos, M.A.; Agrafiotis, D.; Bull, D., “On the Performance of Modern Video Coding Standards with Textured Sequences,” in Systems, Signals and Image Processing (IWSSIP), 2015 International Conference on, vol., no., pp. 137–140, 10–12 Sept. 2015. doi:10.1109/IWSSIP.2015.7314196
Abstract: This work presents two studies on the topic of coding highly textured content with H.265/HEVC. The aim of the studies is to identify any potential for improvement in the performance of the codec with this type of content. Both studies employ a texture-focused video database developed by the authors. Study I evaluates the performance of H.265/HEVC relative to H.264/AVC for the case of static, dynamic and mixed texture content. Study II evaluates the effectiveness of the currently used objective quality measures with this type of content. The results suggest that there is potential for improvement in coding performance by matching the quality/error measure used to the type of content (textured/non-textured) and type of texture (static, dynamic, mixed) encountered.
Keywords: image sequences; image texture; video coding; H.265 video coding performance; HEVC performance; modern video coding standards; textured sequences; Correlation; Databases; Encoding; Quality assessment; Rate-distortion; Silicon; Video coding; AVC; BVI Texture; HEVC; texture sequences (ID#: 15-7819)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314196&isnumber=7313917
Blasi, S.G.; Zupancic, I.; Izquierdo, E.; Peixoto, E., “Adaptive Precision Motion Estimation for HEVC Coding,” in Picture Coding Symposium (PCS), 2015, vol., no., pp. 144–148, May 31 2015–June 3 2015. doi:10.1109/PCS.2015.7170064
Abstract: Most video coding standards, including the state-of-the-art High Efficiency Video Coding (HEVC), make use of sub-pixel Motion Estimation (ME) with Motion Vectors (MV) at fractional precisions to achieve high compression ratios. Unfortunately, sub-pixel ME comes at very high computational costs due to the interpolation step and additional motion searches. In this paper, a fast sub-pixel ME algorithm is proposed. The MV precision is adaptively selected on each block to skip the half or quarter precision steps when not needed. The algorithm bases the decision on local features, such as the behaviour of the residual error samples, and global features, such as the amount of edges in the pictures. Experimental results show that the method reduces total encoding time by up to 17.6% compared to conventional HEVC, at modest efficiency losses.
Keywords: data compression; motion estimation; vectors; video codecs; video coding; HEVC coding; MV; adaptive precision motion estimation; high efficiency video coding; motion vectors; subpixel ME algorithm; subpixel motion estimation; video coding standards; video compression ratios; Algorithm design and analysis; Encoding; Libraries; Vehicles; HEVC; Sub-Pixel Motion Estimation; Video Coding (ID#: 15-7820)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170064&isnumber=7170026
Argyriou, V., “DNA Based Image Coding,” in Digital Signal Processing (DSP), 2015 IEEE International Conference on, vol., no., pp. 468–472, 21–24 July 2015. doi:10.1109/ICDSP.2015.7251916
Abstract: Lossless image compression is necessary for many applications related to digital cameras, medical imaging, mobile telecommunications, security and entertainment. Image compression, an important field in image processing, includes several coding standards providing high compression ratios. In this work a novel method for lossless image encoding and decoding is introduced. Inspired by the storage and data representation architectures used in living multicellular organisms, the proposed DNA coding approach encodes images based on the same principles. The coding process includes three main stages, division, differentiation and specialization, allowing the exploitation of spatial and inter-pixel redundancies. The key element to achieving that representation and efficiency is the novel concept of ‘stem’ pixels that is introduced. A comparative study was performed with current state-of-the-art lossless image coding standards, showing that the proposed methodology provides high compression ratios.
Keywords: DNA; data compression; image coding; image representation; medical image processing; redundancy; DNA based image coding; data representation; data storage; image processing; interpixel redundancy; living multicellular organism; lossless image compression; lossless image decoding; lossless image encoding; spatial redundancy; Biomedical imaging; Cameras; Encoding; Mobile communication; Transform coding; lossless compression (ID#: 15-7821)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7251916&isnumber=7251315
Daehyeok Gwon; Haechul Choi; Youn, J.M., “HEVC Fast Intra Mode Decision Based on Edge and SATD Cost,” in Multimedia and Broadcasting (APMediaCast), 2015 Asia Pacific Conference on, vol., no., pp. 1–5, 23–25 April 2015. doi:10.1109/APMediaCast.2015.7210287
Abstract: HEVC (high efficiency video coding) achieves much higher coding efficiency compared with previous video coding standards at the cost of significant computational complexity. This paper proposes a fast intra mode decision scheme, where edge orientation and the sum of absolute Hadamard transformed difference (SATD) are used to consider texture characteristics of blocks. According to these features, the numbers of candidate modes to be tested in the rough mode decision and rate-distortion optimization processes are reduced, respectively. In particular, the rate-distortion optimization candidates are selected by a Bayesian classification framework to minimize risks such as coding loss and computational complexity. Experimental results reveal that the proposed scheme reduces encoding run time by 30.3% with a negligible coding loss of 0.9% BD-rate for the all intra coding scenario.
Keywords: belief networks; computational complexity; distortion; optimisation; video coding; Bayesian classification framework; HEVC fast intra mode decision; SATD cost; computational complexity; edge orientation; high efficiency video coding; rate-distortion optimization processes; rough mode decision; sum of absolute Hadamard transformed difference; Bayes methods; Computational complexity; Encoding; Image edge detection; Indexes; Multimedia communication; Video coding; HEVC; fast encoder; intra coding; mode decision (ID#: 15-7822)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210287&isnumber=7210263
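SATD, the cost measure this entry relies on, is a standard encoder metric: transform the residual with a Hadamard matrix and sum the absolute coefficients. A minimal 4x4 sketch (illustrative; real encoders use optimized butterfly implementations and larger block sizes):

```python
import numpy as np

# 4x4 Hadamard matrix (order-4 Walsh-Hadamard, unnormalized).
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd4x4(orig, pred):
    """Sum of absolute Hadamard-transformed differences of a 4x4 block.

    Cheaper than a full DCT-based rate-distortion cost, yet a better
    predictor of coded cost than plain SAD, which is why rough mode
    decision stages commonly rank candidate modes by SATD.
    """
    diff = orig.astype(np.int64) - pred.astype(np.int64)
    t = H4 @ diff @ H4.T          # 2-D Hadamard transform of the residual
    return int(np.abs(t).sum())
```

A perfect prediction gives SATD 0; a constant residual concentrates all energy in the DC coefficient.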
Yu, Matt; Lakshman, Haricharan; Girod, Bernd, “A Framework to Evaluate Omnidirectional Video Coding Schemes,” in Mixed and Augmented Reality (ISMAR), 2015 IEEE International Symposium on, vol., no., pp. 31–36, Sept. 29 2015–Oct. 3 2015. doi:10.1109/ISMAR.2015.12
Abstract: Omnidirectional videos of real world environments viewed on head-mounted displays with real-time head motion tracking can offer immersive visual experiences. For live streaming applications, compression is critical to reduce the bitrate. Omnidirectional videos, which are spherical in nature, are mapped onto one or more planes before encoding to interface with modern video coding standards. In this paper, we consider the problem of evaluating the coding efficiency in the context of viewing with a head-mounted display. We extract viewport based head motion trajectories, and compare the original and coded videos on the viewport. With this approach, we compare different sphere-to-plane mappings. We show that the average viewport quality can be approximated by a weighted spherical PSNR.
Keywords: Approximation methods; Bit rate; Encoding; Head; Streaming media; Trajectory; Video coding (ID#: 15-7823)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328056&isnumber=7328030
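The weighted spherical PSNR mentioned in the abstract weights each row of an equirectangular frame by the cosine of its latitude, so the oversampled polar regions do not dominate the error. A small sketch of that weighting (an illustrative formulation, not necessarily the authors' exact definition):

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """Cosine-latitude-weighted PSNR for a 2-D equirectangular frame.

    Each pixel row maps to a latitude band on the sphere; weighting the
    squared error by cos(latitude) approximates per-area quality on the
    sphere rather than on the stretched planar projection.
    """
    h, w = ref.shape
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2   # row latitudes
    weights = np.cos(lat)[:, None] * np.ones((1, w))
    se = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    wmse = np.sum(se * weights) / np.sum(weights)
    return 10.0 * np.log10(max_val ** 2 / wmse)
```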
Tok, M.; Eiselein, V.; Sikora, T., “Motion Modeling for Motion Vector Coding in HEVC,” in Picture Coding Symposium (PCS), 2015, vol., no., pp. 154–158, May 31 2015–June 3 2015. doi:10.1109/PCS.2015.7170066
Abstract: During the standardization of HEVC, new motion information coding and prediction schemes such as temporal motion vector prediction have been investigated to reduce the spatial redundancy of motion vector fields used for motion compensated inter prediction. In this paper a general motion model based vector coding scheme is introduced. This scheme includes estimation, coding and dynamic recombination of parametric motion models to generate vector predictors and merge candidates for all common HEVC inter coding settings. Bit rate reductions of up to 4.9% indicate that higher order motion models can increase the efficiency of motion information coding in modern hybrid video coding standards.
Keywords: motion estimation; video coding; HEVC; bit rate reduction; general motion model based vector coding scheme; high efficiency video coding; motion information coding; motion model coding; motion model dynamic recombination; motion model estimation; motion prediction scheme; parametric motion model; spatial redundancy reduction; Bit rate; Delays; Encoding; Image coding; Predictive models; Standards; Video coding (ID#: 15-7824)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170066&isnumber=7170026
Weijia Zhu; Wenpeng Ding; Jizheng Xu; Yunhui Shi; Baocai Yin, “Multi-stage Hash Based Motion Estimation for HEVC,” in Data Compression Conference (DCC), 2015, vol., no., pp. 478–478, 7–9 April 2015. doi:10.1109/DCC.2015.25
Abstract: Motion estimation plays an important role in video coding standards, such as H.264/AVC and HEVC. In this paper, we propose a multi-stage hash based motion estimation algorithm for HEVC, which enables hash based motion estimation for natural videos. In the proposed method, prediction blocks that differ significantly from the current prediction unit are eliminated from the motion estimation process. Locality sensitive hashing functions are used to measure the difference between the input block and predicted blocks. The proposed algorithm is implemented in the HM 12.0 software, and the simulation results show that the complexity of motion estimation is significantly reduced with negligible coding performance loss.
Keywords: computational complexity; cryptography; motion estimation; video coding; HEVC; HM12.0 software; complexity reduction; locality sensitive hashing function; multistage hash based motion estimation; natural video coding standard; Asia; Data compression; Encoding; Motion estimation; Multimedia communication; Software; Transportation; Locality sensitive hashing; fast motion estimation (ID#: 15-7825)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149341&isnumber=7149089
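The candidate-pruning idea behind hash based motion estimation can be illustrated with a simple sign-of-random-projection LSH. This is a generic sketch (the hash family, DC removal, and function names are illustrative assumptions, not the paper's design): blocks whose hashes are far apart in Hamming distance are skipped before any expensive distortion computation.

```python
import numpy as np

def block_hash(block, planes):
    """Locality-sensitive hash: signs of random projections of the block."""
    v = block.astype(np.float64).ravel()
    v = v - v.mean()              # remove DC so a brightness offset hashes equally
    return (planes @ v) > 0       # one bit per random hyperplane

def prune_candidates(cur, candidates, planes, max_hamming):
    """Keep indices of candidate blocks whose hash is near the current PU's."""
    h_cur = block_hash(cur, planes)
    kept = []
    for idx, cand in enumerate(candidates):
        if np.count_nonzero(block_hash(cand, planes) ^ h_cur) <= max_hamming:
            kept.append(idx)
    return kept
```

Only the surviving candidates would then enter the full motion search, which is where the complexity reduction comes from.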
Sandboxing 2015 |
At a recent Lablet quarterly meeting and at the HotSoS 2015 Symposium and Bootcamp on the Science of Security, sandboxing was discussed as an important tool for the Science of Security community, particularly with regard to developing composable systems and policy-governed systems. To many researchers, it is a promising method for preventing and containing damage. Sandboxing, frequently used to test unverified programs that may contain malware, allows the software to run without harming the host device. The bibliographies cited here are of articles about sandboxing published in 2015.
Irazoqui, G.; Eisenbarth, T.; Sunar, B., “S$A: A Shared Cache Attack that Works Across Cores and Defies VM Sandboxing — and Its Application to AES,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 591–604, 17–21 May 2015. doi:10.1109/SP.2015.42
Abstract: The cloud computing infrastructure relies on virtualized servers that provide isolation across guest OSs through sandboxing. This isolation was demonstrated to be imperfect in past work which exploited hardware level information leakages to gain access to sensitive information across co-located virtual machines (VMs). In response, virtualization companies and cloud services providers have disabled features such as deduplication to prevent such attacks. In this work, we introduce a fine-grain cross-core cache attack that exploits access time variations on the last level cache. The attack exploits huge pages to work across VM boundaries without requiring deduplication. No configuration changes on the victim OS are needed, making the attack quite viable. Furthermore, only machine co-location is required, while the target and victim OS can still reside on different cores of the machine. Our new attack is a variation of the prime and probe cache attack, which at the time was applicable only to the L1 cache. In contrast, our attack works in the spirit of the flush and reload attack, targeting the shared L3 cache instead. Indeed, by adjusting the huge page size our attack can be customized to work at virtually any cache level/size. We demonstrate the viability of the attack by targeting an OpenSSL 1.0.1f implementation of AES. The attack recovers AES keys in the cross-VM setting on Xen 4.1 with deduplication disabled, being only slightly less efficient than the flush and reload attack. Given that huge pages are a standard feature enabled in the memory management unit of OSs and that besides co-location no additional assumptions are needed, the attack we present poses a significant risk to existing cloud servers.
Keywords: cache storage; cloud computing; security of data; virtual machines; AES keys; L1 cache; Open SSL1.0.1f implementation; S$A; VM sandboxing; cloud computing infrastructure; cloud servers; cloud services; probe cache attack; shared cache attack; virtual machines; virtualized servers; Cloud computing; Cryptography; Hardware; Monitoring; Program processors; Servers; Cross-VM; cache attacks; flush+reload; huge pages; memory deduplication; prime and probe (ID#: 15-7523)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163049&isnumber=7163005
Reffett, C.; Fleck, D., “Securing Applications with Dyninst,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225297
Abstract: While significant bodies of work exist for sandboxing potentially malicious software and for sanitizing input, there has been little investigation into using binary editing software to perform either of these tasks. However, because binary editors do not require source code and can modify the software, they can generate secure versions of arbitrary binaries and provide better control over the software than existing approaches. In this paper, we explore the application of the binary editing library Dyninst to both the sandboxing and sanitization problems. We also create a prototype of a more advanced graphical tool to perform these tasks. Finally, we lay the groundwork for more complex and functional tools to solve these problems.
Keywords: program diagnostics; security of data; software libraries; Dyninst; arbitrary binaries; binary editing library; binary editing software; binary editors; graphical tool; input sanitization; malicious software; sanitization problems; secure versions; securing applications; Graphical user interfaces; Instruments; Libraries; Memory management; Monitoring; Runtime; Software; binary instrumentation; dyninst; input sanitization; sandboxing (ID#: 15-7524)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225297&isnumber=7190491
De Ryck, P.; Nikiforakis, N.; Desmet, L.; Piessens, F.; Joosen, W., “Protected Web Components: Hiding Sensitive Information in the Shadows,” in IT Professional, vol. 17, no.1, pp. 36–43, Jan.–Feb. 2015. doi:10.1109/MITP.2015.12
Abstract: Most modern Web applications depend on the integration of code from third-party providers, such as JavaScript libraries and advertisements. Because the included code runs within the page’s security context, it represents an attractive attack target, allowing the compromise of numerous Web applications through a single attack vector (such as a malicious advertisement). Such opportunistic attackers aim to execute low-profile, nontargeted, widely applicable data-gathering attacks, such as the silent extraction of user-specific data and authentication credentials. In this article, the authors show that third-party code inclusion is rampant, even in privacy-sensitive applications such as online password managers, thereby potentially exposing the user’s most sensitive data to attackers. They propose protected Web components, which leverage the newly proposed Web components, repurposing them to protect private data against opportunistic attacks, by hiding static data in the Document Object Model (DOM) and isolating sensitive interactive elements within a component. This article is part of a special issue on IT security.
Keywords: Internet; data encapsulation; data privacy; document handling; DOM; Document Object Model; IT security; Web applications; Web component protection; attack vector; code integration; data-gathering attacks; opportunistic attacks; page security context; privacy-sensitive applications; sensitive information hiding; sensitive interactive element isolation; static data hiding; third-party code inclusion; Browsers; Computer Security; Context modeling; Data models; Google; HTML; Information technology; Web services; Internet/Web technologies; Web components; information technology; privacy; sandboxing; script inclusion; security; shadow DOM (ID#: 15-7525)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030138&isnumber=7030137
Thom, D.; Ertl, T., “TreeQueST: A Treemap-Based Query Sandbox for Microdocument Retrieval,” in System Sciences (HICSS), 2015 48th Hawaii International Conference on, vol., no., pp. 1714–1723, 5–8 Jan. 2015. doi:10.1109/HICSS.2015.206
Abstract: Scatter/Gather-browsing has been proposed as a technique for information retrieval that fosters understanding of textual data and identification of key documents by means of exploration and drill-down. It has been found that such approaches are more expensive but not more effective than less interactive search solutions for traditional retrieval tasks. In this paper, however, we show that the rise of online micro document platforms, such as Twitter, has brought new relevance to the technique for finding and understanding information about recent events. Our novel approach builds on hierarchical topic clustering combined with a tree map-based visualization to provide a highly interactive information management and query sandboxing space. Large volumes of data, only accessible through rate- and throughput-limited channels, can thus effectively be filtered and retrieved using iteratively optimized queries. We conducted a user study that demonstrates the performance of our approach compared to plain text search based on the Twitter engine.
Keywords: data visualisation; document handling; pattern clustering; query processing; search engines; social networking (online); trees (mathematics); TreeQueST; Twitter engine; hierarchical topic clustering; information retrieval; microdocument retrieval; tree map-based visualization; treemap-based query sandbox; Data visualization; Media; Navigation; Query processing; Twitter; Visualization; Hierarchical Topics; Twitter; Visual Analytics (ID#: 15-7526)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070016&isnumber=7069647
Steven Van Acker, Daniel Hausknecht, Wouter Joosen, Andrei Sabelfeld; “Password Meters and Generators on the Web: From Large-Scale Empirical Study to Getting It Right,” CODASPY ’15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 253–262. doi:10.1145/2699026.2699118
Abstract: Web services heavily rely on passwords for user authentication. To help users choose stronger passwords, password meter and password generator facilities are becoming increasingly popular. Password meters estimate the strength of passwords provided by users. Password generators help users with generating stronger passwords. This paper turns the spotlight on the state of the art of password meters and generators on the web. Orthogonal to the large body of work on password metrics, we focus on getting password meters and generators right in the web setting. We report on the state of affairs via a large-scale empirical study of web password meters and generators. Our findings reveal pervasive trust in third-party code to have access to the passwords. We uncover three cases where this trust is abused to leak the passwords to third parties. Furthermore, we discover that often the passwords are sent out to the network, invisibly to users, and sometimes in the clear. To improve the state of the art, we propose SandPass, a general web framework that allows secure and modular porting of password meter and generation modules. We demonstrate the usefulness of the framework by a reference implementation and a case study with a password meter by the Swedish Post and Telecommunication Agency.
Keywords: passwords, sandboxing, web security (ID#: 15-7527)
URL: http://doi.acm.org/10.1145/2699026.2699118
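For readers unfamiliar with what a password meter computes, a toy charset-entropy estimate conveys the basic idea. This is a deliberately naive sketch for illustration only; real meters, including those studied in the paper, use far richer models (dictionaries, patterns, keyboard walks), and the function name is our own:

```python
import math
import string

def naive_strength_bits(password):
    """Toy strength estimate: length times log2 of the guessed charset size.

    The charset is inferred from which character classes appear; this
    overestimates strength for dictionary words, which is exactly the
    weakness that modern meters address with pattern matching.
    """
    pools = [(string.ascii_lowercase, 26), (string.ascii_uppercase, 26),
             (string.digits, 10), (string.punctuation, 32)]
    charset = sum(size for chars, size in pools
                  if any(c in chars for c in password))
    return len(password) * math.log2(charset) if charset else 0.0
```

The security point of the paper is orthogonal to the metric itself: whatever the meter computes, the password must not leak to third-party code while it is being measured.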
Niranjan Suri; “Java and Distributed Systems: Observations, Experiences, and ... a Wish List,” PPPJ ’15 Proceedings of the Principles and Practices of Programming on The Java Platform, September 2015, pages 1–1. doi:10.1145/2807426.2817927
Abstract: When Java was introduced to the world at large 20 years ago, it brought many interesting features and capabilities into the mainstream computing environment. A Virtual Machine based approach with a just-in-time compiler that supported sandboxing, dynamic class loading, and introspection enabled a number of novel and innovative network-based applications to be developed. While many of these capabilities existed in some fashion in other prototype and experimental languages, the combination of all of them in a popular general purpose language opened up the possibility of building real systems that could leverage these capabilities. Applets, Jini, JXTA, and many other innovative concepts were introduced over the course of time, building on top of the basic capabilities of Java. This talk will present some personal experiences with using Java in distributed computing environments ranging from mobile software agents to distributed resource sharing to process integrated mechanisms. The basis for many of these capabilities is the Aroma Virtual Machine, a custom Java compatible VM with state capture, migration, and resource control capabilities. Motivations behind the Aroma VM will be discussed, along with design choices and some results. Finally, the talk will discuss a wish list of features that would be nice to have in future versions of Java to enable many more novel applications to be developed.
Keywords: (not provided) (ID#: 15-7528)
URL: http://doi.acm.org/10.1145/2807426.2817927
Ben Niu, Gang Tan; “Per-Input Control-Flow Integrity,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 914–926. doi:10.1145/2810103.2813644
Abstract: Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges. We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead.
Keywords: control-flow integrity, dynamic CFI (ID#: 15-7529)
URL: http://doi.acm.org/10.1145/2810103.2813644
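The lazy edge-activation mechanism described in the abstract can be modeled compactly. The sketch below (class and method names are our own; the real PICFI works on instrumented binaries, not Python objects) starts with an empty enforced CFG and admits an edge on first use only if the statically computed all-input CFG allows it:

```python
class PerInputCFI:
    """Model of per-input CFI: the enforced CFG grows lazily per execution,
    bounded above by a statically computed all-input CFG."""

    def __init__(self, static_cfg):
        self.static_cfg = set(static_cfg)  # all edges any input may need
        self.enforced = set()              # edges activated so far this run

    def check_transfer(self, src, dst):
        """Validate a control transfer; activate the edge on first use."""
        edge = (src, dst)
        if edge in self.enforced:
            return True                    # already activated, fast path
        if edge in self.static_cfg:
            self.enforced.add(edge)        # lazy activation for this input
            return True
        return False                       # edge outside the static CFG: violation
```

The fast path mirrors the paper's no-op patching: once an edge is activated, subsequent transfers along it incur essentially no checking cost.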
Meng Xu, Yeongjin Jang, Xinyu Xing, Taesoo Kim, Wenke Lee; “UCognito: Private Browsing Without Tears,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 438–449. doi:10.1145/2810103.2813716
Abstract: While private browsing is a standard feature, its implementation has been inconsistent among the major browsers. More seriously, it often fails to provide the adequate or even the intended privacy protection. For example, as shown in prior research, browser extensions and add-ons often undermine the goals of private browsing. In this paper, we first present our systematic study of private browsing. We developed a technical approach to identify browser traces left behind by a private browsing session, and showed that Chrome and Firefox do not correctly clear some of these traces. We analyzed the source code of these browsers and discovered that the current implementation approach is to decide the behaviors of a browser based on the current browsing mode (i.e., private or public); but such decision points are scattered throughout the code base. This implementation approach is very problematic because developers are prone to make mistakes given the complexities of browser components (including extensions and add-ons). Based on this observation, we propose a new and general approach to implement private browsing. The main idea is to overlay the actual filesystem with a sandbox filesystem when the browser is in private browsing mode, so that no unintended leakage is allowed and no persistent modification is stored. This approach requires no change to browsers and the OS kernel because the layered sandbox filesystem is implemented by interposing system calls. We have implemented a prototype system called Ucognito on Linux. Our evaluations show that Ucognito, when applied to Chrome and Firefox, stops all known privacy leaks identified by prior work and our current study. More importantly, Ucognito incurs only negligible performance overhead: e.g., 0%–2.5% in benchmarks for standard JavaScript and webpage loading.
Keywords: browser implementation, filesystem sandbox, private browsing (ID#: 15-7530)
URL: http://doi.acm.org/10.1145/2810103.2813716
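The sandbox-filesystem idea at the heart of Ucognito, overlaying the real filesystem so private-mode writes never persist, can be illustrated with a toy copy-on-write store. This is a conceptual model only (Ucognito interposes system calls on Linux; the class below is our own dict-based analogy):

```python
class OverlaySession:
    """Toy copy-on-write overlay: reads fall through to the base store,
    writes land in a private layer, and discarding the layer at the end
    of the session leaves the base untouched."""

    def __init__(self, base):
        self.base = base      # the real, persistent store
        self.layer = {}       # private writes made during the session
        self.deleted = set()  # whiteouts for files removed in the session

    def read(self, path):
        if path in self.deleted:
            raise FileNotFoundError(path)
        if path in self.layer:
            return self.layer[path]
        return self.base[path]

    def write(self, path, data):
        self.deleted.discard(path)
        self.layer[path] = data   # never touches self.base

    def delete(self, path):
        self.layer.pop(path, None)
        self.deleted.add(path)    # hide the base copy without removing it

    def discard(self):
        """End the private session: drop every trace of it."""
        self.layer.clear()
        self.deleted.clear()
```

The appeal of this design, as the paper argues, is that the browser needs no mode-dependent decision points at all: it runs unmodified, and privacy follows from where its writes land.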
Minjia Zhang, Jipeng Huang, Man Cao, Michael D. Bond; “Low-Overhead Software Transactional Memory with Progress Guarantees and Strong Semantics,” PPoPP 2015 Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, January 2015, Pages 97–108. doi:10.1145/2688500.2688510
Abstract: Software transactional memory offers an appealing alternative to locks by improving programmability, reliability, and scalability. However, existing STMs are impractical because they add high instrumentation costs and often provide weak progress guarantees and/or semantics. This paper introduces a novel STM called LarkTM that provides three significant features. (1) Its instrumentation adds low overhead except when accesses actually conflict, enabling low single-thread overhead and scaling well on low-contention workloads. (2) It uses eager concurrency control mechanisms, yet naturally supports flexible conflict resolution, enabling strong progress guarantees. (3) It naturally provides strong atomicity semantics at low cost. LarkTM’s design works well for low-contention workloads, but adds significant overhead under higher contention, so we design an adaptive version of LarkTM that uses alternative concurrency control for high-contention objects. An implementation and evaluation in a Java virtual machine show that the basic and adaptive versions of LarkTM not only provide low single-thread overhead, but their multithreaded performance compares favorably with existing high-performance STMs.
Keywords: Software transactional memory, biased reader-writer locks, concurrency control, managed languages, strong atomicity (ID#: 15-7531)
URL: http://doi.acm.org/10.1145/2688500.2688510
Sophia Drossopoulou, James Noble, Mark S. Miller; “Swapsies on the Internet: First Steps Towards Reasoning About Risk and Trust in an Open World,” PLAS’15 Proceedings of the 10th ACM Workshop on Programming Languages and Analysis for Security, July 2015, Pages 2–15. doi:10.1145/2786558.2786564
Abstract: Contemporary open systems use components developed by many different parties, linked together dynamically in unforeseen constellations. Code needs to live up to strict security specifications: it has to ensure the correct functioning of its objects when they collaborate with external objects which may be malicious. In this paper we propose specifications that model risk and trust in such open systems. We specify Miller, Van Cutsem, and Tulloh’s escrow exchange example, and discuss the meaning of such a specification. We argue informally that the code satisfies its specification.
Keywords: (not provided) (ID#: 15-7532)
URL: http://doi.acm.org/10.1145/2786558.2786564
Minh Ngo, Fabio Massacci, Dimiter Milushev, Frank Piessens; “Runtime Enforcement of Security Policies on Black Box Reactive Programs,” POPL ’15 Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2015, Pages 43–54. doi:10.1145/2676726.2676978
Abstract: Security enforcement mechanisms like execution monitors are used to make sure that some untrusted program complies with a policy. Different enforcement mechanisms have different strengths and weaknesses and hence it is important to understand the qualities of various enforcement mechanisms. This paper studies runtime enforcement mechanisms for reactive programs. We study the impact of two important constraints that many practical enforcement mechanisms satisfy: (1) the enforcement mechanism must handle each input/output event in finite time and on occurrence of the event (as opposed to for instance Ligatti’s edit automata that have the power to buffer events for an arbitrary amount of time), and (2) the enforcement mechanism treats the untrusted program as a black box: it can monitor and/or edit the input/output events that the program exhibits on execution and it can explore alternative executions of the program by running additional copies of the program and providing these different inputs. It cannot inspect the source or machine code of the untrusted program. Such enforcement mechanisms are important in practice: they include for instance many execution monitors, virtual machine monitors, and secure multi-execution or shadow executions. We establish upper and lower bounds for the class of policies that are enforceable by such black box mechanisms, and we propose a generic enforcement mechanism that works for a wide range of policies. We also show how our generic enforcement mechanism can be instantiated to enforce specific classes of policies, at the same time showing that many existing enforcement mechanisms are optimized instances of our construction.
Keywords: black box mechanism, hypersafety policy, reactive program, runtime enforcement (ID#: 15-7533)
URL: http://doi.acm.org/10.1145/2676726.2676978
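The constraint the abstract emphasizes (each event must be handled in finite time, on occurrence, with no unbounded buffering) can be sketched in a few lines of Python. This is a generic illustration of an execution monitor under that constraint, with a hypothetical example policy; it is not the paper's formal construction.

```python
class Monitor:
    """Per-event enforcement: each event is accepted or suppressed
    immediately on occurrence (no unbounded buffering, in contrast
    to edit automata that may delay events arbitrarily)."""

    def __init__(self, policy):
        self.policy = policy      # predicate over the trace so far
        self.trace = []

    def handle(self, event):
        candidate = self.trace + [event]
        if self.policy(candidate):
            self.trace = candidate
            return event          # pass the event through unchanged
        return None               # suppress the offending event

# Hypothetical example policy: no output may follow a "secret-read" input.
def no_leak_after_secret(trace):
    saw_secret = False
    for kind, payload in trace:
        if kind == "in" and payload == "secret-read":
            saw_secret = True
        elif kind == "out" and saw_secret:
            return False
    return True
```

Note the monitor treats the program as a black box: it sees only the input/output events, never the program's code, which matches the class of mechanisms the paper studies.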
Kavita Agarwal, Bhushan Jain, Donald E. Porter; “Containing the Hype,” APSys ’15 Proceedings of the 6th Asia-Pacific Workshop on Systems, July 2015, Article No. 8. doi:10.1145/2797022.2797029
Abstract: Containers, or OS-based virtualization, have seen a recent resurgence in deployment. The term “container” is nearly synonymous with “lightweight virtualization”, despite a remarkable dearth of careful measurements supporting this notion. This paper contributes comparative measurements and analysis of both containers and hardware virtual machines where the functionality of both technologies intersects. This paper focuses on two important issues for cloud computing: density (guests per physical host) and start-up latency (for responding to load spikes). We conclude that the overall density is highly dependent on the most demanded resource. In many dimensions there are no significant differences, and in other dimensions VMs have significantly higher overheads. A particular contribution is the first detailed analysis of the biggest difference—memory footprint—and opportunities to significantly reduce this overhead.
Keywords: (not provided) (ID#: 15-7534)
URL: http://doi.acm.org/10.1145/2797022.2797029
Andrey Chudnov, David A. Naumann; “Inlined Information Flow Monitoring for JavaScript,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 629–643. doi:10.1145/2810103.2813684
Abstract: Extant security mechanisms for web apps, notably the “same-origin policy”, are not sufficient to achieve confidentiality and integrity goals for the many apps that manipulate sensitive information. The trend in web apps is “mashups” which integrate JavaScript code from multiple providers in ways that can undercut existing security mechanisms. Researchers are exploring dynamic information flow controls (IFC) for JavaScript, but there are many challenges to achieving strong IFC without excessive performance cost or impractical browser modifications. This paper presents an inlined IFC monitor for ECMAScript 5 with web support, using the no-sensitive-upgrade (NSU) technique, together with experimental evaluation using synthetic mashups and performance benchmarks. On this basis it should be possible to conduct experiments at scale to evaluate feasibility of both NSU and inlined monitoring.
Keywords: inlined monitoring, javascript, run-time monitoring, web applications (ID#: 15-7535)
URL: http://doi.acm.org/10.1145/2810103.2813684
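The no-sensitive-upgrade (NSU) discipline mentioned in this abstract can be shown with a tiny Python model. This is an illustrative sketch of the general NSU rule for a hypothetical two-level lattice, not the paper's inlined ECMAScript 5 monitor.

```python
LOW, HIGH = 0, 1

class SecurityError(Exception):
    pass

class NSUMonitor:
    """No-sensitive-upgrade: under secret (high) control flow, writing
    to an existing public (low) variable is stopped rather than silently
    relabelled, since the relabelling itself would leak one bit."""

    def __init__(self):
        self.labels = {}   # variable name -> security label
        self.values = {}

    def assign(self, var, value, pc):
        if var in self.labels and pc == HIGH and self.labels[var] == LOW:
            raise SecurityError(f"NSU violation: cannot upgrade {var!r}")
        self.labels[var] = max(self.labels.get(var, LOW), pc)
        self.values[var] = value
```

A variable first created inside a high context simply starts out high; the violation arises only when a branch on secret data tries to write a variable the attacker can already observe as low.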
Tobias Holstein, Joachim Wietzke; “Contradiction of Separation through Virtualization and Inter Virtual Machine Communication in Automotive Scenarios,” ECSAW ’15 Proceedings of the 2015 European Conference on Software Architecture Workshops, September 2015, Article No. 4. doi:10.1145/2797433.2797437
Abstract: A trend in automotive infotainment software is to create a separation of components based on different domains (e.g. Navigation, Radio, etc.). This separation is intended to limit susceptibility to errors, simplify maintenance, and organize development by domain. Multi-OS environments create another layer of separation through hardware/software virtualization. Using a hypervisor for virtualization allows the development of mixed critical systems. However, we see a contradiction in current architectures, which on one side aim to separate everything into virtual machines (VMs), while on the other side allow inter-VM-connectivity. In the end all applications are composited into one homogeneous UI and the previous intent of separation is disregarded. In this paper we investigate current architectures for in-vehicle infotainment systems (IVIS), i.e. mixed critical systems for automotive purposes, and show that regulations and/or requirements break the original intent of the architecture.
Keywords: Composition, Heterogeneous Platforms, Hypervisor, Ubiquitous Interoperability, User Interface, Virtualization (ID#: 15-7536)
URL: http://doi.acm.org/10.1145/2797433.2797437
Petr Hosek, Cristian Cadar; “VARAN the Unbelievable: An Efficient N-version Execution Framework,” ASPLOS ’15 Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, March 2015, Pages 339–353. doi:10.1145/2775054.2694390
Abstract: With the widespread availability of multi-core processors, running multiple diversified variants or several different versions of an application in parallel is becoming a viable approach for increasing the reliability and security of software systems. The key component of such N-version execution (NVX) systems is a runtime monitor that enables the execution of multiple versions in parallel. Unfortunately, existing monitors impose either a large performance overhead or rely on intrusive kernel-level changes. Moreover, none of the existing solutions scales well with the number of versions, since the runtime monitor acts as a performance bottleneck. In this paper, we introduce Varan, an NVX framework that combines selective binary rewriting with a novel event-streaming architecture to significantly reduce performance overhead and scale well with the number of versions, without relying on intrusive kernel modifications. Our evaluation shows that Varan can run NVX systems based on popular C10k network servers with only a modest performance overhead, and can be effectively used to increase software reliability using techniques such as transparent failover, live sanitization and multi-revision execution.
Keywords: N-version execution, event streaming, live sanitization, multi-revision execution, record-replay, selective binary rewriting, transparent failover (ID#: 15-7537)
URL: http://doi.acm.org/10.1145/2775054.2694390
Swarnendu Biswas, Minjia Zhang, Michael D. Bond, Brandon Lucia; “Valor: Efficient, Software-only Region Conflict Exceptions,” OOPSLA 2015 Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, October 2015, Pages 241–259. doi:10.1145/2814270.2814292
Abstract: Data races complicate programming language semantics, and a data race is often a bug. Existing techniques detect data races and define their semantics by detecting conflicts between synchronization-free regions (SFRs). However, such techniques either modify hardware or slow programs dramatically, preventing always-on use today. This paper describes Valor, a sound, precise, software-only region conflict detection analysis that achieves high performance by eliminating the costly analysis on each read operation that prior approaches require. Valor instead logs a region’s reads and lazily detects conflicts for logged reads when the region ends. As a comparison, we have also developed FastRCD, a conflict detector that leverages the epoch optimization strategy of the FastTrack data race detector. We evaluate Valor, FastRCD, and FastTrack, showing that Valor dramatically outperforms FastRCD and FastTrack. Valor is the first region conflict detector to provide strong semantic guarantees for racy program executions with under 2X slowdown. Overall, Valor advances the state of the art in always-on support for strong behavioral guarantees for data races.
Keywords: conflict exceptions, data races, dynamic analysis, region serializability (ID#: 15-7538)
URL: http://doi.acm.org/10.1145/2814270.2814292
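The lazy read validation that distinguishes Valor from read-instrumenting detectors can be sketched in Python. This is a hypothetical single-threaded model of the idea (log each read with a version, bump versions on writes, validate the log only when the region ends), not Valor's actual JVM implementation.

```python
class RegionConflictDetector:
    """Lazy read validation: reads are logged with the object's current
    write version; a region's reads are only checked when it ends, so
    no costly analysis runs on each read operation."""

    def __init__(self):
        self.versions = {}        # object id -> write version

    def begin_region(self):
        return []                 # fresh read log for this region

    def read(self, log, obj):
        log.append((obj, self.versions.get(obj, 0)))

    def write(self, obj):
        self.versions[obj] = self.versions.get(obj, 0) + 1

    def end_region(self, log):
        # A logged read conflicts if a write changed the version
        # after the read was taken.
        for obj, seen in log:
            if self.versions.get(obj, 0) != seen:
                raise RuntimeError(f"region conflict on {obj!r}")
```

Eliminating per-read checks in favor of end-of-region validation is exactly the trade the abstract credits for Valor's performance; this sketch shows only the bookkeeping shape of that trade.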
Ben Hermann, Michael Reif, Michael Eichberg, Mira Mezini; “Getting to Know You: Towards a Capability Model for Java,” ESEC/FSE 2015 Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, August 2015,
Pages 758–769. doi:10.1145/2786805.2786829
Abstract: Developing software from reusable libraries confronts developers with a security dilemma: either be efficient and reuse libraries as they are, or inspect them to learn about their resource usage but possibly miss deadlines, as reviews are a time-consuming process. In this paper, we propose a novel capability inference mechanism for libraries written in Java. It uses a coarse-grained capability model for system resources that can be presented to developers. We found that the inferred capabilities agree with 86.81% of the expectations that can be derived from project documentation. Moreover, our approach can find capabilities that cannot be discovered using project documentation. It is thus a helpful tool for developers, mitigating the aforementioned dilemma.
Keywords: analysis, capability, library, reuse, security (ID#: 15-7539)
URL: http://doi.acm.org/10.1145/2786805.2786829
Daejun Park, Andrei Stefănescu, Grigore Roşu; “KJS: A Complete Formal Semantics of JavaScript,” PLDI 2015 Proceedings
of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2015, Pages 346–356. doi:10.1145/2737924.2737991
Abstract: This paper presents KJS, the most complete and thoroughly tested formal semantics of JavaScript to date. Being executable, KJS has been tested against the ECMAScript 5.1 conformance test suite, and passes all 2,782 core language tests. Among the existing implementations of JavaScript, only Chrome V8 passes all the tests, and no other semantics passes more than 90%. In addition to a reference implementation for JavaScript, KJS also yields a simple coverage metric for a test suite: the set of semantic rules it exercises. Our semantics revealed that the ECMAScript 5.1 conformance test suite fails to cover several semantic rules. Guided by the semantics, we wrote tests to exercise those rules. The new tests revealed bugs both in production JavaScript engines (Chrome V8, Safari WebKit, Firefox SpiderMonkey) and in other semantics. KJS is symbolically executable, thus it can be used for formal analysis and verification of JavaScript programs. We verified non-trivial programs and found a known security vulnerability.
Keywords: JavaScript, K framework, mechanized semantics (ID#: 15-7540)
URL: http://doi.acm.org/10.1145/2737924.2737991
Junjie Wang, Yinxing Xue, Yang Liu, Tian Huat Tan; “JSDC: A Hybrid Approach for JavaScript Malware Detection and Classification,” ASIA CCS ’15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 109–120. doi:10.1145/2714576.2714620
Abstract: Malicious JavaScript is one of the biggest threats in cyber security. Existing research and anti-virus products mainly focus on detection of JavaScript malware rather than classification. Usually, the detection will simply report the malware family name without elaborating details about attacks conducted by the malware. Worse yet, the reported family name may differ from one tool to another due to different naming conventions. In this paper, we propose a hybrid approach to perform JavaScript malware detection and classification in an accurate and efficient way, which can not only explain the attack model but also potentially discover new malware variants and new vulnerabilities. Our approach starts with machine learning techniques to detect JavaScript malware using predictive features of textual information, program structures and risky function calls. For the detected malware, we classify them into eight known attack types according to their attack feature vector or dynamic execution traces, using machine learning and dynamic program analysis respectively. We implement our approach in a tool named JSDC, and conduct large-scale evaluations to show its effectiveness. The controlled experiments (with 942 malware samples) show that JSDC gives a low false positive rate (0.2123%) and a low false negative rate (0.8492%), compared with other tools. We further apply JSDC to 1,400,000 real-world JavaScript files, reporting over 1,500 malware samples, many of which anti-virus tools failed to detect. Lastly, JSDC can effectively and accurately classify these detected malware samples into their attack types.
Keywords: (not provided) (ID#: 15-7541)
URL: http://doi.acm.org/10.1145/2714576.2714620
Soo-Jin Moon, Vyas Sekar, Michael K. Reiter; “Nomad: Mitigating Arbitrary Cloud Side Channels via Provider-Assisted Migration,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 1595–1606. doi:10.1145/2810103.2813706
Abstract: Recent studies have shown a range of co-residency side channels that can be used to extract private information from cloud clients. Unfortunately, addressing these side channels often requires detailed attack-specific fixes that require significant modifications to hardware, client virtual machines (VM), or hypervisors. Furthermore, these solutions cannot be generalized to future side channels. Barring extreme solutions such as single tenancy which sacrifices the multiplexing benefits of cloud computing, such side channels will continue to affect critical services. In this work, we present Nomad, a system that offers vector-agnostic defense against known and future side channels. Nomad envisions a provider-assisted VM migration service, applying the moving target defense philosophy to bound the information leakage due to side channels. In designing Nomad, we make four key contributions: (1) a formal model to capture information leakage via side channels in shared cloud deployments; (2) identifying provider-assisted VM migration as a robust defense for arbitrary side channels; (3) a scalable online VM migration heuristic that can handle large datacenter workloads; and (4) a practical implementation in OpenStack. We show that Nomad is scalable to large cloud deployments, achieves near-optimal information leakage subject to constraints on migration overhead, and imposes minimal performance degradation for typical cloud applications such as web services and Hadoop MapReduce.
Keywords: VM migration, Cloud computing, Cross-VM side-channel attacks (ID#: 15-7542)
URL: http://doi.acm.org/10.1145/2810103.2813706
Isaac Evans, Fan Long, Ulziibayar Otgonbaatar, Howard Shrobe, Martin Rinard, Hamed Okhravi, Stelios Sidiroglou-Douskos; “Control Jujutsu: On the Weaknesses of Fine-Grained Control Flow Integrity,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 901–913. doi:10.1145/2810103.2813646
Abstract: Control flow integrity (CFI) has been proposed as an approach to defend against control-hijacking memory corruption attacks. CFI works by assigning tags to indirect branch targets statically and checking them at runtime. Coarse-grained enforcements of CFI that use a small number of tags to improve the performance overhead have been shown to be ineffective. As a result, a number of recent efforts have focused on fine-grained enforcement of CFI as it was originally proposed. In this work, we show that even a fine-grained form of CFI with an unlimited number of tags and a shadow stack (to check calls and returns) is ineffective in protecting against malicious attacks. We show that many popular code bases such as Apache and Nginx use coding practices that create flexibility in their intended control flow graph (CFG) even when a strong static analyzer is used to construct the CFG. These flexibilities allow an attacker to gain control of the execution while strictly adhering to a fine-grained CFI. We then construct two proof-of-concept exploits that attack an unlimited-tag CFI system with a shadow stack. We also evaluate the difficulties of generating a precise CFG using scalable static analysis for real-world applications. Finally, we perform an analysis on a number of popular applications that highlights the availability of such attacks.
Keywords: code reuse, control flow integrity, memory corruption, return-oriented programming (ID#: 15-7543)
URL: http://doi.acm.org/10.1145/2810103.2813646
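The mechanism this abstract is attacking (tag checks at indirect branches plus a shadow stack for returns) can be sketched in Python for readers unfamiliar with it. This is a hypothetical model of the defense, not the paper's exploits; the paper's point is that even with precise tags, the allowed sets below remain flexible enough to abuse.

```python
class CFIEnforcer:
    """Tag-based CFI with a shadow stack: an indirect call must target a
    function whose tag the CFG permits at that call site, and every
    return must match the address pushed on the shadow stack."""

    def __init__(self, allowed):
        self.allowed = allowed    # call site -> set of permitted tags
        self.shadow = []          # shadow stack of return addresses

    def indirect_call(self, site, target_tag, return_addr):
        if target_tag not in self.allowed.get(site, set()):
            raise RuntimeError(f"CFI violation at {site}")
        self.shadow.append(return_addr)

    def ret(self, addr):
        if not self.shadow or self.shadow.pop() != addr:
            raise RuntimeError("shadow stack mismatch")
```

The site names, tags, and addresses here are invented for illustration; in a real deployment the allowed-tag sets are derived by static analysis of the program's CFG, which is precisely where the paper finds exploitable slack.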
Khilan Gudka, Robert N.M. Watson, Jonathan Anderson, David Chisnall, Brooks Davis, Ben Laurie, Ilias Marinos, Peter G. Neumann, Alex Richardson; “Clean Application Compartmentalization with SOAAP,” CCS ’15 Proceedings of the 22nd
ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 1016–1031. doi:10.1145/2810103.2813611
Abstract: Application compartmentalization, a vulnerability mitigation technique employed in programs such as OpenSSH and the Chromium web browser, decomposes software into isolated components to limit privileges leaked or otherwise available to attackers. However, compartmentalizing applications — and maintaining that compartmentalization — is hindered by ad hoc methodologies and significantly increased programming effort. In practice, programmers stumble through (rather than overtly reason about) compartmentalization spaces of possible decompositions, unknowingly trading off correctness, security, complexity, and performance. We present a new conceptual framework embodied in an LLVM-based tool: the Security-Oriented Analysis of Application Programs (SOAAP) that allows programmers to reason about compartmentalization using source-code annotations (compartmentalization hypotheses). We demonstrate considerable benefit when creating new compartmentalizations for complex applications, and analyze existing compartmentalized applications to discover design faults and maintenance issues arising from application evolution.
Keywords: compartmentalization, security, vulnerability mitigation (ID#: 15-7544)
URL: http://doi.acm.org/10.1145/2810103.2813611
Radu Stoenescu, Vladimir Olteanu, Matei Popovici, Mohamed Ahmed, Joao Martins, Roberto Bifulco, Filipe Manco, Felipe Huici, Georgios Smaragdakis, Mark Handley, Costin Raiciu; “In-Net: In-Network Processing for the Masses,” EuroSys ’15 Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 23. doi:10.1145/2741948.2741961
Abstract: Network Function Virtualization is pushing network operators to deploy commodity hardware that will be used to run middlebox functionality and processing on behalf of third parties: in effect, network operators are slowly but surely becoming in-network cloud providers. The market for in-network clouds is large, ranging from content providers to mobile applications and even end-users. We show in this paper that blindly adopting cloud technologies in the context of in-network clouds is not feasible from both the security and scalability points of view. Instead we propose In-Net, an architecture that allows untrusted endpoints as well as content-providers to deploy custom in-network processing to be run on platforms owned by network operators. In-Net relies on static analysis to allow platforms to check whether the requested processing is safe, and whether it contradicts the operator’s policies. We have implemented In-Net and tested it in the wide-area, supporting a range of use-cases that are difficult to deploy today. Our experience shows that In-Net is secure, scales to many users (thousands of clients on a single inexpensive server), allows for a wide-range of functionality, and offers benefits to end-users, network operators and content providers alike.
Keywords: (not provided) (ID#: 15-7545)
URL: http://doi.acm.org/10.1145/2741948.2741961
Anil Saini, Manoj Singh Gaur, Vijay Laxmi, Priyadarsi Nanda; “sandFOX: Secure Sandboxed and Isolated Environment for Firefox Browser,” SIN ’15 Proceedings of the 8th International Conference on Security of Information and Networks, September 2015, Pages 20–27. doi:10.1145/2799979.2800000
Abstract: Browser functionalities can be widely extended by browser extensions. One of the key features that makes browser extensions so powerful is that they run with “high” privileges. As a consequence, a vulnerable or malicious extension might expose browser and operating system (OS) resources to attacks such as privilege escalation, information stealing, and session hijacking. These resources are browser and OS components accessed through browser extensions, such as information on the web application, arbitrary processes, and even files on the host file system. This paper presents sandFOX (secure sandbox and isolated environment), a set of client-side browser policies for constructing a sandbox environment. sandFOX allows the browser extension to express fine-grained OS-specific security policies that are enforced at runtime. In particular, our proposed policies protect OS resources (e.g., the host file system, network, and processes) from browser attacks. We use Security-Enhanced Linux (SELinux) to tune the OS and build a sandbox that helps reduce potential damage from attacks on OS resources. To show the practicality of sandFOX in a range of settings, we measure its effectiveness against various browser attacks on OS resources. We also show that a sandFOX-enabled browser incurs low overhead on page loading and uses negligible memory when running in the sandbox environment.
Keywords: browser attacks, browser policies, browser security, extension-based attacks (ID#: 15-7546)
URL: http://doi.acm.org/10.1145/2799979.2800000
Gorka Irazoqui, Mehmet Sinan Inci, Thomas Eisenbarth, Berk Sunar; “Lucky 13 Strikes Back,” ASIA CCS ’15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 85–96. doi:10.1145/2714576.2714625
Abstract: In this work we show how the Lucky 13 attack can be resurrected in the cloud by gaining access to a virtual machine co-located with the target. Our version of the attack exploits distinguishable cache access times enabled by VM deduplication to detect dummy function calls that only happen in case of an incorrectly CBC-padded TLS packet. Thereby, we regain a covert channel not considered in the original paper that enables the Lucky 13 attack. In fact, the new side channel is significantly more accurate, thus yielding a much more effective attack. We briefly survey prominent cryptographic libraries for this vulnerability. The attack currently succeeds in compromising PolarSSL, GnuTLS and CyaSSL on deduplication-enabled platforms, while the Lucky 13 patches in OpenSSL, Mozilla NSS and MatrixSSL are immune to this vulnerability. We conclude that any program that follows secret-data-dependent execution flow is exploitable by side-channel attacks, as shown by (but not limited to) our version of the Lucky 13 attack.
Keywords: cross-vm attacks, deduplication, lucky 13 attack, virtualization (ID#: 15-7547)
URL: http://doi.acm.org/10.1145/2714576.2714625
Vera Zaychik Moffitt, Julia Stoyanovich, Serge Abiteboul, Gerome Miklau; “Collaborative Access Control in WebdamLog,” SIGMOD ’15 Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, May 2015,
Pages 197–211. doi:10.1145/2723372.2749433
Abstract: The management of Web users’ personal information is increasingly distributed across a broad array of applications and systems, including online social networks and cloud-based services. Users wish to share data using these systems, but avoiding the risks of unintended disclosures or unauthorized access by applications has become a major challenge. We propose a novel access control model that operates within a distributed data management framework based on datalog. Using this model, users can control access to data they own and control applications they run. They can conveniently specify access control policies providing flexible tuple-level control derived using provenance information. We present a formal specification of the model, an implementation built using an open-source distributed datalog engine, and an extensive experimental evaluation showing that the computational cost of access control is modest.
Keywords: collaborative access control, distributed datalog, personal information management, provenance (ID#: 15-7548)
URL: http://doi.acm.org/10.1145/2723372.2749433
Charlie Hothersall-Thomas, Sergio Maffeis, Chris Novakovic; “BrowserAudit: Automated Testing of Browser Security Features,” ISSTA 2015 Proceedings of the 2015 International Symposium on Software Testing and Analysis, July 2015,
Pages 37–47. doi:10.1145/2771783.2771789
Abstract: The security of the client side of a web application relies on browser features such as cookies, the same-origin policy and HTTPS. As the client side grows increasingly powerful and sophisticated, browser vendors have stepped up their offering of security mechanisms which can be leveraged to protect it. These are often introduced experimentally and informally and, as adoption increases, gradually become standardised (e.g., CSP, CORS and HSTS). Considering the diverse landscape of browser vendors, releases, and customised versions for mobile and embedded devices, there is a compelling need for a systematic assessment of browser security. We present BrowserAudit, a tool for testing that a deployed browser enforces the guarantees implied by the main standardised and experimental security mechanisms. It includes more than 400 fully-automated tests that exercise a broad range of security features, helping web users, application developers and security researchers to make an informed security assessment of a deployed browser. We validate BrowserAudit by discovering both fresh and known security-related bugs in major browsers.
Keywords: Content Security Policy, Cross-Origin Resource Sharing, Same-Origin Policy, Web security, click-jacking, cookies, web browser testing (ID#: 15-7549)
URL: http://doi.acm.org/10.1145/2771783.2771789
Håvard D. Johansen, Eleanor Birrell, Robbert van Renesse, Fred B. Schneider, Magnus Stenhaug, Dag Johansen; “Enforcing Privacy Policies with Meta-Code,” APSys ’15 Proceedings of the 6th Asia-Pacific Workshop on Systems, July 2015, Article
No. 16. doi:10.1145/2797022.2797040
Abstract: This paper proposes a mechanism for expressing and enforcing security policies for shared data. Security policies are expressed as stateful meta-code operations; meta-code can express a broad class of policies, including access-based policies, use-based policies, obligations, and sticky policies with declassification. The meta-code is interposed in the filesystem access path to ensure policy compliance. The generality and feasibility of our approach is demonstrated using a sports analytics prototype system.
Keywords: (not provided) (ID#: 15-7550)
URL: http://doi.acm.org/10.1145/2797022.2797040
Gary T. Leavens; “JML: Expressive Contracts, Specification Inheritance, and Behavioral Subtyping,” PPPJ ’15 Proceedings of the Principles and Practices of Programming on the Java Platform, September 2015, Pages 1–1. doi:10.1145/2807426.2817926
Abstract: JML, the Java Modeling Language, is a formal specification language tailored to the specification of sequential Java classes and interfaces. It features contracts in the style of design by contract (as in Eiffel), as well as more sophisticated features that allow it to be used with a variety of tools from dynamic assertion checking to static verification. The talk will explain JML using some small examples. JML also features a notion of “specification inheritance,” which forces all subtypes to be “behavioral subtypes.” Behavioral subtyping allows client code to validly reason about objects using “supertype abstraction”; for example, when calling a method on an object, the specification for that method in the object’s static type can be used, even though the method call may dynamically dispatch to an overriding method in some subtype. Specification inheritance makes this valid by forcing each such overriding method to obey the specification of that method given in each of its supertypes. Specification inheritance, and thus supertype abstraction, also apply to JML’s invariants, history constraints, and initially clauses. These features make reasoning about object-oriented programs modular. Work on JML has been supported in part by NSF grants CCF0916350, CCF0916715, CCF1017262, and CNS1228695.
Keywords: (not provided) (ID#: 15-7551)
URL: http://doi.acm.org/10.1145/2807426.2817926
David Chisnall, Colin Rothwell, Robert N.M. Watson, Jonathan Woodruff, Munraj Vadera, Simon W. Moore, Michael Roe, Brooks Davis, Peter G. Neumann; “Beyond the PDP-11: Architectural Support for a Memory-Safe C Abstract Machine,” ASPLOS ’15 Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, April 2015, Pages 117–130. doi:10.1145/2786763.2694367
Abstract: We propose a new memory-safe interpretation of the C abstract machine that provides stronger protection to benefit security and debugging. Despite ambiguities in the specification intended to provide implementation flexibility, contemporary implementations of C have converged on a memory model similar to the PDP-11, the original target for C. This model lacks support for memory safety despite well-documented impacts on security and reliability. Attempts to change this model are often hampered by assumptions embedded in a large body of existing C code, dating back to the memory model exposed by the original C compiler for the PDP-11. Our experience with attempting to implement a memory-safe variant of C on the CHERI experimental microprocessor led us to identify a number of problematic idioms. We describe these as well as their interaction with existing memory safety schemes and the assumptions that they make beyond the requirements of the C specification. Finally, we refine the CHERI ISA and abstract model for C, by combining elements of the CHERI capability model and fat pointers, and present a softcore CPU that implements a C abstract machine that can run legacy C code with strong memory protection guarantees.
Keywords: C language, bounds checking, capabilities, compilers, memory protection, memory safety, processor design, security (ID#: 15-7552)
URL: http://doi.acm.org/10.1145/2786763.2694367
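The combination of capabilities and fat pointers described in the abstract can be conveyed with a toy sketch (ours, not CHERI's actual C semantics): a pointer carries its base and bounds, so every dereference is checked and an out-of-bounds access traps instead of corrupting memory.

```python
# Toy fat-pointer sketch (illustrative only; not the CHERI ISA).
class FatPointer:
    def __init__(self, buffer, base=0, length=None):
        self.buffer = buffer
        self.base = base
        self.length = len(buffer) - base if length is None else length

    def load(self, index):
        # Every dereference is bounds-checked against the pointer's capability.
        if not 0 <= index < self.length:
            raise MemoryError("out-of-bounds dereference trapped")
        return self.buffer[self.base + index]

buf = bytearray(b"hello world")
p = FatPointer(buf, base=0, length=5)   # capability over the first 5 bytes
assert p.load(4) == ord("o")
trapped = False
try:
    p.load(5)                           # one past the bound
except MemoryError:
    trapped = True
assert trapped
```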
Yossef Oren, Vasileios P. Kemerlis, Simha Sethumadhavan, Angelos D. Keromytis; “The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 1406–1418. doi:10.1145/2810103.2813708
Abstract: We present a micro-architectural side-channel attack that runs entirely in the browser. In contrast to previous work in this genre, our attack does not require the attacker to install software on the victim’s machine; to facilitate the attack, the victim needs only to browse to an untrusted webpage that contains attacker-controlled content. This makes our attack model highly scalable, and extremely relevant and practical to today’s Web, as most desktop browsers currently used to access the Internet are affected by such side channel threats. Our attack, which is an extension to the last-level cache attacks of Liu et al., allows a remote adversary to recover information belonging to other processes, users, and even virtual machines running on the same physical host with the victim web browser. We describe the fundamentals behind our attack, and evaluate its performance characteristics. In addition, we show how it can be used to compromise user privacy in a common setting, letting an attacker spy on a victim who uses private browsing. Defending against this side channel is possible, but the required countermeasures can exact an impractical cost on benign uses of the browser.
Keywords: cache-timing attacks, covert channel, javascript-based cache attacks, side-channel attacks, user tracking (ID#: 15-7553)
URL: http://doi.acm.org/10.1145/2810103.2813708
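The core primitive behind such attacks is timing memory accesses to detect cache evictions. A conceptual prime+probe sketch follows (in Python rather than the paper's JavaScript; the buffer size and 64-byte line are illustrative assumptions, and Python's timing is far too coarse for a real attack):

```python
import time

# Conceptual prime+probe: touch one byte per cache line to "prime" the
# cache, then time a re-read; a slow probe suggests the victim evicted
# those lines in the interval.
LINE = 64

def probe(buffer, stride=LINE):
    start = time.perf_counter()
    total = 0
    for i in range(0, len(buffer), stride):
        total += buffer[i]              # touch one byte per cache line
    return time.perf_counter() - start, total

buf = bytearray(4 * 1024 * 1024)        # larger than a typical LLC slice
baseline, _ = probe(buf)                # prime
elapsed, _ = probe(buf)                 # probe: compare against baseline
assert elapsed >= 0.0
```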
Sandboxing for Mobile Apps 2015 |
Just as containing or “sandboxing” provides a way to protect against malware in other software, sandboxing can be used to identify and protect against malicious code in iOS and Android apps. For the Science of Security community, sandboxing offers opportunities to advance composability, metrics, and policy-based governance. The work cited here was published in 2015.
Mihai Bucicoiu, Lucas Davi, Razvan Deaconescu, Ahmad-Reza Sadeghi; “XiOS: Extended Application Sandboxing on iOS,” ASIA CCS ’15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 43–54. doi:10.1145/2714576.2714629
Abstract: Until very recently it was widely believed that iOS malware is effectively blocked by Apple’s vetting process and application sandboxing. However, the newly presented severe malicious app attacks (e.g., Jekyll) succeeded in undermining these protection measures and stealing private data, posting Twitter messages, sending SMS, and making phone calls. Currently, no effective defenses against these attacks are known for iOS. The main goal of this paper is to systematically analyze the recent attacks against iOS sandboxing and provide a practical security framework for iOS app hardening which is fully independent of Apple’s vetting process and particularly benefits enterprises in protecting employees’ iOS devices. The contribution of this paper is twofold: First, we show a new and generalized attack that significantly reduces the complexity of the recent attacks against iOS sandboxing. Second, we present the design and implementation of a novel and efficient iOS app hardening service, XiOS, that enables fine-grained application sandboxing, and mitigates the existing as well as our new attacks. In contrast to previous work in this domain (on iOS security), our approach does not require jailbreaking the device. We demonstrate the efficiency and effectiveness of XiOS by conducting several benchmarks as well as fine-grained policy enforcement on real-world iOS applications.
Keywords: binary instrumentation, ios, mobile security, sandboxing (ID#: 15-7554)
URL: http://doi.acm.org/10.1145/2714576.2714629
Xingmin Cui, Jingxuan Wang, Lucas C. K. Hui, Zhongwei Xie, Tian Zeng, S. M. Yiu; “WeChecker: Efficient and Precise Detection of Privilege Escalation Vulnerabilities in Android Apps,” WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 25. doi:10.1145/2766498.2766509
Abstract: Due to the rapid increase of Android apps and their wide usage to handle personal data, a precise and large-scale checker is needed to validate the apps’ permission flow before they are listed on the market. Several tools have been proposed to detect sensitive data leaks in Android apps. But these tools are not applicable to large-scale analysis since they fail to deal with the arbitrary execution orders of different event handlers smartly. Event handlers are invoked by the framework based on the system state, therefore we cannot pre-determine their order of execution. Besides, since all exported components can be invoked by an external app, the execution orders of these components are also arbitrary. A naive way to simulate these two types of arbitrary execution orders yields a permutation of all event handlers in an app. The time complexity is O(n!) where n is the number of event handlers in an app. This leads to a high analysis overhead when n is big. To give an illustration, CHEX [10] found 50.73 entry points of 44 unique class types in an app on average. In this paper we propose an improved static taint analysis to deal with the challenge brought by the arbitrary execution orders without sacrificing the high precision. Our analysis does not need to make permutations and achieves a polynomial time complexity. We also propose to unify the array and map access with object reference by propagating access paths to reduce the number of false positives due to field-insensitivity and over-approximation of array access and map access. We implement a tool, WeChecker, to detect privilege escalation vulnerabilities [7] in Android apps. WeChecker achieves 96% precision and 96% recall on the state-of-the-art test suite DroidBench (for comparison, the precision and recall of FlowDroid [1] are 86% and 93%, respectively). The evaluation of WeChecker on real apps shows that it is efficient (average analysis time of each app: 29.985s) and suitable for large-scale checking.
Keywords: Android, control flow, data flow checking, privilege escalation attack, taint analysis (ID#: 15-7555)
URL: http://doi.acm.org/10.1145/2766498.2766509
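The abstract's key claim, avoiding the O(n!) permutation of event handlers, can be sketched as a fixed-point taint propagation: every handler is re-applied until the shared taint set stabilizes, which over-approximates all execution orders in polynomial time. This is our simplification of the idea; the handler and taint names below are invented.

```python
# Fixed-point taint propagation over event handlers (simplified sketch).
# Each handler maps the current taint set to the set of values it taints.
def fixed_point_taint(handlers, initial_taint):
    taint = set(initial_taint)
    changed = True
    while changed:                      # iterate until no handler adds taint
        changed = False
        for handler in handlers:
            new = handler(taint)
            if not new <= taint:
                taint |= new
                changed = True
    return taint

# Toy handlers: onCreate taints 'imei'; onClick propagates it to 'msg'.
# Listing onClick first shows the result is independent of handler order.
on_create = lambda t: {"imei"}
on_click = lambda t: {"msg"} if "imei" in t else set()
assert fixed_point_taint([on_click, on_create], set()) == {"imei", "msg"}
```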
Roee Hay, Omer Tripp, Marco Pistoia; “Dynamic Detection of Inter-Application Communication Vulnerabilities in Android,” ISSTA 2015 Proceedings of the 2015 International Symposium on Software Testing and Analysis, July 2015, Pages 118–128. doi:10.1145/2771783.2771800
Abstract: A main aspect of the Android platform is Inter-Application Communication (IAC), which enables reuse of functionality across apps and app components via message passing. While a powerful feature, IAC also constitutes a serious attack surface. A malicious app can embed a payload into an IAC message, thereby driving the recipient app into a potentially vulnerable behavior if the message is processed without its fields first being sanitized or validated. We present what to our knowledge is the first comprehensive testing algorithm for Android IAC vulnerabilities. Toward this end, we first describe a catalog, stemming from our field experience, of 8 concrete vulnerability types that can potentially arise due to unsafe handling of incoming IAC messages. We then explain the main challenges that automated discovery of Android IAC vulnerabilities entails, including in particular path coverage and custom data fields, and present simple yet surprisingly effective solutions to these challenges. We have realized our testing approach as the IntentDroid system, which is available as a commercial cloud service. IntentDroid utilizes lightweight platform-level instrumentation, implemented via debug breakpoints (to run atop any Android device without any setup or customization), to recover IAC-relevant app-level behaviors. Evaluation of IntentDroid over a set of 80 top-popular apps has revealed a total of 150 IAC vulnerabilities — some already fixed by the developers following our report — with a recall rate of 92% w.r.t. a ground truth established via manual auditing by a security expert.
Keywords: Android, inter-application communication, mobile, security (ID#: 15-7556)
URL: http://doi.acm.org/10.1145/2771783.2771800
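As a hedged sketch of what driving an exported component with malformed IAC payloads looks like in practice (the package, activity, extra key, and payloads below are all invented; IntentDroid itself uses platform-level instrumentation rather than plain adb commands):

```python
# Hypothetical IAC fuzzing harness: build adb commands that start an
# exported activity with a malformed string extra per payload.
PAYLOADS = ["", "A" * 4096, "../../../etc/passwd", "javascript:alert(1)"]

def fuzz_commands(package, activity, extra_key):
    return [
        f"adb shell am start -n {package}/{activity} --es {extra_key} '{p}'"
        for p in PAYLOADS
    ]

cmds = fuzz_commands("com.example.app", ".MainActivity", "url")
assert len(cmds) == len(PAYLOADS)
assert all(c.startswith("adb shell am start -n com.example.app") for c in cmds)
```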
Yajin Zhou, Kunal Patel, Lei Wu, Zhi Wang, Xuxian Jiang; “Hybrid User-Level Sandboxing of Third-Party Android Apps,” ASIA CCS ’15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 19–30. doi:10.1145/2714576.2714598
Abstract: Users of Android phones increasingly entrust personal information to third-party apps. However, recent studies reveal that many apps, even benign ones, could leak sensitive information without user awareness or consent. Previous solutions either require modifying the Android framework, thus significantly impairing their practical deployment, or could be easily defeated by malicious apps using a native library. In this paper, we propose AppCage, a system that thoroughly confines the run-time behavior of third-party Android apps without requiring framework modifications or root privilege. AppCage leverages two complementary user-level sandboxes to interpose and regulate an app’s access to sensitive APIs. Specifically, dex sandbox hooks into the app’s Dalvik virtual machine instance and redirects each sensitive framework API to a proxy which strictly enforces the user-defined policies, and native sandbox leverages software fault isolation to prevent the app’s native libraries from directly accessing the protected APIs or subverting the dex sandbox. We have implemented a prototype of AppCage. Our evaluation shows that AppCage can successfully detect and block attempts to leak private information by third-party apps, and the performance overhead caused by AppCage is negligible for apps without native libraries and minor for apps with them.
Keywords: android, dalvik hooking, software fault isolation (ID#: 15-7557)
URL: http://doi.acm.org/10.1145/2714576.2714598
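The dex-sandbox idea of redirecting each sensitive API through a policy-enforcing proxy can be mimicked in a toy form (our simplification; the real AppCage hooks the Dalvik VM, and the API and device ID below are invented stand-ins):

```python
# Toy analogue of redirecting a sensitive API through a policy proxy.
def make_proxy(sensitive_api, policy):
    def proxy(*args, **kwargs):
        # The proxy consults a user-defined policy before every call.
        if not policy(sensitive_api.__name__, args):
            raise PermissionError(sensitive_api.__name__ + " blocked by policy")
        return sensitive_api(*args, **kwargs)
    return proxy

def get_device_id():                    # stand-in for a sensitive framework API
    return "359881030314356"

deny_device_id = lambda name, args: name != "get_device_id"
guarded = make_proxy(get_device_id, deny_device_id)

blocked = False
try:
    guarded()
except PermissionError:
    blocked = True
assert blocked
```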
Dan Ping, Xin Sun, Bing Mao; “TextLogger: Inferring Longer Inputs on Touch Screen Using Motion Sensors,” WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 24. doi:10.1145/2766498.2766511
Abstract: Today’s smartphones are equipped with precise motion sensors like accelerometer and gyroscope, which can measure tiny motion and rotation of devices. While they make mobile applications more functional, they also bring risks of leaking users’ privacy. Researchers have found that tap locations on screen can be roughly inferred from motion data of the device. They mostly utilized this side-channel for inferring short input like PIN numbers and passwords, with repeated attempts to boost accuracy. In this work, we study further for longer input inference, such as chat record and e-mail content, anything a user ever typed on a soft keyboard. Since people increasingly rely on smartphones for daily activities, their inputs directly or indirectly expose privacy about them. Thus, it is a serious threat if their input text is leaked. To make our attack practical, we utilize the shared memory side-channel for detecting window events and tap events of a soft keyboard. The up or down state of the keyboard helps trigger our Trojan service for collecting accelerometer and gyroscope data. Machine learning algorithms are used to roughly predict the input text from the raw data and language models are used to further correct the wrong predictions. We performed experiments on two real-life scenarios, which were writing emails and posting Twitter messages, both through mobile clients. Based on the experiments, we show the feasibility of inferring long user inputs to readable sentences from motion sensor data. By applying text mining technology on the inferred text, more sensitive information about the device owners can be exposed.
Keywords: edit distance model, keystroke inference using motion sensors, language model, machine learning, shared memory side-channel, side-channel attacks, smartphone security (ID#: 15-7558)
URL: http://doi.acm.org/10.1145/2766498.2766511
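The correction step, snapping noisy per-tap predictions to dictionary words, can be approximated with plain edit distance (a simplified stand-in for the paper's language models; the vocabulary below is invented):

```python
# Levenshtein edit distance with a rolling one-row table, then a simple
# nearest-word correction over a vocabulary.
def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def correct(word, vocabulary):
    # Snap a noisy prediction to the closest dictionary word.
    return min(vocabulary, key=lambda w: edit_distance(word, w))

vocab = ["hello", "help", "world"]
assert edit_distance("kitten", "sitting") == 3
assert correct("hrllo", vocab) == "hello"
```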
Xin Chen, Sencun Zhu; “DroidJust: Automated Functionality-Aware Privacy Leakage Analysis for Android Applications,” WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 5. doi:10.1145/2766498.2766507
Abstract: Android applications (apps for short) can send out users’ sensitive information against users’ intention. Based on the stats from Genome and Mobile-Sandboxing, 55.8% and 59.7% of Android malware families, respectively, feature privacy leakage. Prior approaches to detecting privacy leakage on smartphones primarily focused on the discovery of sensitive information flows. However, Android apps also send out users’ sensitive information for legitimate functions. Due to the fuzzy nature of the privacy leakage detection problem, we formulate it as a justification problem, which aims to justify if a sensitive information transmission in an app serves any purpose, either for intended functions of the app itself or for other related functions. This formulation makes the problem more distinct and objective, and therefore more feasible to solve than before. We propose DroidJust, an automated approach to justifying an app’s sensitive information transmission by bridging the gap between the sensitive information transmission and application functions. We also implement a prototype of DroidJust and evaluate it with over 6000 Google Play apps and over 300 known malware collected from VirusTotal. Our experiments show that our tool can effectively and efficiently analyze Android apps w.r.t. their sensitive information flows and functionalities, and can greatly assist in detecting privacy leakage.
Keywords: Android security, privacy leakage detection, static taint analysis (ID#: 15-7559)
URL: http://doi.acm.org/10.1145/2766498.2766507
Yuanzhong Xu, Emmett Witchel; “Maxoid: Transparently Confining Mobile Applications with Custom Views of State,” EuroSys ’15 Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 26. doi:10.1145/2741948.2741966
Abstract: We present Maxoid, a system that allows an Android app to process its sensitive data by securely invoking other, untrusted apps. Maxoid provides secrecy and integrity for both the invoking app and the invoked app. For each app, Maxoid presents custom views of private and public state (files and data in content providers) to transparently redirect unsafe data flows and minimize disruption. Maxoid supports unmodified apps with full security guarantees, and also introduces new APIs to improve usability. We show that Maxoid can improve security for popular Android apps with minimal performance overheads.
Keywords: (not provided) (ID#: 15-7560)
URL: http://doi.acm.org/10.1145/2741948.2741966
Daniel R. Thomas, Alastair R. Beresford, Andrew Rice; “Security Metrics for the Android Ecosystem,” SPSM ’15 Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, October 2015, Pages 87–98. doi:10.1145/2808117.2808118
Abstract: The security of Android depends on the timely delivery of updates to fix critical vulnerabilities. In this paper we map the complex network of players in the Android ecosystem who must collaborate to provide updates, and determine that inaction by some manufacturers and network operators means many handsets are vulnerable to critical vulnerabilities. We define the FUM security metric to rank the performance of device manufacturers and network operators, based on their provision of updates and exposure to critical vulnerabilities. Using a corpus of 20 400 devices we show that there is significant variability in the timely delivery of security updates across different device manufacturers and network operators. This provides a comparison point for purchasers and regulators to determine which device manufacturers and network operators provide security updates and which do not. We find that on average 87.7% of Android devices are exposed to at least one of 11 known critical vulnerabilities and, across the ecosystem as a whole, assign a FUM security score of 2.87 out of 10. In our data, Nexus devices do considerably better than average with a score of 5.17; and LG is the best manufacturer with a score of 3.97.
Keywords: android, ecosystems, metrics, updates, vulnerabilities (ID#: 15-7561)
URL: http://doi.acm.org/10.1145/2808117.2808118
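The FUM score combines f (the proportion of devices free from known critical vulnerabilities), u (the proportion running the latest version shipped to that model), and m (the mean number of outstanding unfixed vulnerabilities). The weighting and decay function below follow our reading of the paper and should be treated as an assumption, not the canonical definition:

```python
import math

# FUM-style score on a 0..10 scale (weights are our reading of the paper;
# the exponential in m is clamped to avoid overflow for extreme inputs).
def fum_score(f, u, m):
    return 4 * f + 3 * u + 3 * (2 / (1 + math.exp(min(m, 50.0))))

# A device line with every handset patched and fully up to date scores 10.
assert abs(fum_score(1.0, 1.0, 0.0) - 10.0) < 1e-9
```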
Antonio Bianchi, Yanick Fratantonio, Christopher Kruegel, Giovanni Vigna; “NJAS: Sandboxing Unmodified Applications in Non-Rooted Devices Running Stock Android,” SPSM ’15 Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, October 2015, Pages 27–38. doi:10.1145/2808117.2808122
Abstract: Malware poses a serious threat to the Android ecosystem. Moreover, even benign applications can sometimes constitute security and privacy risks to their users, as they might contain vulnerabilities, or they might perform unwanted actions. Previous research has shown that the current Android security model is not sufficient to protect against these threats, and several solutions have been proposed to enable the specification and enforcing of finer-grained security policies. Unfortunately, many existing solutions suffer from several limitations: they require modifications to the Android framework or root access to the device, they produce a modified version of an existing app that cannot be installed without enabling unsafe options, or they cannot completely sandbox native code components. In this work, we propose a novel approach that aims to sandbox arbitrary Android applications. Our solution, called NJAS, works by executing an Android application within the context of another one, and it achieves sandboxing by means of system call interposition. In this paper, we show that our solution overcomes major limitations that affect existing solutions. In fact, it does not require any modification to the framework, does not require root access to the device, and does not require the user to enable unsafe options. Moreover, the core sandboxing mechanism cannot be evaded by using native code components.
Keywords: android, code sandboxing, mobile security, system call interposition (ID#: 15-7562)
URL: http://doi.acm.org/10.1145/2808117.2808122
Suzanna Schmeelk, Junfeng Yang, Alfred Aho; “Android Malware Static Analysis Techniques,” CISR ’15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 5. doi:10.1145/2746266.2746271
Abstract: During 2014, Business Insider announced that there are over a billion users of Android worldwide. Government officials are also trending towards acquiring Android mobile devices. Google’s application architecture is already ubiquitous and will keep expanding. The beauty of an application-based architecture is the flexibility, interoperability and customizability it provides users. This same flexibility, however, also allows and attracts malware development. This paper provides a horizontal research analysis of techniques used for Android application malware analysis. The paper explores techniques used by Android malware static analysis methodologies. It examines the key analysis efforts, such as examining applications for permission leakage and privacy concerns. The paper concludes with a discussion of some gaps in current malware static analysis research.
Keywords: Android Application Security, Cyber Security, Java, Malware Analysis, Static Analysis (ID#: 15-7563)
URL: http://doi.acm.org/10.1145/2746266.2746271
Yajin Zhou, Lei Wu, Zhi Wang, Xuxian Jiang; “Harvesting Developer Credentials in Android Apps,” WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 23. doi:10.1145/2766498.2766499
Abstract: Developers often integrate third-party services into their apps. To access a service, an app must authenticate itself to the service with a credential. However, credentials in apps are often not properly or adequately protected, and might be easily extracted by attackers. A leaked credential could pose serious privacy and security threats to both the app developer and app users. In this paper, we propose CredMiner to systematically study the prevalence of unsafe developer credential uses in Android apps. CredMiner can programmatically identify and recover (obfuscated) developer credentials unsafely embedded in Android apps. Specifically, it leverages data flow analysis to identify the raw form of the embedded credential, and selectively executes the part of the program that builds the credential to recover it. We applied CredMiner to 36,561 apps collected from various Android markets to study the use of free email services and Amazon AWS. There were 237 and 196 apps that used these two services, respectively. CredMiner discovered that 51.5% (121/237) and 67.3% (132/196) of them were vulnerable. In total, CredMiner recovered 302 unique email login credentials and 58 unique Amazon AWS credentials, and verified that 252 and 28 of these credentials were still valid at the time of the experiments, respectively.
Keywords: Amazon AWS, CredMiner, information flow, static analysis (ID#: 15-7564)
URL: http://doi.acm.org/10.1145/2766498.2766499
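A much simpler pattern-based scan conveys the flavor of the problem (CredMiner itself uses data-flow analysis plus selective execution to recover even obfuscated credentials; the key below is Amazon's documented example access key ID, not a real secret):

```python
import re

# AWS access key IDs follow a well-known 20-character pattern: a 4-letter
# prefix (AKIA or ASIA) followed by 16 uppercase alphanumerics.
AWS_KEY = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_source(text):
    # Return every embedded key-like token found in a source string.
    return [m.group(0) for m in AWS_KEY.finditer(text)]

src = 'AmazonS3Client s3 = new AmazonS3Client("AKIAIOSFODNN7EXAMPLE", secret);'
assert scan_source(src) == ["AKIAIOSFODNN7EXAMPLE"]
```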
Luyi Xing, Xiaolong Bai, Tongxin Li, XiaoFeng Wang, Kai Chen, Xiaojing Liao, Shi-Min Hu, Xinhui Han; “Cracking App Isolation on Apple: Unauthorized Cross-App Resource Access on MAC OS X and iOS,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 31–43. doi:10.1145/2810103.2813609
Abstract: On modern operating systems, applications under the same user are separated from each other, for the purpose of protecting them against malware and compromised programs. Given the complexity of today’s OSes, less clear is whether such isolation is effective against different kinds of cross-app resource access attacks (called XARA in our research). To better understand the problem, on the less-studied Apple platforms, we conducted a systematic security analysis on MAC OS X and iOS. Our research leads to the discovery of a series of high-impact security weaknesses, which enable a sandboxed malicious app, approved by the Apple Stores, to gain unauthorized access to other apps’ sensitive data. More specifically, we found that the inter-app interaction services, including the keychain, WebSocket and NSConnection on OS X and URL Scheme on the MAC OS and iOS, can all be exploited by the malware to steal such confidential information as the passwords for iCloud, email and bank, and the secret token of Evernote. Further, the design of the app sandbox on OS X was found to be vulnerable, exposing an app’s private directory to the sandboxed malware that hijacks its Apple Bundle ID. As a result, sensitive user data, like the notes and user contacts under Evernote and photos under WeChat, have all been disclosed. Fundamentally, these problems are caused by the lack of app-to-app and app-to-OS authentications. To better understand their impacts, we developed a scanner that automatically analyzes the binaries of MAC OS and iOS apps to determine whether proper protection is missing in their code. Running it on hundreds of binaries, we confirmed the pervasiveness of the weaknesses among high-impact Apple apps. Since the issues may not be easily fixed, we built a simple program that detects exploit attempts on OS X, helping protect vulnerable apps before the problems can be fully addressed.
Keywords: MACH-O, OS X, XARA, apple, attack, confuse deputy, cross-app resource access, iOS, program analysis, vulnerability (ID#: 15-7565)
URL: http://doi.acm.org/10.1145/2810103.2813609
Zhui Deng, Brendan Saltaformaggio, Xiangyu Zhang, Dongyan Xu; “iRiS: Vetting Private API Abuse in iOS Applications,” CCS ’15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 44–56. doi:10.1145/2810103.2813675
Abstract: With the booming sale of iOS devices, the number of iOS applications has increased significantly in recent years. To protect the security of iOS users, Apple requires every iOS application to go through a vetting process called App Review to detect uses of private APIs that provide access to sensitive user information. However, recent attacks have shown the feasibility of using private APIs without being detected during App Review. To counter such attacks, we propose a new iOS application vetting system, called iRiS, in this paper. iRiS first applies fast static analysis to resolve API calls. For those that cannot be statically resolved, iRiS uses a novel iterative dynamic analysis approach, which is slower but more powerful compared to static analysis. We have ported Valgrind to iOS and implemented a prototype of iRiS on top of it. We evaluated iRiS with 2019 applications from the official App Store. From these, iRiS identified 146 (7%) applications that use a total number of 150 different private APIs, including 25 security-critical APIs that access sensitive user information, such as device serial number. By analyzing iOS applications using iRiS, we also identified a suspicious advertisement service provider which collects user privacy information in its advertisement serving library. Our results show that, contrary to popular belief, a nontrivial number of iOS applications that violate Apple’s terms of service exist in the App Store. iRiS is effective in detecting private API abuse missed by App Review.
Keywords: application vetting, binary instrumentation, dynamic analysis, forced execution, iOS, private API, static analysis (ID#: 15-7566)
URL: http://doi.acm.org/10.1145/2810103.2813675
Kevin Boos, Ardalan Amiri Sani, Lin Zhong; “Eliminating State Entanglement with Checkpoint-Based Virtualization of Mobile OS Services,” APSys ’15 Proceedings of the 6th Asia-Pacific Workshop on Systems, July 2015, Article No. 20. doi:10.1145/2797022.2797041
Abstract: Mobile operating systems have adopted a service model in which applications access system functionality by interacting with various OS Services in separate processes. These interactions cause application-specific states to be spread across many service processes, a problem we identify as state entanglement. State entanglement presents significant challenges to a wide variety of computing goals: fault isolation, fault tolerance, application migration, live update, and application speculation. We propose CORSA, a novel virtualization solution that uses a lightweight checkpoint/restore mechanism to virtualize OS Services on a per-application basis. This cleanly encapsulates a single application’s service-side states into a private virtual service instance, eliminating state entanglement and enabling the above goals. We present empirical evidence that our ongoing implementation of CORSA on Android is feasible with low overhead, even in the worst case of high frequency service interactions.
Keywords: (not provided) (ID#: 15-7567)
URL: http://doi.acm.org/10.1145/2797022.2797041
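The checkpoint-based virtualization idea can be sketched conceptually (names and structure are ours, not CORSA's implementation): each app gets a private snapshot of a service's state, so one app's faults or migrations never entangle another app's state.

```python
import copy

# Toy per-application service virtualization: a private, checkpointed
# copy of service state per app, with restore rolling back to the snapshot.
class VirtualService:
    def __init__(self, shared_defaults):
        self._per_app = {}
        self._defaults = shared_defaults

    def state_for(self, app_id):
        if app_id not in self._per_app:
            # Checkpoint: give the app its own private snapshot.
            self._per_app[app_id] = copy.deepcopy(self._defaults)
        return self._per_app[app_id]

    def restore(self, app_id):
        # Roll one app back to the clean snapshot without touching others.
        self._per_app.pop(app_id, None)

svc = VirtualService({"clipboard": ""})
svc.state_for("appA")["clipboard"] = "secret"
assert svc.state_for("appB")["clipboard"] == ""   # no state entanglement
svc.restore("appA")
assert svc.state_for("appA")["clipboard"] == ""
```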
Tai-Lun Tseng, Shih-Hao Hung, Chia-Heng Tu; “Migratom.js: A Javascript Migration Framework for Distributed Web Computing and Mobile Devices,” SAC ’15 Proceedings of the 30th Annual ACM Symposium on Applied Computing, April 2015, Pages 798–801. doi:10.1145/2695664.2695987
Abstract: The emerging HTML5 technologies aim to enhance web apps with increased capabilities on mobile devices, as device-to-device computing becomes important in the future. To enable new application scenarios by making HTML5 execution environment dynamic and efficient, we propose a JavaScript framework Migratom.js, which manages task offloading and code migration with the flow-based programming paradigm. Migratom.js accelerates mobile web apps by offloading compute-intensive tasks to superior computing resources and enables the development of distributed HTML5 applications. This paper describes the design and implementation of Migratom.js and conducts case studies to evaluate the proposed framework. The results show that our framework is suitable for augmenting existing and emerging mobile applications.
Keywords: JavaScript, flow-based programming, mobile computing, remote execution (ID#: 15-7568)
URL: http://doi.acm.org/10.1145/2695664.2695987
Ajay Kumar Jha, Seungmin Lee, Woo Jin Lee; “Permission-Based Security in Android Application: From Policy Expert to End User,” RACS Proceedings of the 2015 Conference on Research in Adaptive and Convergent Systems, October 2015, Pages 319–320. doi:10.1145/2811411.2811493
Abstract: Though filtering of malicious applications is performed at the Play Store, some malicious applications escape the filtering process. Thus, it becomes necessary to take strong security measures at other levels. Security in Android can be enforced at the system and application levels. At the system level Android uses a sandboxing technique, while at the application level it uses permissions. In this paper we briefly discuss the permission-based security implemented in Android from three different perspectives: policy expert, developer, and end user.
Keywords: android, permission, policy, privacy, security (ID#: 15-7569)
URL: http://doi.acm.org/10.1145/2811411.2811493
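From the end-user perspective, the permissions an app requests are declared in its AndroidManifest.xml. A minimal sketch (the helper name is ours) lists them:

```python
import xml.etree.ElementTree as ET

# Android manifest attributes live in this XML namespace.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def requested_permissions(manifest_xml):
    # Collect the android:name of every <uses-permission> element.
    root = ET.fromstring(manifest_xml)
    return [elem.get(ANDROID_NS + "name")
            for elem in root.iter("uses-permission")]

manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_SMS"/>
</manifest>"""
assert requested_permissions(manifest) == [
    "android.permission.INTERNET", "android.permission.READ_SMS"]
```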
Tobias Distler, Christopher Bahn, Alysson Bessani, Frank Fischer, Flavio Junqueira; “Extensible Distributed Coordination,” EuroSys ’15 Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 10. doi:10.1145/2741948.2741954
Abstract: Most services inside a data center are distributed systems requiring coordination and synchronization in the form of primitives like distributed locks and message queues. We argue that extensibility is a crucial feature of the coordination infrastructures used in these systems. Without the ability to extend the functionality of coordination services, applications might end up using sub-optimal coordination algorithms, possibly leading to low performance. Adding extensibility, however, requires mechanisms that constrain extensions to be able to make reasonable security and performance guarantees. We propose a scheme that enables extensions to be introduced and removed dynamically in a secure way. To avoid performance overheads due to poorly designed extensions, it constrains the access of extensions to resources. Evaluation results for extensible versions of ZooKeeper and DepSpace show that it is possible to increase the throughput of a distributed queue by more than an order of magnitude (17x for ZooKeeper, 24x for DepSpace) while keeping the underlying coordination kernel small.
Keywords: DepSpace, ZooKeeper, coordination services, distributed algorithms, extensibility (ID#: 15-7570)
URL: http://doi.acm.org/10.1145/2741948.2741954
Tamper Resistance 2015 |
Tamper resistance is an important element for composability of software systems and for security of cyber physical system resilience. The research articles cited here were presented in 2015.
Joseph Gan; Roddy Kok; Pankaj Kohli; Yun Ding; Benjamin Mah, “Using Virtual Machine Protections to Enhance Whitebox Cryptography,” in Software Protection (SPRO), 2015 IEEE/ACM 1st International Workshop on, vol., no., pp. 17–23, 19–19 May 2015. doi:10.1109/SPRO.2015.12
Abstract: Since attackers can gain full control of the mobile execution environment, they are able to examine the inputs, outputs, and, with the help of a disassembler/debugger, the result of every intermediate computation a cryptographic algorithm carries out. Essentially, attackers have total visibility into the cryptographic operation. Whitebox cryptography aims at protecting keys from being disclosed in software implementations. With theoretically unbounded resources, a determined attacker is able to recover any confidential keys and data. A strong whitebox cipher implementation as the cornerstone of security is essential for the overall security in mobile environments. Our goal is to provide an increased degree of protection given the constraints of a software solution and the resource-constrained, hostile-host environments. We seek neither perfect protection nor long-term guarantees, but rather a practical level of protection to balance cost, security and usability. Regular software updates can be applied such that the protection only needs to withstand a limited period of time. V-OS operates as a virtual machine (VM) within the native mobile operating system to provide a secure software environment within which to perform critical processes and computations for a mobile app.
Keywords: cryptography; mobile computing; virtual machines; V-OS; confidential keys; cryptographic algorithm; mobile application; mobile execution environment; secure software environment; software implementation; virtual machine protection; whitebox cipher implementation; whitebox cryptography; Androids; Encryption; Microprogramming; Mobile communication; Object recognition; Virtual machining; Anti-Debugging; Anti-Reverse Engineering; Code Obfuscation; Data Obfuscation; Fingerprinting; Mobile Code; Software Licensing; Software Renewability; Software Tamper Resistance; Virtual Machine Protections (VMP); Whitebox Cryptography (WBC) (ID#: 15- )
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7174806&isnumber=7174794
Junod, P.; Rinaldini, J.; Wehrli, J.; Michielin, J., “Obfuscator-LLVM — Software Protection for the Masses,” in Software Protection (SPRO), 2015 IEEE/ACM 1st International Workshop on, vol., no., pp. 3–9, 19–19 May 2015. doi:10.1109/SPRO.2015.10
Abstract: Software security with respect to reverse-engineering is a challenging discipline that has been researched for several years and which is still active. At the same time, this field is inherently practical, and thus of industrial relevance: indeed, protecting a piece of software against tampering, malicious modifications or reverse-engineering is a very difficult task. In this paper, we present and discuss a software obfuscation prototype tool based on the LLVM compilation suite. Our tool is built as different passes, where some of them have been open-sourced and are freely available, that work on the LLVM Intermediate Representation (IR) code. This approach brings several advantages, including the fact that it is language-agnostic and mostly independent of the target architecture. Our current prototype supports basic instruction substitutions, insertion of bogus control-flow constructs mixed with opaque predicates, control-flow flattening, procedures merging as well as a code tamper-proofing algorithm embedding code and data checksums directly in the control-flow flattening mechanism.
Keywords: reverse engineering; security of data; LLVM compilation suite; LLVM intermediate representation code; code tamper-proofing algorithm embedding code; control-flow flattening mechanism; obfuscator-LLVM; reverse-engineering; software obfuscation prototype tool; software protection; software security; Cryptography; Merging; Resistance; Routing; Software; Software algorithms (ID#: 15-7736)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7174804&isnumber=7174794
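Two of the transformations this abstract lists, instruction substitution and bogus control flow guarded by opaque predicates, can be sketched at source level. The real tool operates on LLVM IR, not Python; the identity and predicate below are standard textbook examples rather than the paper's exact passes.

```python
# Source-level illustration of two Obfuscator-LLVM-style transformations.

def add_plain(a, b):
    return a + b

def add_substituted(a, b):
    # Instruction substitution: a + b == (a ^ b) + 2 * (a & b).
    # xor gives the carry-less sum; the shifted "and" reinjects the carries.
    return (a ^ b) + 2 * (a & b)

def add_with_opaque_predicate(a, b):
    # Bogus control flow guarded by an opaque predicate: x * (x + 1) is
    # always even, so the second branch is dead code that only clutters
    # the control-flow graph for a reverse engineer.
    x = a
    if (x * (x + 1)) % 2 == 0:  # always true
        return (a ^ b) + 2 * (a & b)
    return a - b                # never executed
```

All three functions compute the same sum; only the shape of the code (and thus the disassembly) differs.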
Nozaki, Y.; Asahi, K.; Yoshikawa, M., “Countermeasure of TWINE Against Power Analysis Attack,” in Future of Electron Devices, Kansai (IMFEDK), 2015 IEEE International Meeting for, vol., no., pp. 68–69, 4–5 June 2015. doi:10.1109/IMFEDK.2015.7158553
Abstract: Lightweight block ciphers, which can be embedded using a small area, have attracted much attention. This study proposes a new countermeasure for TWINE, which is one of the most popular lightweight block ciphers. The proposed method masks the correlation between power consumption and confidential information by adding random numbers to intermediate data of the encryption. Experiments prove the effective tamper-resistance of the proposed method.
Keywords: cryptography; random number generation; TWINE; confidential information; encryption; lightweight block cipher; power analysis attack; power consumption; random number; tamper-resistance; Ciphers; Correlation; Encryption; Hamming distance; Power demand; Registers; lightweight block cipher; power analysis of semiconductor; security of semiconductor; tamper resistance (ID#: 15-7737)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158553&isnumber=7158481
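The masking idea described here, adding random numbers to intermediate data so that power traces decorrelate from secret-dependent values, can be illustrated for a single linear step. This is a toy sketch: TWINE's actual countermeasure must also mask its S-box layer, and the function below is illustrative rather than taken from the paper.

```python
import secrets

# Toy Boolean masking of a key-addition step on 4-bit nibbles (TWINE is a
# nibble-oriented cipher). The device only ever handles the masked state,
# so instantaneous power consumption does not correlate with p ^ k.

def masked_key_xor(plaintext_nibble, key_nibble):
    r = secrets.randbelow(16)             # fresh 4-bit mask per execution
    masked_state = plaintext_nibble ^ r   # secret-independent distribution
    masked_out = masked_state ^ key_nibble
    return masked_out ^ r                 # remove the mask at the end
```

Because XOR is linear, the mask cancels and the functional result is unchanged while every intermediate value is randomized.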
Yoshikawa, M.; Tsukadaira, T.; Kumaki, T., “Design and LSI Prototyping of Security Module with Hardware Trojan,” in Consumer Electronics (ICCE), 2015 IEEE International Conference on, vol., no., pp. 426–427, 9–12 Jan. 2015. doi:10.1109/ICCE.2015.7066472
Abstract: To examine the tamper resistance of consumer security products using cryptographic circuits, the present study develops a countermeasure-annulled hardware Trojan and manufactures its prototype as an ASIC using 0.18 μm CMOS. The present study also verifies the validity and effect of the hardware Trojan on the prototype LSI by performing evaluation tests.
Keywords: CMOS integrated circuits; application specific integrated circuits; consumer products; cryptography; integrated circuit manufacture; large scale integration; rapid prototyping (industrial); ASIC; CMOS technology; LSI prototyping; Trojan; consumer security product; countermeasure-annulled hardware; cryptographic circuit; security module; size 0.18 μm; tamper resistance; Circuit faults; Conferences; Encryption; Hardware; Large scale integration; Trojan horses (ID#: 15-7738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066472&isnumber=7066289
Nozaki, Y.; Asai, T.; Asahi, K.; Yoshikawa, M., “Power Analysis for Clock Fluctuation LSI,” in Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015 16th IEEE/ACIS International Conference on, vol., no., pp. 1–4, 1–3 June 2015. doi:10.1109/SNPD.2015.7176195
Abstract: Several measures against power analysis attacks have been proposed. A clock fluctuation LSI, which achieves tamper resistance against electromagnetic analysis attacks, is one of the popular measures. The present study proposes an alignment method which can analyze the clock fluctuation LSI. The proposed method corrects the shift of power consumption waveforms along the time axis caused by periodic fluctuation of the clocks. Evaluation experiments using an actual device prove the validity of the proposed method.
Keywords: cryptography; standards; AES; advanced encryption standard; clock fluctuation LSI; electromagnetic analysis attacks; power consumption waveforms; tamper resistance; Clocks; Delays; Encryption; Fluctuations; Large scale integration; Power demand; clock fluctuation; power analysis; security (ID#: 15-7739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176195&isnumber=7176160
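The time-axis correction described in the abstract can be approximated with a correlation-based search for the shift that best realigns a trace with a reference. This is a minimal stand-in, not the paper's algorithm: real traces are long floating-point arrays, and periodic clock fluctuation would call for piecewise rather than global alignment.

```python
# Toy realignment of a power trace delayed by clock fluctuation.

def best_shift(reference, trace, max_shift):
    """Return the shift of `trace` that maximizes its correlation with
    `reference`, searched over [-max_shift, max_shift]."""
    def corr(a, b):
        return sum(x * y for x, y in zip(a, b))
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        # Positive shift drops leading samples; negative shift pads zeros.
        shifted = trace[s:] if s >= 0 else [0.0] * (-s) + trace
        score = corr(reference, shifted)
        if score > best_score:
            best, best_score = s, score
    return best

def align(trace, shift):
    return trace[shift:] if shift >= 0 else [0.0] * (-shift) + trace
```

For a trace that is the reference delayed by two samples, the search recovers shift 2 and `align` restores the original timing.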
Shiozaki, M.; Kubota, T.; Nakai, T.; Takeuchi, A.; Nishimura, T.; Fujino, T., “Tamper-Resistant Authentication System with Side-Channel Attack Resistant AES and PUF Using MDR-ROM,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 1462–1465, 24–27 May 2015. doi:10.1109/ISCAS.2015.7168920
Abstract: As threats to security devices, side-channel attacks (SCAs) and invasive attacks have been identified in the last decade. An SCA reveals a secret key on a cryptographic circuit by measuring power consumption or electromagnetic radiation during the cryptographic operations. We have proposed the MDR-ROM scheme as a low-power and small-area countermeasure against SCAs. Meanwhile, secret data in a nonvolatile memory can be analyzed by invasive attacks, and the cryptographic device counterfeited and cloned by an adversary. We proposed to combine the MDR-ROM scheme with the Physical Unclonable Function (PUF) technique, which is expected to serve as a countermeasure against counterfeiting, and the prototype chip was fabricated with a 180 nm CMOS technology. In addition, a keyless entry demonstration system was produced in order to present the effectiveness of the SCA resistance and the PUF technique. Our experiments confirmed that this demonstration system achieved sufficient tamper resistance.
Keywords: CMOS integrated circuits; cryptography; random-access storage; read-only storage;180nm CMOS technology; AES; MDR-ROM scheme; PUF; SCA; cryptographic circuit; cryptographic operations; electromagnetic radiation measurement; invasive attacks; low-power counter-measure; nonvolatile memory; physical unclonable function technique; power consumption measurement; secret key; security devices; side-channel attack resistant; small-area counter-measure; tamper-resistant authentication system; Authentication; Correlation; Cryptography; Large scale integration; Power measurement; Read only memory; Resistance; IO-masked dual-rail ROM (MDR-ROM); Side channel attacks (SCA); physical unclonable function (PUF) (ID#: 15-7740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168920&isnumber=7168553
Wang, X.; Karri, R., “Reusing Hardware Performance Counters to Detect and Identify Kernel Control-Flow Modifying Rootkits,” in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 35, no. 3, pp. 485–498, March 2016. doi:10.1109/TCAD.2015.2474374
Abstract: Kernel rootkits are formidable threats to computer systems. They are stealthy and can have unrestricted access to system resources. This paper presents NumChecker, a new Virtual Machine Monitor (VMM) based framework to detect and identify control-flow modifying kernel rootkits in a guest Virtual Machine (VM). NumChecker detects and identifies malicious modifications to a system call in the guest VM by measuring the number of certain hardware events that occur during the system call’s execution. To automatically count these events, NumChecker leverages the Hardware Performance Counters (HPCs), which exist in modern processors. By using HPCs, the checking cost is significantly reduced and the tamper-resistance is enhanced. We implement a prototype of NumChecker on Linux with the Kernel-based Virtual Machine (KVM). An HPC-based two-phase kernel rootkit detection and identification technique is presented and evaluated on a number of real-world kernel rootkits. The results demonstrate its practicality and effectiveness.
Keywords: Hardware; Kernel; Linux; Monitoring; Radiation detectors; Virtual machining; Virtualization; Controlflow Modifying Kernel Rootkits; Hardware Performance Counters; Rootkit Detection and Identification (ID#: 15-7741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7229276&isnumber=6917053
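NumChecker's core check, comparing hardware-event counts observed during a system call against a clean-VM baseline, can be sketched as a simple threshold test. The event names, counts, and tolerance below are illustrative; the paper's two-phase technique reads real hardware performance counters from within KVM.

```python
# Toy version of an HPC-based rootkit check: a system call whose event
# counts deviate markedly from the clean baseline is flagged, since a
# control-flow-modifying rootkit executes extra (or different) code.

BASELINE = {"instructions": 12000, "branches": 2100}  # illustrative profile

def is_suspicious(observed, tolerance=0.10):
    """Flag the call if any monitored event count deviates from the
    baseline by more than the given relative tolerance."""
    for event, expected in BASELINE.items():
        if abs(observed[event] - expected) > tolerance * expected:
            return True
    return False
```

A clean run with counts near the baseline passes, while a hooked system call that executes thousands of extra instructions is flagged.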
Yu-Shen Ho; Ruay-Lien Ma; Cheng-En Sung; I-Chen Tsai; Li-Wei Kang; Chia-Mu Yu, “Deterministic Detection of Node Replication Attacks in Sensor Networks,” in Consumer Electronics - Taiwan (ICCE-TW), 2015 IEEE International Conference on, vol., no., pp. 468–469, 6–8 June 2015. doi:10.1109/ICCE-TW.2015.7217002
Abstract: In Wireless Sensor Networks (WSNs), because sensor nodes are not equipped with tamper-resistant hardware, they are vulnerable to capture and compromise by an adversary. By launching a node replication attack, the adversary can place replicas of captured sensor nodes back into the sensor network so as to eavesdrop on the transmitted messages or compromise the functionality of the network. Although several protocols have been proposed to defend against node replication attacks, all of them can only detect the attacks probabilistically. In this paper, we propose Quorum-Based Multicast (QBM) and Star-shape Line-Selected Multicast (SLSM) to detect node replication attacks, both of which can deterministically detect replicas.
Keywords: multicast communication; telecommunication security; wireless sensor networks; deterministic detection; node replication attacks; quorum-based multicast; star-shape line-selected multicast; Cloning; Hardware; Mobile communication; Probabilistic logic; Protocols; Security; Wireless sensor networks (ID#: 15-7742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7217002&isnumber=7216784
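The deterministic-detection idea behind protocols like QBM can be sketched with a witness that records location claims: any node ID observed at two different locations exposes a replica. The class below is a toy model; the actual protocols define which witnesses (the quorum) receive each signed claim so that conflicting claims are guaranteed to meet at some witness.

```python
# Toy witness for replica detection in a sensor network. In the real
# protocols, claims are signed and multicast to a quorum of witnesses;
# here a single witness simply remembers where each node ID was first seen.

class Witness:
    def __init__(self):
        self.claims = {}  # node_id -> first claimed location

    def receive_claim(self, node_id, location):
        """Record a location claim; return True if it exposes a replica
        (the same node ID claiming a second, different location)."""
        seen = self.claims.setdefault(node_id, location)
        return seen != location
```

Distinct nodes at distinct locations raise no alarm, but a replica of node 7 placed elsewhere is caught deterministically, not probabilistically.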
Oliveira, L.C.; Pereira, E.G.; Oliveira, R.C.; Morais, M.R.A.; Lima, A.M.N.; Neff, H., “SPR Sensor for Tampering Detection in Biofuels,” in Instrumentation and Measurement Technology Conference (I2MTC), 2015 IEEE International, vol., no., pp. 1471–1476, 11–14 May 2015. doi:10.1109/I2MTC.2015.7151494
Abstract: This work presents a sensor based on the surface plasmon resonance phenomenon combined with dc-sheet resistance monitoring for detecting tampering of ethanol fuel. To demonstrate the feasibility of the proposed sensor, hydrated and anhydrous ethanol fuels were tested. Tampering ethanol with methanol and with increasing water content has been used to evaluate the capabilities of the proposed sensing arrangement.
Keywords: biofuel; chemical sensors; surface plasmon resonance; anhydride ethanol fuel; biofuels; dc-sheet resistance monitoring; ethanol fuel; hydrated ethanol fuel; sensing arrangement; surface plasmon resonance phenomenon; tampering detection; water content; Ethanol; Fuels; Metals; Optical refraction; Optical sensors; Optical surface waves; Refractive index (ID#: 15-7743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7151494&isnumber=7151225
Dhole, V.S.; Patil, N.N., “Self Embedding Fragile Watermarking for Image Tampering Detection and Image Recovery Using Self Recovery Blocks,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 752–757, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.150
Abstract: Fragile watermarking is used for authentication and content integrity verification. This paper introduces a modified fragile watermarking technique for image recovery, which can detect a tampered image and recover its tampered region. The modified approach provides resistance to various attacks such as the birthday attack, collage attack, and quantization attacks. Using non-sequential and randomized block chaining, created on the basis of a secret key, the modified technique achieves a high degree of recovery of tampered regions. In this technique, watermark information and recovery information for an image block are embedded into the image block, and these blocks are linked with the next randomly generated block of the image. To obtain the first watermark image, the technique uses the original image and the watermarked image; to obtain the self-embedded image, the shuffled original image is merged onto the original image to yield the final shuffled image. Finally, the first watermark image is merged with the shuffled image to produce the final watermarked image. During recovery, the reverse process is followed to obtain the original image from the tampered image, and tampered blocks are recovered by comparing block-by-block mean values. The technique can be used for color as well as gray-scale images. The implementation shows that the proposed technique is a promising alternative approach for effectively recovering images from tampered areas.
Keywords: image colour analysis; image watermarking; object detection; authentication; birthday attack; collage attack; color image; content integrity verification; gray scale image; image recovery; image tampering detection; quantization attack; self-embedded image; self-embedding fragile watermarking; self-recovery image blocks; Authentication; Digital images; Discrete cosine transforms; Indexes; Robustness; Sequential analysis; Watermarking; Self embedding; block-chaining; image recovery; shuffled; tamper detection (ID#: 15-7744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155948&isnumber=7155781
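A toy model of the keyed block chaining described here: each block holds the recovery datum (the mean) of the block it guards, with the guard assignment derived from a secret key, so a tampered block is both detectable and roughly repairable from the stored mean. Real schemes embed this data in pixel LSB planes and use richer recovery information; everything below, including the use of block means, is illustrative.

```python
import random

# Toy keyed block chaining for tamper detection. The "image" is a list of
# blocks, each a list of pixel values.

def chain(n_blocks, key):
    """Keyed permutation cycle: block order[i] guards block order[i+1]."""
    order = list(range(n_blocks))
    random.Random(key).shuffle(order)  # deterministic for a given key
    return {order[i]: order[(i + 1) % n_blocks] for i in range(n_blocks)}

def embed(blocks, key):
    """Watermark: holder block index -> mean of the block it guards."""
    link = chain(len(blocks), key)
    return {holder: sum(blocks[guarded]) / len(blocks[guarded])
            for holder, guarded in link.items()}

def detect(blocks, watermark, key):
    """Return indices of blocks whose current mean no longer matches the
    copy held by their chained partner."""
    link = chain(len(blocks), key)
    tampered = []
    for holder, guarded in link.items():
        mean = sum(blocks[guarded]) / len(blocks[guarded])
        if abs(mean - watermark[holder]) > 1e-9:
            tampered.append(guarded)
    return sorted(tampered)
```

Without the key an attacker cannot tell which block guards which, which is what defeats simple cut-and-paste (collage-style) forgeries.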
Rajendran, J.; Karri, R.; Wendt, J.B.; Potkonjak, M.; McDonald, N.; Rose, G.S.; Wysocki, B., “Nano Meets Security: Exploring Nanoelectronic Devices for Security Applications,” in Proceedings of the IEEE, vol. 103, no. 5, pp. 829–849, May 2015. doi:10.1109/JPROC.2014.2387353
Abstract: Information security has emerged as an important system and application metric. Classical security solutions use algorithmic mechanisms that address a small subset of emerging security requirements, often at high-energy and performance overhead. Further, emerging side-channel and physical attacks can compromise classical security solutions. Hardware security solutions overcome many of these limitations with less energy and performance overhead. Nanoelectronics-based hardware security preserves these advantages while enabling conceptually new security primitives and applications. This tutorial paper shows how one can develop hardware security primitives by exploiting the unique characteristics such as complex device and system models, bidirectional operation, and nonvolatility of emerging nanoelectronic devices. This paper then explains the security capabilities of several emerging nanoelectronic devices: memristors, resistive random-access memory, contact-resistive random-access memory, phase change memories, spin torque-transfer random-access memory, orthogonal spin transfer random access memory, graphene, carbon nanotubes, silicon nanowire field-effect transistors, and nanoelectronic mechanical switches. Further, the paper describes hardware security primitives for authentication, key generation, data encryption, device identification, digital forensics, tamper detection, and thwarting reverse engineering. Finally, the paper summarizes the outstanding challenges in using emerging nanoelectronic devices for security.
Keywords: carbon nanotubes; cryptography; digital forensics; elemental semiconductors; field effect transistors; graphene devices; microswitches; nanoelectronics; nanowires; phase change memories; resistive RAM; silicon; C; Si; algorithmic mechanisms; authentication; bidirectional operation; classical security solutions; complex device; contact resistive random access memory; data encryption; device identification; graphene; hardware security primitives; information security; key generation; memristors; nanoelectronic devices; nanoelectronic mechanical switches; orthogonal spin transfer random access memory; physical attacks; security requirements; side channel; silicon nanowire field effect transistors; spin torque-transfer random-access memory; tamper detection; thwarting reverse engineering; CMOS integrated circuits; Digital forensics; Memristors; Nanoelectronics; Nanoscale devices; Network security; Phase change materials; Random access memory; Resistance; Emerging technologies; PCMs; hardware security; physical unclonable functions (ID#: 15-7745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106562&isnumber=7112213
Mercier, H.; Augier, M.; Lenstra, A.K., “STEP-Archival: Storage Integrity and Anti-Tampering Using Data Entanglement,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 1590–1594, 14–19 June 2015. doi:10.1109/ISIT.2015.7282724
Abstract: We present STEP-archives, a model for censorship-resistant storage systems where an attacker cannot censor or tamper with data without causing a large amount of obvious collateral damage. MDS erasure codes are used to entangle unrelated data blocks, in addition to providing redundancy against storage failures. We show a tradeoff for the attacker between attack complexity, irrecoverability, and collateral damage. We also show that the system can efficiently recover from attacks with imperfect irrecoverability, making the problem asymmetric between attackers and defenders. Finally, we present sample heuristic attack algorithms that are efficient and irrecoverable (but not collateral-damage-optimal), and demonstrate how some strategies and parameter choices allow to resist these sample attacks.
Keywords: data integrity; data protection; information retrieval; redundancy; STEP-archival; antitampering; censorship-resistant storage system; data entanglement; heuristic attack algorithm; imperfect irrecoverability; storage integrity; Censorship; Complexity theory; Decoding; Grippers; Memory; Resistance; Security; Distributed storage; MDS codes; anti-tampering (ID#: 15-7746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282724&isnumber=7282397
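The entanglement idea can be sketched with plain XOR parity standing in for the paper's MDS erasure codes: a new block is archived only as its XOR with older blocks, so censoring it cleanly would require also destroying or recomputing its entangled partners. This sketch loses the redundancy properties of real MDS codes; it is only meant to show the anti-tampering asymmetry between attacker and defender.

```python
# Toy data entanglement with XOR parity (a stand-in for MDS erasure codes).
# All blocks are byte strings of equal length.

def entangle(new_block, old_blocks):
    """Archive a block only as its XOR with previously stored blocks."""
    out = bytes(new_block)
    for old in old_blocks:
        out = bytes(a ^ b for a, b in zip(out, old))
    return out

def recover(entangled, old_blocks):
    """Recovery needs every entangled partner intact; XOR is an involution,
    so applying the same partners again restores the original block."""
    return entangle(entangled, old_blocks)
```

Tampering with any partner block makes the archived block unrecoverable, which is exactly the collateral-damage cost the paper imposes on a censor.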
Text Analytics 2015 |
The term “text analytics” refers to linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for intelligence, exploratory data analysis, research, or investigation. The research cited here focuses on large volumes of text mined to identify insider threats, intrusions, and malware detection. It is of interest to the Science of Security community relative to metrics, scalability and composability, and human factors. Research cited here was published in 2015.
Hsia-Ching Chang; Chen-Ya Wang, “Cloud Incident Data Analytics: Change-Point Analysis and Text Visualization,” in System Sciences (HICSS), 2015 48th Hawaii International Conference on, vol., no., pp. 5320–5330, 5–8 Jan. 2015. doi:10.1109/HICSS.2015.626
Abstract: When security incidents occur in a cloud computing environment, it constitutes a wake-up call to acknowledge potential threats and risks. Compared to other types of incidents (e.g., extreme climate events, terror attacks and natural disasters), incidents pertaining to cloud security issues seem to receive little attention from academia. This study aims to provide a starting point for further studies via analytics. Bayesian change-point analysis, often employed to detect abrupt regime shifts in a variety of events, was performed to identify the salient changes in the cloud incident count data retrieved from the Cloutage.org database. Additionally, to get to the root of such incidents, this study utilized text mining techniques with word clouds to visualize non-obvious patterns in the summaries of cloud incidents. Both quantitative and qualitative analyses for exploring cloud incident data offer new insights in finding commonality and differences among the causes of cloud vulnerabilities over time.
Keywords: Bayes methods; cloud computing; data analysis; data mining; data visualisation; security of data; text analysis; Bayesian change-point analysis; Cloutage.org database; abrupt regime shifts; change-point analysis; cloud computing environment; cloud incident count data; cloud incident data analytics; cloud security issues; cloud vulnerabilities; nonobvious pattern visualization; security incidents; text mining techniques; text visualization; wake-up call; word clouds; Bayes methods; Cloud computing; Computational modeling; Data analysis; Data visualization; Security; Tag clouds; cloud computing security; cloud incidents (ID#: 15-7592)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070455&isnumber=7069647
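A minimal frequentist stand-in for the change-point step: pick the split of a count series that minimizes the total squared error around the two segment means. The paper uses Bayesian change-point analysis on the Cloutage.org incident counts; the function below only conveys the shape of the problem, not the paper's method.

```python
# Toy single change-point detector for a series of incident counts.

def change_point(counts):
    """Return the split index k minimizing the summed squared error of the
    two segments counts[:k] and counts[k:] around their own means."""
    def sse(segment):
        if not segment:
            return 0.0
        mean = sum(segment) / len(segment)
        return sum((x - mean) ** 2 for x in segment)

    return min(range(1, len(counts)),
               key=lambda k: sse(counts[:k]) + sse(counts[k:]))
```

On a series that jumps from a low regime to a high one, the detector recovers the index where the regime shift occurs.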
Skillicorn, D.B., “Empirical Assessment of Al Qaeda, ISIS, and Taliban Propaganda,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 61–66, 27–29 May 2015. doi:10.1109/ISI.2015.7165940
Abstract: The jihadist groups AQAP, ISIS, and the Taliban have all produced glossy English magazines designed to influence Western sympathizers. We examine these magazines empirically with respect to models of the intensity of informative, imaginative, deceptive, jihadist, and gamification language. This allows their success to be estimated and their similarities and differences to be exposed. We also develop and validate an empirical model of propaganda; according to this model, Dabiq, ISIS’s magazine, ranks highest of the three.
Keywords: natural language processing; social sciences computing; AQAP; Al Qaeda; ISIS; gamification language; glossy English magazines; jihadist groups; taliban propaganda; Complexity theory; Decision support systems; Corpus linguistics; radicalization; terrorism; text analytics (ID#: 15-7593)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165940&isnumber=7165923
Martinez, E.; Fallon, E.; Fallon, S.; MingXue Wang, “ADAMANT — An Anomaly Detection Algorithm for MAintenance and Network Troubleshooting,” in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, vol., no., pp. 1292–1297, 11–15 May 2015. doi:10.1109/INM.2015.7140484
Abstract: Network operators are increasingly using analytic applications to improve the performance of their networks. Telecommunications analytical applications typically use SQL and Complex Event Processing (CEP) for data processing, network analysis and troubleshooting. Such approaches are hindered as they require an in-depth knowledge of both the telecommunications domain and telecommunications data structures in order to create the required queries. Valuable information contained in free form text data fields such as “additional_info”, “user_text” or “problem_text” can also be ignored. This work proposes An Anomaly Detection Algorithm for MAintenance and Network Troubleshooting (ADAMANT), a text analytic based network anomaly detection approach. Once telecommunications data records have been indexed, ADAMANT uses distance based outlier detection within sliding windows to detect abnormal terms at configurable time intervals. Traditional approaches focus on a specific type of record and create specific cause and effect rules. With the ADAMANT approach all free form text fields of alarms, logs, etc. are treated as text documents similar to Twitter feeds. All documents within a window represent a snapshot of the network state that is processed by ADAMANT. The ADAMANT approach focuses on text analytics to provide automated analysis without the requirement for SQL/CEP queries. Such an approach provides distinct network insights in comparison to traditional approaches.
Keywords: performance evaluation; search engines; security of data; text analysis; ADAMANT; CEP; SQL; Twitter feeds; abnormal terms detection; additional_info; anomaly detection algorithm for maintenance and network troubleshooting; complex event processing; configurable time intervals; data processing; distance based outlier detection; network analysis; network operators; network performance; network state; problem_text; search engine; sliding windows; telecommunications analytical applications; telecommunications data records; telecommunications data structures; telecommunications domain; text analytic based network anomaly detection approach; text documents; user_text; Algorithm design and analysis; Big data; Conferences; Detection algorithms; Indexes; Search engines; Telecommunications; distance based; outlier; search Engine; sliding windows; text anomaly (ID#: 15-7594)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140484&isnumber=7140257
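ADAMANT's window-based detection can be caricatured as follows: treat the free-form text fields in each time window as one document, and flag terms whose count in the current window far exceeds their average over previous windows. The threshold rule below is illustrative, not the paper's distance-based outlier measure.

```python
from collections import Counter

# Toy sliding-window term anomaly check over free-form text fields
# (alarm text, log messages, etc.), each window given as a list of terms.

def abnormal_terms(history_windows, current_window, factor=3.0):
    """Flag terms whose count in the current window exceeds `factor` times
    their mean count per historical window (with a floor of 1)."""
    baseline = Counter()
    for window in history_windows:
        baseline.update(window)
    n = max(len(history_windows), 1)

    flagged = []
    for term, count in Counter(current_window).items():
        expected = baseline[term] / n  # mean count per past window
        if count > factor * max(expected, 1.0):
            flagged.append(term)
    return sorted(flagged)
```

A term like "timeout" that suddenly floods one window is flagged, while routine terms seen at their usual rate are not.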
Stoll, J.; Bengez, R.Z., “Visual Structures for Seeing Cyber Policy Strategies,” in Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, vol., no., pp. 135–152, 26–29 May 2015. doi:10.1109/CYCON.2015.7158474
Abstract: In the pursuit of cyber security for organizations, there are tens of thousands of tools, guidelines, best practices, forensics, platforms, toolkits, diagnostics, and analytics available. However, according to the Verizon 2014 Data Breach Report: “after analysing 10 years of data... organizations cannot keep up with cyber crime-and the bad guys are winning.” Although billions are expended worldwide on cyber security, organizations struggle with complexity; e.g., the NISTIR 7628 guidelines for cyber-physical systems run over 600 pages of text. There is also a lack of information visibility. Organizations must bridge the gap between technical cyber operations and business/social priorities, since both sides are essential for ensuring cyber security. Identifying visual structures for information synthesis could help reduce the complexity while increasing information visibility within organizations. This paper lays the foundation for investigating such visual structures by first identifying where current visual structures are succeeding or failing. To do this, we examined publicly available analyses related to three types of security issues: 1) epidemic, 2) cyber attacks on an industrial network, and 3) threat of terrorist attack. We found that existing visual structures are largely inadequate for reducing complexity and improving information visibility. However, based on our analysis, we identified a range of different visual structures and their possible trade-offs/limitations in framing strategies for cyber policy. These structures form the basis of evolving visualization to support information synthesis for policy actions, which has rarely been done but is promising based on the efficacy of existing visualizations for cyber incident detection, attacks, and situation awareness.
Keywords: data visualisation; security of data; terrorism; Verizon 2014 Data Breach Report; cyber attacks; cyber incident detection; cyber policy strategies; cyber security; information synthesis; information visibility; situation awareness; terrorist attack; visual structures; Complexity theory; Computer security; Data visualization; Organizations; Terrorism; Visualization; cyber security policy; human-computer interaction; organizations; visualization (ID#: 15-7595)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158474&isnumber=7158456
Arora, D.; Malik, P., “Analytics: Key to Go from Generating Big Data to Deriving Business Value,” in Big Data Computing Service and Applications (BigDataService), 2015 IEEE First International Conference on, vol., no., pp. 446–452, March 30 2015–April 2 2015. doi:10.1109/BigDataService.2015.62
Abstract: The potential to extract actionable insights from Big Data has gained increased attention of researchers in academia as well as several industrial sectors. The field has become interesting and problems look even more exciting to solve ever since organizations have been trying to tame large volumes of complex and fast arriving Big Data streams through newer computing paradigms. However, extracting meaningful and actionable information from Big Data is a challenging and daunting task. The ability to generate value from large volumes of data is an art which combined with analytical skills needs to be mastered in order to gain competitive advantage in business. The ability of organizations to leverage the emerging technologies and integrate Big Data into their enterprise architectures effectively depends on the maturity level of the technology and business teams, capabilities they develop as well as the strategies they adopt. In this paper, through selected use cases, we demonstrate how statistical analyses, machine learning algorithms, optimization and text mining algorithms can be applied to extract meaningful insights from the data available through social media, online commerce, telecommunication industry, smart utility meters and used for variety of business benefits, including improving security. The nature of applied analytical techniques largely depends on the underlying nature of the problem so a one-size-fits-all solution hardly exists. Deriving information from Big Data is also subject to challenges associated with data security and privacy. These and other challenges are discussed in context of the selected problems to illustrate the potential of Big Data analytics.
Keywords: Big Data; business data processing; data integration; data mining; data privacy; learning (artificial intelligence); optimisation; statistical analysis; text analysis; Big Data integration; Big Data streams; analytical skills; analytical techniques; business benefits; business teams; business value; computing paradigms; data privacy; data security; enterprise architectures; information extraction; large-data volumes; machine learning algorithm; maturity level; one-size-fits-all solution; online commerce; optimization algorithm; security improvement; smart utility meters; social media; statistical analysis; telecommunication industry; text mining algorithm; Algorithm design and analysis; Big data; Companies; Machine learning algorithms; Security; Sentiment analysis; algorithms; big data; machine learning; review (ID#: 15-7596)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184914&isnumber=7184847
Wingyan Chung; Saike He; Daniel Dajun Zeng; Victor Benjamin, “Emotion Extraction and Entrainment in Social Media: The Case of U.S. Immigration and Border Security,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 55–60, 27–29 May 2015. doi:10.1109/ISI.2015.7165939
Abstract: Emotion plays an important role in shaping public policy and business decisions. The growth of social media has allowed people to express their emotion publicly in an unprecedented manner. Textual content and user linkages fostered by social media networks can be used to examine emotion types, intensity, and contagion. However, research into how emotion evolves and entrains in social media in ways that influence security issues is scarce. In this research, we developed an approach to analyzing emotion expressed in political social media. We compared two methods of emotion analysis to identify influential users and to trace their contagion effects on public emotion, and report preliminary findings from analyzing the emotion of 105,304 users who posted 189,012 tweets on U.S. immigration and border security issues in November 2014. The results provide strong implications for understanding social actions and for collecting social intelligence for security informatics. This research should contribute to helping decision makers and security personnel use public emotion effectively to develop appropriate strategies.
Keywords: behavioural sciences computing; public administration; security; social networking (online); US immigration; border security; business decisions; decision makers; emotion entrainment; emotion extraction; political social media; public emotion; public policy; security informatics; security issues; security personnel; social intelligence; social media networks; textual content; user linkages; Communities; Couplings; Informatics; Media; Public policy; Security; emotion; entrainment; influence; social media analytics; social network analysis; text mining (ID#: 15-7597)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165939&isnumber=7165923
Heath, F.F.; Hull, R.; Khabiri, E.; Riemer, M.; Sukaviriya, N.; Vaculin, R., “Alexandria: Extensible Framework for Rapid Exploration of Social Media,” in Big Data (BigData Congress), 2015 IEEE International Congress on, vol., no., pp. 483–490,
June 27 2015–July 2 2015. doi:10.1109/BigDataCongress.2015.77
Abstract: The Alexandria system under development at IBM Research provides an extensible framework and platform for supporting a variety of big-data analytics and visualizations. The system is currently focused on enabling rapid exploration of text-based social media data. The system provides tools to help with constructing “domain models” (i.e., families of keywords and extractors to enable focus on tweets and other social media documents relevant to a project), to rapidly extract and segment the relevant social media and its authors, to apply further analytics (such as finding trends and anomalous terms), and to visualize the results. The system architecture is centered around a variety of REST-based service APIs to enable flexible orchestration of the system capabilities; these are especially useful to support knowledge-worker-driven iterative exploration of social phenomena. The architecture also enables rapid integration of Alexandria capabilities with other social media analytics systems, as has been demonstrated through an integration with IBM Research’s SystemG. This paper describes a prototypical usage scenario for Alexandria, along with the architecture and key underlying analytics.
Keywords: Big Data; data analysis; data visualisation; social networking (online); text analysis; Alexandria system; IBM Research SystemG; REST-based service API; big-data analytics; big-data visualization; domain model; extensible framework; knowledge-worker driven iterative exploration; rapid text-based social media data exploration; social media documents; social media extraction; social media segmentation; tweets; Analytical models; Data visualization; Government; Indexes; Media; Twitter; analytics exploration; analytics process management; social media analytics; text analytics (ID#: 15-7598)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207261&isnumber=7207183
Agrawal, P.K.; Alvi, A.S., “Textual Feedback Analysis: Review,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 457–460, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.95
Abstract: The Internet has become a popular medium for sharing opinions and feedback about particular topics. Feedback often takes the form of numerical ratings and free text. Numerical ratings are easily processed, but vast amounts of unstructured textual data on the Internet, in the form of web blogs, emails, customer experiences, tweets, etc., are left unprocessed. These data should be processed in order to retrieve more specific opinions that can support more appropriate decisions. In this paper we review the different approaches that are used for processing such text data. The different approaches help us identify the challenges and scope present in textual feedback analysis.
Keywords: information analysis; information retrieval; text analysis; feedback sharing; numerical ratings; opinion retrieval; opinion sharing; textual feedback analysis; Accuracy; Data mining; Electronic mail; Internet; Organizations; Pragmatics; Sentiment analysis; Natural language processing; Text analytics; feedback analysis; opinion and sentiment analysis (ID#: 15-7599)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155888&isnumber=7155781
Polig, R.; Giefers, H.; Stechele, W., “A Soft-Core Processor Array for Relational Operators,” in Application-specific Systems, Architectures and Processors (ASAP), 2015 IEEE 26th International Conference on, vol., no., pp. 17–24, 27–29 July 2015. doi:10.1109/ASAP.2015.7245699
Abstract: Despite the performance and power efficiency gains achieved by FPGAs for text analytics queries, analysis shows a low utilization of the custom hardware operator modules. Furthermore, the long synthesis times limit the accelerator’s use in enterprise systems to static queries. To overcome these limitations we propose the use of an overlay architecture to share area resources among multiple operators and reduce compilation times. In this paper we present a novel soft-core architecture tailored to efficiently perform relational operations of text analytics queries on multiple virtual streams. It combines the ability to perform efficient streaming-based operations with the flexibility of an instruction-programmable core. It is used as a processing element in an array of cores to execute large query graphs and has access to shared co-processors to perform string- and context-based operations. We evaluate the core architecture in terms of area and performance compared to the custom hardware modules, and show how a minimum number of cores can be calculated to avoid stalling the document processing.
Keywords: field programmable gate arrays; query processing; text analysis; FPGA; area resource sharing; context-based operation; document processing; field programmable gate array; hardware operator modules; instruction programmable core; overlay architecture; processing element; relational operators; soft-core processor array; static queries; streaming based operation; string-based operation; text analytics queries; virtual streams; Arrays; Field programmable gate arrays; Hardware; Radio frequency; Random access memory; Registers (ID#: 15-7600)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245699&isnumber=7245687
Jain, S.; Gupta, L.; Bora, J.; Baghel, A.K., “Context-Preserving Concept Cloud,” in Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 595–599, 15–16 May 2015. doi:10.1109/CCAA.2015.7148477
Abstract: The word cloud is a popular tool on the Internet that has gained much appreciation in text analytics. Although word clouds can summarize a document at a glance without requiring the reader to go through it, they do not preserve the context of the source text. A concept cloud considers topics or phrases, rather than single words, as keywords. It is context preserving in the sense that it weights keywords by their importance in the source. It thus helps a user gain actual insight and deduce ideas by examining the relatedness among the keywords. This paper introduces an approach for generating a concept cloud that preserves the context of the document.
Keywords: cloud computing; text analysis; Internet; context-preserving concept cloud; document context preservation; keywords; phrases; text analytics; topics; word cloud; Automation; Color; Context; Layout; Semantics; Tag clouds; Visualization; concept cloud; semantic preservingness; text visualization (ID#: 15-7601)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148477&isnumber=7148334
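As a rough illustration of the phrase-level weighting the abstract describes, the sketch below counts word bigrams as candidate “concepts” and weights them by frequency. This is a hypothetical stand-in; the paper’s actual context-preserving scoring is not specified here.

```python
from collections import Counter
import re

def concept_weights(text, n=2):
    """Extract candidate concepts (word n-grams) and weight them by
    frequency -- a crude proxy for keyword importance in the source."""
    words = re.findall(r"[a-z]+", text.lower())
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(grams)

doc = ("text analytics helps summarize documents; "
       "text analytics preserves context when concepts, "
       "not single words, drive the text analytics cloud")
# Most frequent two-word concept and its count
top = concept_weights(doc).most_common(1)[0]
```

A real concept cloud would render each phrase at a size proportional to its weight; here the weight alone stands in for that visual encoding.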
Seonggyu Lee; Jinho Kim; Sung-Hyon Myaeng, “An Extension of Topic Models for Text Classification: A Term Weighting Approach,” in Big Data and Smart Computing (BigComp), 2015 International Conference on, vol., no., pp. 217–224, 9–11 Feb. 2015. doi:10.1109/35021BIGCOMP.2015.7072834
Abstract: Text classification has become a critical step in big data analytics. For supervised machine learning approaches to text classification, availability of sufficient training data with classification labels attached to individual text units is essential to the performance. Since labeled data are usually scarce, however, it is always desirable to devise a semi-supervised method where unlabeled data are used in addition to labeled ones. A solution is to apply a latent factor model to generate clustered text features and use them for text classification. The main thrust of the current research is to extend Latent Dirichlet Allocation (LDA) for this purpose by considering word weights in sampling and maintaining balances of topic distributions. A series of experiments were conducted to evaluate the proposed method for classification tasks. The result shows that the topic distributions generated by the balance weighted topic modeling method add some discriminative power to feature generations for classification.
Keywords: Big Data; data analysis; learning (artificial intelligence); natural language processing; pattern classification; pattern clustering; text analysis; Big Data analytics; LDA; balance weighted topic modeling method; classification labels; clustered text feature generation; discriminative power; individual text units; labeled data; latent Dirichlet allocation; latent factor model; supervised machine learning approach; term weighting approach; text classification; topic distribution; training data; word weights; Data models; Feature extraction; Resource management; Text categorization; Training; Training data; Vocabulary; Latent Dirichlet Allocation; Topic modeling; feature generation; text clustering (ID#: 15-7602)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072834&isnumber=7072806
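The abstract’s central idea, using word weights when building topic features, can be glimpsed with plain TF-IDF weighting. This is an illustrative simplification, not the authors’ balance-weighted extension of LDA.

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain TF-IDF term weighting over tokenized documents -- a simple
    stand-in for the word weights the paper feeds into topic sampling."""
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    n = len(docs)
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({w: (c / len(d)) * math.log(n / df[w])
                    for w, c in tf.items()})
    return out

docs = [["spam", "offer", "offer"], ["meeting", "agenda"], ["offer", "spam"]]
weights = tfidf(docs)
```

Rare, document-specific terms such as "meeting" receive larger weights than terms spread across the collection, which is the effect term weighting is meant to inject into the topic model.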
O’Shea, N.; Pence, J.; Mohaghegh, Z.; Ernie Kee, “Physics of Failure, Predictive Modeling & Data Analytics for LOCA Frequency,” in Reliability and Maintainability Symposium (RAMS), 2015 Annual, vol., no., pp. 1–7, 26–29 Jan. 2015. doi:10.1109/RAMS.2015.7105125
Abstract: This paper presents: (a) the Data-Theoretic methodology as part of an ongoing research which integrates Physics-of-Failure (PoF) theories and data analytics to be applied in Probabilistic Risk Assessment (PRA) of complex systems and (b) the status of the application of the proposed methodology for the estimation of the frequency of the location-specific loss-of-coolant accident (LOCA), which is a critical initiating event in PRA and one of the challenges of the risk-informed resolution for Generic Safety Issue 191 (GSI-191) [1]. The proposed methodology has the following unique characteristics: (1) it uses predictive causal modeling along with sensitivity and uncertainty analysis to find the most important contributing factors in the PoF models of failure mechanisms. This model-based approach utilizes importance-ranking techniques, scientifically reduces the number of factors, and focuses on a detailed quantification strategy for critical factors rather than conducting expensive experiments and time-consuming simulations for a large number of factors. This adds validity and practicality to the proposed methodology. (2) Because of the evolving nature of computational power and information-sharing technologies, the Data-Theoretic method for PRA expands the classical approach of data extraction and implementation for risk analysis. It utilizes advanced data analytic techniques (e.g., data mining and text mining) to extract risk and reliability information from diverse data sources (academic literature, service data, regulatory and laboratory reports, expert opinion, maintenance logs, news, etc.) and executes them in theory-based PoF networks. (3) The Data-Theoretic approach uses comprehensive underlying PoF theory to avoid potentially misleading results from use of solely data-oriented approaches, as well as support the completeness of the contextual physical factors and the accuracy of their causal relationships. 
(4) When the important factors are identified, the Data-Theoretic approach applies all potential theory-based techniques (e.g., simulation and experimentation).
Keywords: data analysis; fission reactor accidents; academic literature; complex systems; computational power; data analytics; data extraction; data mining; data sources; data-theoretic methodology; failure mechanisms; frequency estimation; generic safety issue; importance-ranking techniques; information-sharing technologies; location-specific loss-of-coolant accident; maintenance logs; model-based approach; physics-of-failure theories; potential theory-based techniques; predictive causal modeling; probabilistic risk assessment; risk-informed resolution; sensitivity analysis; service data; text mining; time-consuming simulations; uncertainty analysis; Analytical models; Data models; Estimation; Failure analysis; Frequency estimation; Mathematical model; Stress; LOCA frequency; Probabilistic Physics of Failure; Probabilistic Risk Assessment (ID#: 15-7604)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7105125&isnumber=7105053
Leggieri, M.; Davis, B.; Breslin, J.G., “Distributional Semantics and Unsupervised Clustering for Sensor Relevancy Prediction,” in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, vol., no.,
pp. 210–215, 24–28 Aug. 2015. doi:10.1109/IWCMC.2015.7289084
Abstract: The logging of Activities of Daily Living (ADLs) is becoming increasingly popular mainly thanks to wearable devices. Currently, most sensors used for ADLs logging are queried and filtered mainly by location and time. However, in an Internet of Things future, a query will return a large amount of sensor data. Therefore, existing approaches will not be feasible because of resource constraints and performance issues. Hence more fine-grained queries will be necessary. We propose to filter on the likelihood that a sensor is relevant for the currently sensed activity. Our aim is to improve system efficiency by reducing the amount of data to query, store and process by identifying which sensors are relevant for different activities during the ADLs logging by relying on Distributional Semantics over public text corpora and unsupervised hierarchical clustering. We have evaluated our system over a public dataset for activity recognition and compared our clusters of sensors with the sensors involved in the logging of manually-annotated activities. Our results show an average precision of 89% and an overall accuracy of 69%, thus outperforming the state of the art by 5% and 32% respectively. To support the uptake of our approach and to allow replication of our experiments, a Web service has been developed and open sourced.
Keywords: Internet of Things; computerised instrumentation; query processing; sensors; unsupervised learning; ADL logging; Web service; activities of daily living; distributional semantics; fine grained queries; public text corpora; sensor data; sensor relevancy prediction; unsupervised clustering; unsupervised hierarchical clustering; wearable devices; Accuracy; Art; Artificial intelligence; Cleaning; Hidden Markov models; Semantics; Sensors; Distributional Semantics; Human Activity Recognition; Sensor Network; Sensor Selection; Unsupervised Learning (ID#: 15-7605)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289084&isnumber=7288920
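A toy version of the relevance idea: represent each sensor and the current activity as co-occurrence vectors over corpus words, and rank sensors by cosine similarity. The vectors, counts, and sensor names below are hypothetical; the paper builds its vectors from public text corpora and adds unsupervised hierarchical clustering on top.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical co-occurrence counts: how often each label co-occurs
# with context words in some text corpus.
activity = {"water": 4, "dish": 3, "kitchen": 5}     # e.g. "washing dishes"
sensors = {
    "tap_sensor": {"water": 6, "kitchen": 2, "dish": 1},
    "tv_sensor":  {"screen": 5, "sofa": 3, "remote": 2},
}

# Keep (query/store) only the sensor most relevant to the activity.
relevant = max(sensors, key=lambda s: cosine(sensors[s], activity))
```

Filtering on this score is what lets the system discard sensor streams that are semantically unrelated to the sensed activity before querying or storing them.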
Trovati, Marcello; Hodgsons, Philip; Hargreaves, Charlotte, “A Preliminary Investigation of a Semi-Automatic Criminology Intelligence Extraction Method: A Big Data Approach,” in Intelligent Networking and Collaborative Systems (INCOS), 2015 International Conference on, vol., no., pp. 454–458, 2–4 Sept. 2015. doi:10.1109/INCoS.2015.37
Abstract: The aim of any science is to advance the state-of-the-art knowledge via a rigorous investigation and analysis of empirical observations, as well as the development of new theoretical frameworks. Data acquisition, and ultimately the extraction of novel knowledge, is therefore the foundation of any scientific advance. However, with the increasing creation of data in various forms and shapes, identifying relevant information from structured and unstructured data sets raises several challenges, as well as opportunities. In this paper, we discuss a semi-automatic method to identify, analyse and generate knowledge, focusing specifically on criminology. The main motivation is to provide a toolbox to help criminology experts, potentially leading to a better understanding and prediction of the relevant properties and facilitating the decision-making process. Our initial validation shows the potential of our method, providing relevant and accurate results.
Keywords: Algorithm design and analysis; Big data; Focusing; Force; Sentiment analysis; Text mining; Criminology; Data analytics; Information extraction; Knowledge discovery; Networks (ID#: 15-7606)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312116&isnumber=7312007
Bari, N.; Vichr, R.; Kowsari, K.; Berkovich, S.Y., “Novel Metaknowledge-Based Processing Technique for Multimedia Big Data Clustering Challenges,” in Multimedia Big Data (BigMM), 2015 IEEE International Conference on, vol., no., pp. 204–207, 20–22 April 2015. doi:10.1109/BigMM.2015.78
Abstract: Past research has challenged us with the task of showing relational patterns between text-based data and then clustering for predictive analysis using the Golay Code technique. We focus on a novel approach to extract metaknowledge in multimedia datasets. Our collaboration has been an ongoing task of studying the relational patterns between data points based on meta-features extracted from metaknowledge in multimedia datasets. Those selected are significant to suit the mining technique we applied, the Golay Code algorithm. In this research paper we summarize findings on optimizing the metaknowledge representation for a 23-bit representation of structured and unstructured multimedia data, so that it can be processed by the 23-bit Golay Code for cluster recognition.
Keywords: data mining; knowledge representation; multimedia computing; pattern clustering; text analysis; 23-bit representation; Golay code technique; cluster recognition; metaknowledge representation; metaknowledge-based processing technique; mining technique; multimedia big data clustering challenges; multimedia datasets; predictive analysis; relational patterns; structured multimedia data; text-based data; unstructured multimedia data; Big data; Conferences; Multimedia communication; 23-Bit Meta-knowledge template; Big Multimedia Data Processing and Analytics; Content Identification; Golay Code; Information Retrieval Challenges; Knowledge Discovery; Meta-feature Extraction and Selection; Metalearning System (ID#: 15-7607)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153879&isnumber=7153824
Time-Frequency Analysis and Security 2015 |
Time-frequency analysis is a useful method that allows simultaneous consideration of both time and frequency domains. It is useful to the Science of Security community for analysis in cyber-physical systems and for working toward solving the hard problems of resilience, predictive metrics, and scalability. The work cited here was presented in 2015.
Lücken, V.; Singh, T.; Cepheli, O.; Kurt, G.K.; Ascheid, G.; Dartmann, G., “Filter Hopping: Physical Layer Secrecy Based on FBMC,” in Wireless Communications and Networking Conference (WCNC), 2015 IEEE, vol., no., pp. 568–573, 9–12 March 2015. doi:10.1109/WCNC.2015.7127532
Abstract: This paper presents a novel physical layer secrecy enhancement technique for multicarrier communications based on dynamic filter hopping. Using the Filter Bank Multicarrier (FBMC) waveform, an efficient eavesdropping mitigation technique is developed using time- and frequency-varying prototype filters. Without knowledge of the filter assignment pattern, an eavesdropper will experience a high level of inter-carrier (ICI) and inter-symbol interference (ISI). With this severe receive signal-to-interference-plus-noise ratio (SINR) degradation for an illegitimate receiver, the secrecy capacity of the communication system is increased. At the same time, the interference at the legitimate receiver is designed to be negligible in comparison to the channel noise.
Keywords: channel bank filters; intercarrier interference; intersymbol interference; telecommunication security; time-frequency analysis; waveform analysis; FBMC; ICI; ISI; SINR degradation; channel noise; dynamic filter hopping; eavesdropping mitigation technique; filter assignment pattern; filter bank multicarrier waveform; multicarrier communication; physical layer secrecy enhancement technique; signal-to-interference-plus-noise ratio; time-frequency-varying prototype filter; Degradation; Interference; Interpolation; Prototypes; Receivers; Signal to noise ratio; Time-frequency analysis; Filter bank multicarrier; eavesdropping; filter hopping; secrecy (ID#: 15-7856)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127532&isnumber=7127309
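The hopping pattern itself can be sketched as a keyed pseudo-random selection of the prototype filter for each time slot, so that only holders of the shared secret can track the assignment while an eavesdropper sees apparently random filter changes. The derivation below (SHA-256 over key and slot index) is a hypothetical construction for illustration, not the scheme defined in the paper.

```python
import hashlib

def filter_index(key: bytes, slot: int, n_filters: int) -> int:
    """Derive the prototype-filter index for a time slot from a shared
    secret. Deterministic for key holders, unpredictable without the key.
    (Hypothetical key-derivation; the paper does not specify one.)"""
    digest = hashlib.sha256(key + slot.to_bytes(8, "big")).digest()
    return digest[0] % n_filters

key = b"shared-secret"
# Filter assignment pattern for the first 8 slots, 4 prototype filters.
pattern = [filter_index(key, t, 4) for t in range(8)]
```

Both legitimate ends recompute the same pattern from the key, so the receiver always applies the matching analysis filter while the eavesdropper accumulates ICI/ISI.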
Fangyue Chen; Yunke Wang; Heng Song; Xiangyang Li, “A Statistical Study of Covert Timing Channels Using Network Packet Frequency,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp. 166–168, 27–29 May 2015. doi:10.1109/ISI.2015.7165963
Abstract: This paper first reviews covert timing channels with network packet frequencies as information carriers. Then, based on the study of communication and statistical models, it proposes a method to detect an enhanced covert timing channel and its use of carrier frequencies. With the help of MATLAB for simulation, several experiments have been conducted for the verification of the proposed method.
Keywords: decoding; protocols; statistical analysis; telecommunication channels; telecommunication traffic; MATLAB; carrier frequencies; communication models; covert timing channels; information carriers; network packet frequency; statistical models; timing channels; Data models; Encoding; Mathematical model; Protocols; Time-frequency analysis; Timing; Covert Timing Channel; Detection; Mathematical Simulation (ID#: 15-7857)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165963&isnumber=7165923
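One simple statistical cue for such channels: traffic that encodes bits in packet timing draws its inter-packet delays from a small alphabet, so the fraction of distinct delay values is abnormally low compared to ordinary traffic. The toy detector below illustrates this; it is a generic regularity test, not the specific method proposed in the paper.

```python
import random

def regularity_score(delays, ndigits=2):
    """Fraction of distinct (rounded) inter-packet delays. A binary
    covert timing channel uses ~2 delay values and scores very low;
    ordinary traffic with continuously varying delays scores high."""
    return len({round(d, ndigits) for d in delays}) / len(delays)

rng = random.Random(7)
# Covert channel: bit 0 -> 10 ms delay, bit 1 -> 100 ms delay.
covert = [0.01 if rng.random() < 0.5 else 0.10 for _ in range(200)]
# Legitimate traffic: delays spread continuously over 10-200 ms.
normal = [rng.uniform(0.01, 0.2) for _ in range(200)]
```

A detector would flag a flow whose score falls below a threshold calibrated on known-clean traffic.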
Marijan, D., “Multi-perspective Regression Test Prioritization for Time-Constrained Environments,” in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, vol., no., pp. 157–162, 3–5 Aug. 2015. doi:10.1109/QRS.2015.31
Abstract: Test case prioritization techniques are widely used to reach certain performance goals faster during regression testing. A commonly used goal is a high fault detection rate, where test cases are ordered in a way that enables detecting faults faster. However, for optimal regression testing, multiple performance indicators, as considered by different project stakeholders, need to be taken into account. In this paper, we introduce a new optimal multi-perspective approach for regression test case prioritization. The approach is designed to optimize regression testing for faster fault detection by integrating three different perspectives: the business perspective, the performance perspective, and the technical perspective. The approach has been validated in regression testing of industrial mobile device systems developed in continuous integration. The results show that our proposed framework efficiently prioritizes test cases for faster and more efficient regression fault detection, maximizing the number of executed test cases with high failure frequency, high failure impact, and cross-functional coverage, compared to manual practice.
Keywords: fault diagnosis; program testing; regression analysis; business perspective; continuous integration; cross-functional coverage; failure frequency; failure impact; fault detection rate; industrial mobile device system; multiperspective regression test prioritization; optimal multiperspective approach; optimal regression testing; performance indicator; performance perspective; regression fault detection; regression test case prioritization; technical perspective; test case prioritization technique; time-constrained environment; Business; Fault detection; Manuals; Software; Testing; Time factors; Time-frequency analysis; regression testing; software testing; test case prioritization (ID#: 15-7858)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272927&isnumber=7272893
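The multi-perspective idea can be sketched as a weighted ranking over per-test scores from the three perspectives. The weights, field names, and test cases below are hypothetical, since the paper’s exact scoring model is not reproduced here.

```python
def prioritize(tests, weights=(0.4, 0.3, 0.3)):
    """Order test cases by a weighted sum of three normalized scores:
    business impact, failure frequency, and cross-functional coverage.
    (Hypothetical scoring model for illustration.)"""
    wb, wf, wc = weights

    def score(t):
        return wb * t["impact"] + wf * t["fail_freq"] + wc * t["coverage"]

    return sorted(tests, key=score, reverse=True)

suite = [
    {"name": "t_login",  "impact": 0.9, "fail_freq": 0.7, "coverage": 0.5},
    {"name": "t_report", "impact": 0.2, "fail_freq": 0.1, "coverage": 0.9},
    {"name": "t_sync",   "impact": 0.6, "fail_freq": 0.9, "coverage": 0.8},
]
order = [t["name"] for t in prioritize(suite)]
```

In a time-constrained run, the suite is then truncated after the available test budget, so the highest-ranked cases are the ones guaranteed to execute.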
Prokopenko, I.; Prokopenko, K.; Martynchuk, I., “Moving Objects Recognition by Micro-Doppler Spectrum,” in Radar Symposium (IRS), 2015 16th International, vol., no., pp. 186–190, 24–26 June 2015. doi:10.1109/IRS.2015.7226365
Abstract: Doppler signals are widely used in security systems, surveillance systems, and radar detection and recognition of airplanes and helicopters. Doppler signal recognition algorithms are also widespread in automotive radar systems, where pedestrian recognition is an important task when parking a car and when driving on highways. Methods based on the micro-Doppler spectrum can solve the problem of object classification using differences in movement dynamics. The purpose of this article is to develop an algorithm for the recognition of moving objects by their micro-Doppler signatures.
Keywords: Doppler radar; object recognition; radar signal processing; road vehicle radar; signal classification; Doppler signal recognition; airplane recognition; automotive radar system; micro-Doppler signature; micro-Doppler spectrum; moving object recognition; objects classification; pedestrian recognition; radar detection; security system; surveillance system; Doppler shift; Legged locomotion; Radar imaging; Time-frequency analysis (ID#: 15-7859)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226365&isnumber=7226207
Molière, R.; Delaveau, F.; Kameni Ngassa, C.L.; Lemenager, C.; Mazloum, T.; Sibille, A., “Tag Signals for Early Authentication and Secret Key Generation in Wireless Public Networks,” in Networks and Communications (EuCNC), 2015 European Conference on, vol., no., pp. 108–112, June 29 2015–July 2 2015. doi:10.1109/EuCNC.2015.7194050
Abstract: In this paper, a new protocol is proposed for securing both authentication and communication in wireless public networks. It relies on the combination of two techniques presented in the article: Tag Signals (TS) and Secret Key Generation (SKG). First, tag signals are used to securely exchange identification information, estimate the Channel Frequency Response (CFR), and provide a controlled radio advantage to legitimate users. Then secret keys are generated from the authenticated CFR to protect communication. In addition to the presentation of the techniques, measured CFR and SKG performance figures are provided for real WiFi communications.
Keywords: computer network security; frequency response; private key cryptography; wireless LAN; wireless channels; CFR; SKG; TS; Wi-Fi communication; channel frequency response; early authentication; identification information exchange security; secret key generation; tag signal; wireless public network; Authentication; Bit error rate; Decision support systems; IEEE 802.11 Standard; Protocols; Time-frequency analysis; Channel Frequency Response (CFR); Direct Spread Spectrum (DSS); Full-Duplex (FuDu); Physical layer Security (Physec); Secret Key Generation (SKG); Tag Signal (TS) (ID#: 15-7860)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194050&isnumber=7194024
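The SKG step can be illustrated with the simplest quantization scheme: both legitimate ends threshold their (reciprocal) channel measurements into bits. The sample values below are made up; practical schemes add guard intervals and information reconciliation to handle measurement noise.

```python
def quantize_key(cfr_samples, threshold=0.0):
    """Derive a bit string by thresholding channel frequency response
    values -- the most basic form of quantization-based secret key
    generation (illustrative; real schemes are more elaborate)."""
    return "".join("1" if s > threshold else "0" for s in cfr_samples)

# Channel reciprocity: both ends observe nearly the same channel,
# while an eavesdropper elsewhere observes an independent one.
alice = [0.8, -0.3, 1.2, -0.9, 0.1, -1.1]   # hypothetical CFR values
bob   = [0.7, -0.2, 1.1, -1.0, 0.2, -1.2]
```

Because the small measurement differences do not cross the threshold here, both ends derive the same key without ever transmitting it.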
Slimeni, F.; Scheers, B.; Chtourou, Z.; Le Nir, V., “Jamming Mitigation in Cognitive Radio Networks Using a Modified Q-Learning Algorithm,” in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, vol., no., pp. 1–7, 18–19 May 2015. doi:10.1109/ICMCIS.2015.7158697
Abstract: The jamming attack is one of the most severe threats in cognitive radio networks, because it can lead to network degradation and even denial of service. However, a cognitive radio can exploit its ability of dynamic spectrum access and its learning capabilities to avoid jammed channels. In this paper, we study how Q-learning can be used to learn the jammer strategy in order to proactively avoid jammed channels. The problem with Q-learning is that it needs a long training period to learn the behavior of the jammer. To address this concern, we take advantage of the wideband spectrum sensing capabilities of the cognitive radio to speed up the learning process, and we make use of the already learned information to minimize the number of collisions with the jammer during training. The effectiveness of this modified algorithm is evaluated by simulations in the presence of different jamming strategies, and the simulation results are compared to the original Q-learning algorithm applied to the same scenarios.
Keywords: cognitive radio; interference suppression; jamming; learning (artificial intelligence); radio spectrum management; telecommunication security; cognitive radio networks; denial of service; dynamic spectrum access; jamming attack mitigation; modified Q-learning algorithm; network degradation; wideband spectrum sensing capability; Cognitive radio; Convergence; Jamming; Markov processes; Standards; Time-frequency analysis; Training; Cognitive radio network; Q-learning algorithm; jamming attack; markov decision process (ID#: 15-7861)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158697&isnumber=7158667
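The Q-learning idea the abstract describes can be sketched as follows. This is a minimal editorial illustration against a single fixed-channel jammer, not the authors' modified algorithm; the channel count, learning parameters, and reward values are assumptions chosen for the example.

```python
import random

# Minimal Q-learning sketch for jammed-channel avoidance (illustrative only).
# State: the channel used last; action: the channel to transmit on next.
# A hypothetical jammer always jams one fixed channel; colliding with it
# yields reward -1, any clear transmission yields +1.
N_CHANNELS = 4
JAMMED_CHANNEL = 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0] * N_CHANNELS for _ in range(N_CHANNELS)]

def choose(state):
    """Epsilon-greedy action selection over the Q-table row for this state."""
    if random.random() < EPSILON:
        return random.randrange(N_CHANNELS)
    row = Q[state]
    return row.index(max(row))

def train(steps=5000, seed=1):
    random.seed(seed)
    state = 0
    for _ in range(steps):
        action = choose(state)
        reward = -1.0 if action == JAMMED_CHANNEL else 1.0
        # Standard Q-learning update toward reward + discounted best next value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[action]) - Q[state][action])
        state = action

train()
```

After training, the greedy policy avoids the jammed channel from every state; the paper's contribution is, roughly, to shorten exactly this training phase by exploiting wideband spectrum sensing.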
Daly, P.; Flynn, D.; Cunniffe, N., “Inertia Considerations Within Unit Commitment and Economic Dispatch for Systems with High Non-Synchronous Penetrations,” in PowerTech, 2015 IEEE Eindhoven, vol., no., pp.1–6, June 29 2015–July 2 2015. doi:10.1109/PTC.2015.7232567
Abstract: The priority dispatch status of non-synchronous renewable generation (wind, wave, solar), and increasing levels of installed high voltage direct current interconnection between synchronous systems, are fundamentally changing unit commitment and economic dispatch (UCED) schedules. Conventional synchronous plant, the traditional provider of the services which ensure frequency stability (synchronising torque, synchronous inertia and governor response), is being displaced by marginally zero-cost non-synchronous renewables. This trend has operational security implications, as systems, particularly synchronously isolated systems, may be subject to higher rates of change of frequency and more extreme frequency nadirs/zeniths following a system disturbance. This paper proposes UCED-based strategies to address potential shortfalls in synchronous inertia associated with high non-synchronous penetrations. The effectiveness of the day-ahead strategies is assessed by weighing the cost of the schedules against the risk level incurred (the initial rate of change of frequency following a generation-load imbalance) and the level of wind curtailment engendered.
Keywords: frequency stability; power generation dispatch; power generation economics; power generation scheduling; power system security; power system stability; wind power; UCED schedules; day-ahead strategies; economic dispatch; frequency nadirs-zeniths; frequency stability; governor response; installed high voltage direct current interconnection; nonsynchronous penetrations; nonsynchronous renewable generation; operational security implications; priority dispatch status; risk level; synchronising torque; synchronous inertia; synchronous plant; synchronous systems; system disturbance; unit commitment; wind curtailment; Frequency synchronization; HVDC transmission; Mathematical model; Schedules; Security; Time-frequency analysis; inertia; wind generation (ID#: 15-7862)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232567&isnumber=7232233
Rodriguez-Calvo, A.; Izadkhast, S.; Cossent, R.; Frias, P., “Evaluating the Determinants of the Scalability and Replicability of Islanded Operation in Medium Voltage Networks with Cogeneration,” in Smart Electric Distribution Systems and Technologies (EDST), 2015 International Symposium on, vol., no., pp. 80–87, 8–11 Sept. 2015. doi:10.1109/SEDST.2015.7315187
Abstract: The development of smart grid solution concepts, such as islanding, makes it possible to improve the security of supply in networks. However, results obtained in real-life test systems cannot be straightforwardly extrapolated to wider areas and other locations. The scalability and replicability analysis (SRA) aims to identify the relevant factors that affect smart grid implementations and to understand how their variation influences the results achieved by smart grid solutions. This paper presents the SRA of an islanding use case in a medium voltage network using cogeneration. Furthermore, the results obtained have been used to derive a set of scalability and replicability rules that can be applied to other islanding use cases.
Keywords: cogeneration; distributed power generation; extrapolation; power distribution; power system security; smart power grids; determinant evaluation; islanded operation; medium voltage networks; scalability-and-replicability analysis; smart grid implementations; supply security improvement; Cogeneration; Islanding; Load modeling; Production; Scalability; Smart grids; Time-frequency analysis; Islanding; replicability; scalability; smart grids (ID#: 15-7863)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7315187&isnumber=7315169
Sundararajan, A.; Pons, A.; Sarwat, A.I., “A Generic Framework for EEG-Based Biometric Authentication,” in Information Technology - New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 139–144, 13–15 April 2015. doi:10.1109/ITNG.2015.27
Abstract: Biometric systems are part and parcel of everyday life. However, with the increase in their use, the associated security risks have equally increased. Hence, there is a growing need to develop systems which use biometrics efficiently and ensure that authentication is integral and effective. This paper introduces the concept of using the electroencephalogram (EEG), commonly known as brain waves, as a biometric. A wavelet-based feature extraction method is proposed that uses visual and auditory evoked potentials. The future scope, pros, and cons of this biometric are analyzed next.
Keywords: biometrics (access control); electroencephalography; feature extraction; medical image processing; wavelet transforms; EEG-based biometric authentication; auditory evoked potentials; biometric systems; brain waves; electroencephalogram; generic framework; security risks; visual evoked potentials; wavelet based feature extraction method; Authentication; Electroencephalography; Feature extraction; Noise; Time-frequency analysis; Wavelet transforms; EEG; biometric; brain wave; evoked potentials; pros and cons; security; wavelets (ID#: 15-7864)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113462&isnumber=7113432
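Wavelet-based feature extraction of the kind the abstract mentions can be illustrated with a plain Haar decomposition. This is a generic sketch, not the paper's specific wavelet or evoked-potential protocol; the signal, decomposition depth, and energy features are assumptions for the example.

```python
# Illustrative Haar wavelet feature extraction (not the paper's exact method).
# Each level splits the signal into pairwise averages (approximation) and
# pairwise half-differences (detail); per-level detail energy is a common
# feature vector fed to biometric classifiers.
def haar_step(x):
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_features(signal, levels=3):
    """Return the detail-coefficient energy at each decomposition level."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(sum(d * d for d in detail))
    return feats
```

A flat signal yields zero detail energy at every level, while rapid sample-to-sample variation concentrates energy in the first (finest) level.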
Zhao Wang; Ming Xiao; Skoglund, M.; Poor, H.V., “Secrecy Degrees of Freedom of Wireless X Networks Using Artificial Noise Alignment,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 616–620, 14–19 June 2015. doi:10.1109/ISIT.2015.7282528
Abstract: The problem of transmitting confidential messages in the M × K wireless X network is considered, in which each transmitter intends to send one confidential message to every receiver. In particular, the secrecy degrees of freedom (SDOF) of the considered network are studied by an artificial noise alignment (ANA) approach, which integrates interference alignment and artificial noise transmission. At first, an SDOF upper bound of K(M-1)/(K+M-2) is derived for the M × K X network with confidential messages (XNCM). By proposing an ANA approach, it is shown that the SDOF upper bound is tight when either K = 2 or M = 2 for the considered XNCM with time/frequency varying channels. For K, M ≥ 3, it is shown that an SDOF of K(M-1)/(K+M-1) can be achieved, even when an external eavesdropper appears. The key idea of the proposed scheme is to inject artificial noise into the network, which can be aligned in the interference space at the receivers for confidentiality. The proposed method provides a linear approach for secure interference alignment.
Keywords: radio receivers; radio transmitters; radiofrequency interference; telecommunication security; time-varying channels; ANA; M × K wireless X network; SDOF; artificial noise alignment; artificial noise transmission; eavesdropper; frequency varying channels; interference alignment; interference space; secrecy degrees of freedom; time varying channels; Interference; Noise; Receivers; Time-frequency analysis; Transmitters; Upper bound; Wireless communication; Secrecy degrees of freedom; artificial noise; wireless X networks (ID#: 15-7865)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282528&isnumber=7282397
Zaher, A.A., “A Cryptography Algorithm for Transmitting Multimedia Data Using Quadruple-State CSK,” in Computer, Communications, and Control Technology (I4CT), 2015 International Conference on, vol., no., pp. 87–92, 21–23 April 2015. doi:10.1109/I4CT.2015.7219543
Abstract: A new technique for secure communication is introduced that aims at robustifying classical Chaotic Shift Keying (CSK) methods. The secret data are hidden within the chaotic transmitter states that can change among four different chaotic attractors such that binary information is effectively diffused. A novel cryptography algorithm is used to change the transmitter parameters such that they have a quadruple form; thus, breaking into the public communication channel using return map attacks will fail. At the receiver side, an adaptive control method is used to estimate the time-varying transmitter parameters via adopting a complete synchronization approach. Simulation results demonstrate the superior performance of the proposed technique in both time and frequency domains. A Duffing oscillator is used to build the proposed system using only the time series for the output. Different implementation issues are investigated for various digital multimedia data and an experimental investigation is carried out to verify the effectiveness of the proposed technique. Finally, generalizations to other chaotic systems as well as real-time compatibility of the design are discussed.
Keywords: adaptive control; chaotic communication; cryptography; digital communication; multimedia communication; receivers; synchronisation; telecommunication security; time series; time-frequency analysis; time-varying channels; transmitters; Duffing oscillator; adaptive control method; chaotic shift keying method; chaotic transmitter; cryptography algorithm; digital multimedia data transmission; public communication channel; quadruple-state CSK; receiver side; return map attack; secret data hiding; secure communication; synchronization approach; time-frequency domain; time-varying transmitter parameter estimation; Chaotic communication; Cryptography; Oscillators; Receivers; Synchronization; Transmitters; CSK; Duffing Oscillators; Secure Communication (ID#: 15-7866)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219543&isnumber=7219513
Çatak, E.; Ata, L.D.; Mantar, H.A., “Enhanced Physical Layer Security by OFDM Signal Transmission in Fractional Fourier Domains,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 1881–1884, 16–19 May 2015. doi:10.1109/SIU.2015.7130225
Abstract: The main idea of physical layer security for wireless communications is to make the transmitted signal meaningless to eavesdroppers, which can be achieved by signal processing techniques. In this study, the fractional Fourier transform is used for secure communication. The transmitted signal is divided into equal, randomly chosen intervals, and the fractional Fourier transform of each interval is then taken with one of four different orders. In this way, signals can be transmitted at an angle between the time and frequency domains. The receiver needs to know which angular parameters are used in each interval to recover the signal correctly, and it is difficult for an eavesdropper to obtain the signal without this parameter knowledge. In this study, the bit error rate performances of legitimate users and the eavesdropper are compared, and the bit error rate of the eavesdropper is shown to approach 0.5.
Keywords: Fourier transforms; OFDM modulation; error statistics; radiocommunication; signal processing; telecommunication security; time-frequency analysis; OFDM signal transmission; angular parameters; bit error rate performance; communication security; eavesdroppers; enhanced physical layer security; fractional Fourier domains; fractional Fourier transform; frequency domain; legitimate users; signal processing technique; time domain; transmitted signal; wireless communication; Bit error rate; Fourier transforms; OFDM; Physical layer; Security; Signal processing; Wireless communication; Physical layer security (ID#: 15-7867)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130225&isnumber=7129794
Callan, R.; Zajić, A.; Prvulovic, M., “FASE: Finding Amplitude-Modulated Side-channel Emanations,” in Computer Architecture (ISCA), 2015 ACM/IEEE 42nd Annual International Symposium on, vol., no., pp. 592–603, 13–17 June 2015. doi:10.1145/2749469.2750394
Abstract: While all computation generates electromagnetic (EM) side-channel signals, some of the strongest and farthest-propagating signals are created when an existing strong periodic signal (e.g. a clock signal) becomes stronger or weaker (amplitude-modulated) depending on processor or memory activity. However, modern systems create emanations at thousands of different frequencies, so it is a difficult, error-prone, and time-consuming task to find those few emanations that are AM-modulated by processor/memory activity. This paper presents a methodology for rapidly finding such activity-modulated signals. This method creates recognizable spectral patterns generated by specially designed micro-benchmarks and then processes the recorded spectra to identify signals that exhibit amplitude-modulation behavior. We apply this method to several computer systems and find several such modulated signals. To illustrate how our methodology can benefit side-channel security research and practice, we also identify the physical mechanisms behind those signals, and find that the strongest signals are created by voltage regulators, memory refreshes, and DRAM clocks. Our results indicate that each signal may carry unique information about system activity, potentially enhancing an attacker’s capability to extract sensitive information. We also confirm that our methodology correctly separates emanated signals that are affected by specific processor or memory activities from those that are not.
Keywords: amplitude modulation; cryptography; spectral analysis; DRAM clocks; EM side-channel signals; FASE; activity-modulated signals; amplitude-modulation behavior; clock signal; computer systems; electromagnetic side-channel signals; emanated signals; finding amplitude-modulated side-channel emanations; memory activity; memory refreshes; microbenchmarks; periodic signal; physical mechanisms; processor; propagating signals; side-channel security; signals identification; spectral patterns; system activity; voltage regulators; Clocks; Computers; Frequency modulation; Noise; Switches (ID#: 15-7868)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284097&isnumber=7284049
Nemati, A.; Feizi, S.; Ahmadi, A.; Haghiri, S.; Ahmadi, M.; Alirezaee, S., “An Efficient Hardware Implementation of FeW Lightweight Block Cipher,” in Artificial Intelligence and Signal Processing (AISP), 2015 International Symposium on, vol., no., pp. 273–278, 3–5 March 2015. doi:10.1109/AISP.2015.7123493
Abstract: Radio-frequency identification (RFID) tags are becoming a part of our everyday life, with a wide range of applications such as product labeling and supply chain management. These smart and tiny devices have extremely constrained resources in terms of area, computational ability, memory, and power. At the same time, security and privacy remain important problems; with the large-scale deployment of low-resource devices, the need to provide security and privacy among such devices has risen. Resource-efficient cryptographic primitives are essential for realizing both security and efficiency in constrained environments and embedded systems such as RFID tags and sensor nodes. Among those primitives, the lightweight block cipher plays a significant role as a building block for security systems. In 2014, Manoj Kumar et al. proposed a new lightweight block cipher named FeW, which is suitable for extremely constrained environments and embedded systems. In this paper, we simulate and synthesize the FeW block cipher. Implementation results of the FeW cryptography algorithm on an FPGA are presented. The design targets area and cost efficiency.
Keywords: cryptography; field programmable gate arrays; radiofrequency identification; FPGA; FeW cryptography algorithm; FeW lightweight block cipher; RFID; hardware implementation; radio-frequency identification; resource-efficient cryptographic incipient; security system; sensor node; Algorithm design and analysis; Ciphers; Encryption; Hardware; Schedules; Block Cipher; FeW Algorithm; Feistel structure; Field Programmable Gate Array (FPGA); High Level Synthesis (ID#: 15-7869)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7123493&isnumber=7123478
Xiaona Li; Xue Xie; Juan Zeng; Yongming Wang, “Vulnerability Analysis and Verification for LTE Initial Synchronization Mechanism,” in Sarnoff Symposium, 2015 36th IEEE, vol., no., pp. 150–154, 20–22 Sept. 2015. doi:10.1109/SARNOF.2015.7324660
Abstract: Vulnerability analysis is significant for the security of LTE public and private networks. Current research on LTE vulnerability gives little consideration to the balance between the effectiveness and complexity of jamming. This paper analyzes the vulnerability of the LTE initial synchronization mechanism and puts forward an LTE jamming method based on spoofing synchronization signals to verify this vulnerability. By changing the positions of the correlation peaks of initial synchronization with the spoofing signals, the method can make synchronization fail. Simulation results verify the effectiveness of this method, prove the vulnerability of the initial synchronization mechanism, and also reveal the optimal time shift between the spoofing signals and the actual signals.
Keywords: Base stations; Error analysis; Frequency estimation; Frequency synchronization; Jamming; Long Term Evolution; Synchronization; LTE; jamming; synchronization mechanism; vulnerability (ID#: 15-7870)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324660&isnumber=7324628
Chunhua He; Bo Hou; Liwei Wang; Yunfei En; Shaofeng Xie, “A Failure Physics Model for Hardware Trojan Detection Based on Frequency Spectrum Analysis,” in Reliability Physics Symposium (IRPS), 2015 IEEE International, vol., no., pp. PR.1.1–PR.1.4, 19–23 April 2015. doi:10.1109/IRPS.2015.7112822
Abstract: Hardware Trojans embedded by adversaries have emerged as a serious security threat. Until now, there has been no universal method for effective and accurate detection. Since traditional analysis approaches can prove ineffective when the Trojan area is extremely tiny, this paper focuses on a novel detection method based on frequency spectrum analysis. A failure physics model is presented and described in detail. A digital CORDIC IP core is adopted as the golden circuit, while a counter is utilized as the Trojan circuit. An automatic test platform is set up with a Xilinx FPGA, LabVIEW software, and a high-precision oscilloscope. The power trace of the core power supply in the FPGA is monitored and saved for frequency spectrum analysis. Experimental results in both the time domain and the frequency domain accord with theoretical analysis, which verifies that the proposed failure physics model is accurate. In addition, due to its immunity to measurement noise, the method operating in the frequency domain is superior to the traditional method operating in the time domain: it can easily achieve about 0.1% Trojan detection sensitivity, which indicates that the novel detection method is effective.
Keywords: field programmable gate arrays; invasive software; multiprocessing systems; FPGA; LabVIEW software; Trojan area; Trojan circuit; Xilinx FPGA; automatic test platform; core power supply; digital CORDIC IP core; failure physics model; frequency spectrum analysis; golden circuit; hardware Trojan detection; novel detection method; security threat; Frequency-domain analysis; Hardware; Noise; Physics; Spectral analysis; Time-domain analysis; Trojan horses; HardwareTrojan; failure physics model; frequency spectrum analysis; side-channel analysis (ID#: 15-7871)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7112822&isnumber=7112653
Namdev, D.; Bansal, A., “Frequency Domain Analysis for Audio Data Forgery Detection,” in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, vol., no., pp. 702–705, 4–6 April 2015. doi:10.1109/CSNT.2015.168
Abstract: Information security is a classical human need; since ancient times, sensitive and private data have been secured using different kinds of techniques. Today, data have become digitized and, through internet-based applications, spread worldwide. Data available on the internet can easily be duplicated or manipulated for redistribution; this act is known as forgery of digital contents. In the presented work, digital data forgery is investigated and possible solutions are explored. A number of preventive methods are available, such as watermarking and copyright acts, but due to the diversity of data they have become less effective. In this paper, audio-data-based forgery detection is proposed. For that purpose, different audio file formats and their attributes are investigated, and the WAV file format is adopted for analysis. To analyse the audio files, the Fourier transform is applied for time and frequency domain analysis. In addition, using cyclic data preservation and similarity computation, the similarity between two audio files is computed to detect forged audio files. The audio forgery detection tool is implemented in the Visual Studio environment. Additionally, the performance of the proposed system is evaluated in terms of time complexity, space complexity, time domain error, frequency domain error, and the best overlapped file parts. During the experiments, the performance of the proposed audio file analysis system was found optimal and adoptable for forensic usage.
Keywords: Fourier transforms; audio watermarking; digital forensics; frequency-domain analysis; security of data; time-domain analysis; Internet; ancient time; audio data forgery detection; audio file analysis system; audio files Fourier transform; audio forgery detection; copy right acts; cyclic data preservation; digital content forgery; digital data forgery; forensic; frequency domain analysis; frequency domain error; information security; private data; sensitive data; space complexity; time complexity; time domain analysis; time domain error; visual studio environment; watermarking; Accuracy; Arrays; Error analysis; Forgery; Frequency-domain analysis; Memory management; Time-domain analysis; audio forgery; frequency domain analysis; implementation; information security; performance study (ID#: 15-7872)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280009&isnumber=7279856
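The frequency-domain similarity idea in the abstract can be sketched as a cosine similarity between DFT magnitude spectra. This is an editorial illustration, not the paper's tool; the test signals and the particular similarity measure are assumptions.

```python
import cmath
import math

# Illustrative frequency-domain similarity check (not the paper's tool): two
# audio segments are compared by the cosine similarity of their DFT magnitude
# spectra; a copied (forged) segment scores near 1.0 against its source.
def dft_magnitudes(signal):
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def spectral_similarity(a, b):
    ma, mb = dft_magnitudes(a), dft_magnitudes(b)
    dot = sum(x * y for x, y in zip(ma, mb))
    norm = math.sqrt(sum(x * x for x in ma)) * math.sqrt(sum(y * y for y in mb))
    return dot / norm

# Hypothetical segments: a tone, a verbatim copy, and an unrelated tone.
tone = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
copied = list(tone)
other = [math.sin(2 * math.pi * 13 * t / 64 + 1.0) for t in range(64)]
```

Working on magnitude spectra makes the comparison insensitive to phase, which is one reason frequency-domain comparison is convenient for matching overlapped file parts.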
Abirami, T.; Meenalochini, M.; Thilakraj, S., “Secure Continuous Aggregation and Load Balancing with False Temporal Pattern Identification for Wireless Sensor Networks,” in Engineering and Technology (ICETECH), 2015 IEEE International Conference on, vol., no., pp. 1–3, 20–20 March 2015. doi:10.1109/ICETECH.2015.7275014
Abstract: Continuous aggregation is required in sensor applications to obtain the temporal variation information of aggregates. It helps users understand how the environment changes over time and track real-time measurements for trend analysis. In continuous aggregation, an attacker could manipulate a series of aggregation results through compromised nodes to fabricate false temporal variation patterns of the aggregates. Existing secure aggregation schemes conduct one individual verification for each aggregation result; due to the high frequency and long period of continuous aggregation, verifying every epoch would incur a great communication cost. In this paper, we detect and verify false temporal variation patterns by checking only a small part of the aggregation results, which reduces the verification cost. A sampling-based approach is used to check the aggregation results, and we also propose a security mechanism to protect the sampling process.
Keywords: telecommunication security; wireless sensor networks; false temporal pattern identification; false temporal variation patterns; load balancing; real time measurements; sampling process; secure continuous aggregation; security mechanisms; Aggregates; Authentication; Base stations; Conferences; Monitoring; Wireless sensor networks; continuous data aggregation; sampling (ID#: 15-7873)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275014&isnumber=7274993
Xingsi Zhong; Ahmadi, A.; Brooks, R.; Venayagamoorthy, G.K.; Lu Yu; Yu Fu, “Side Channel Analysis of Multiple PMU Data in Electric Power Systems,” in Power Systems Conference (PSC), 2015 Clemson University, vol., no., pp. 1–6, 10–13 March 2015. doi:10.1109/PSC.2015.7101704
Abstract: The deployment of Phasor Measurement Units (PMUs) in an electric power grid will enhance real-time monitoring and analysis of grid operations. The PMU collects bus voltage phasors, branch current phasors, and bus frequency measurements and uses a communication network to transmit the measurements to the respective substation(s)/control center(s). PMU information is sensitive, since missing or incorrect PMU data could lead to grid failure and/or damage. It is important to use encrypted communication channels to avoid cyber attacks. In this study, a side-channel attack using inter-packet delays to isolate the stream of packets of one PMU from an encrypted tunnel is shown. Also, encryption in power system VPNs and vulnerabilities due to side channel analysis are discussed.
Keywords: phasor measurement; power grids; security of data; branch current phasors; bus frequency measurements; bus voltage phasors; electric power grid; electric power systems; encrypted tunnel; inter-packet delays; multiple PMU data; real-time monitoring; side channel analysis; Cryptography; Delays; Hidden Markov models; Logic gates; Phasor measurement units; Cybersecurity; grid operations; phasor measurement units; power system; side channel analysis (ID#: 15-7874)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7101704&isnumber=7101673
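The inter-packet-delay observation in the abstract can be illustrated with a toy periodicity test: a PMU streaming at a fixed reporting rate produces near-constant inter-packet delays even through an encrypted tunnel, so its traffic is distinguishable from bursty background traffic. The traces and threshold below are hypothetical; the paper's actual analysis is more involved.

```python
import statistics

# Toy inter-packet-delay analysis (hypothetical traffic, not the authors' method).
def inter_packet_delays(timestamps):
    """Delays between consecutive packet arrival times."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def looks_periodic(timestamps, rel_jitter=0.05):
    """Flag a stream whose delay jitter is small relative to its mean delay."""
    delays = inter_packet_delays(timestamps)
    mean = statistics.mean(delays)
    return statistics.pstdev(delays) < rel_jitter * mean

# Hypothetical traces: a 30 frames/s PMU stream vs. bursty background traffic.
pmu_ts = [i / 30.0 for i in range(100)]
web_ts = [0.0, 0.01, 0.02, 0.5, 0.51, 2.0, 2.9, 3.0]
```

The point of the sketch is only that timing metadata survives encryption: the classifier never looks at packet contents.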
Bartholemy, A.; Weifeng Chen, “An Examination of Distributed Denial of Service Attacks,” in Electro/Information Technology (EIT), 2015 IEEE International Conference on, vol., no., pp. 274–279, 21–23 May 2015. doi:10.1109/EIT.2015.7293352
Abstract: Denial of service (DoS) attacks have been around for a significant period of time and are growing exponentially in popularity. This paper discusses various DoS attacks and the evolution towards distributed denial of service (DDoS) attacks. An analysis of high-profile attacks is conducted to evaluate the methods used by the attackers. We also describe the roles of enterprises, consumers, and ISPs in reducing the damage and frequency of the attacks.
Keywords: computer network security; DDoS attack; Distributed Denial of Service attack; consumer role; enterprise role; Bandwidth; Computer crime; Games; Internet; Organizations; Servers (ID#: 15-7875)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293352&isnumber=7293314
Hasan, S.R.; Mossa, S.F.; Elkeelany, O.S.A.; Awwad, F., “Tenacious Hardware Trojans Due to High Temperature in Middle Tiers of 3-D ICs,” in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on, vol., no., pp. 1–4, 2–5 Aug. 2015. doi:10.1109/MWSCAS.2015.7282148
Abstract: Hardware security is a major concern in intellectual property (IP) centric integrated circuits (ICs). 3-D IC design augments IP-centric designs. However, 3-D ICs suffer from high temperatures in their middle tiers due to long heat dissipation paths. We anticipate that this problem will exacerbate hardware security issues in 3-D ICs, because high temperature leads to undesired timing characteristics in ICs. In this paper, we provide a detailed analysis of how these delay variations can lead to non-ideal behavior of control paths. It is demonstrated that a hardware intruder can leverage this phenomenon to trigger a payload without requiring a separate triggering circuit. Our simulation results show that a state machine can experience temporary glitches long enough to cause malfunctioning at temperatures of 87°C or above, under nominal frequencies. The overall area overhead of the payload, compared to a very small Mod-3 counter, is 6%.
Keywords: finite state machines; integrated circuit design; invasive software; three-dimensional integrated circuits; 3D IC design; IP centric designs; IP centric integrated circuits; delay variations; hardware intruder; hardware security; intellectual property centric integrated circuits; state machine; tenacious hardware trojans; undesired timing characteristics; Flip-flops; Hardware; Integrated circuits; Law; Radiation detectors; Trojan horses; 3-D IC; hardware Trojan; high temperature (ID#: 15-7876)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282148&isnumber=7281994
Younghyun Kim; Woo Suk Lee; Vijay Raghunathan; Niraj K. Jha; Anand Raghunathan, “Vibration-Based Secure Side Channel for Medical Devices,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2744928
Abstract: Implantable and wearable medical devices are used for monitoring, diagnosis, and treatment of an ever-increasing range of medical conditions, leading to an improved quality of life for patients. The addition of wireless connectivity to medical devices has enabled post-deployment tuning of therapy and access to device data virtually anytime and anywhere but, at the same time, has led to the emergence of security attacks as a critical concern. While cryptography and secure communication protocols may be used to address most known attacks, the lack of a viable secure connection establishment and key exchange mechanism is a fundamental challenge that needs to be addressed. We propose a vibration-based secure side channel between an external device (medical programmer or smartphone) and a medical device. Vibration is an intrinsically short-range, user-perceptible channel that is suitable for realizing physically secure communication at low energy and size/weight overheads. We identify and address key challenges associated with the vibration channel, and propose a vibration-based wakeup and key exchange scheme, named SecureVibe, that is resistant to battery drain attacks. We analyze the risk of acoustic eavesdropping attacks and propose an acoustic masking countermeasure. We demonstrate and evaluate vibration-based wakeup and key exchange between a smartphone and a prototype medical device in the context of a realistic human body model.
Keywords: bioacoustics; biomedical telemetry; body sensor networks; patient diagnosis; patient monitoring; smart phones; telemedicine; vibrations; Implantable medical devices; SecureVibe; acoustic eavesdropping attacks; acoustic masking countermeasure; battery drain attacks; cryptography; device data virtually; intrinsically short-range user-perceptible channel; key exchange mechanism; key exchange scheme; medical condition; medical programmer; patient treatment; physically secure communication; post-deployment tuning; realistic human body model; secure communication protocols; size-weight overheads; smartphone; therapy; viable secure connection establishment; vibration-based secure side channel; vibration-based wakeup; wearable medical devices; wireless connectivity; Accelerometers; Acoustics; Batteries; Cryptography; Protocols; Radio frequency; Vibrations (ID#: 15-7877)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167216&isnumber=7167177
ElSayed, A.; Elleithy, A.; Thunga, P.; Zhengping Wu, “Highly Secure Image Steganography Algorithm Using Curvelet Transform and DCT Encryption,” in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, vol., no., pp. 1–6, 1–1 May 2015. doi:10.1109/LISAT.2015.7160204
Abstract: This paper presents a highly secure data hiding (steganography) system that embeds data in a cover image using the low-frequency Curvelet domain. The contribution of the suggested technique is its high security, because it uses four secret keys (an encryption key, two shuffling keys, and a data hiding key) and only the low-frequency component of the Curvelet domain. The use of the low-frequency component of the Curvelet transform in steganography provides a number of advantages compared to other techniques: 1) reduced computation time, and 2) the Curvelet transform is designed to handle curve discontinuities using only a small number of coefficients, so hiding in the low-frequency components will not affect edge coefficients, which produces better stego-object quality.
Keywords: curvelet transforms; discrete cosine transforms; image coding; steganography; DCT encryption; computation time reduction; curvelet transform; data hiding system; discrete cosine transform; image steganography algorithm; low frequency curvelet domain; stego object quality; Discrete cosine transforms; Encryption; Frequency-domain analysis; Image reconstruction; Wavelet transforms; Curvelet Transform; DCT; Secure Data Hiding; Steganography (ID#: 15-7878)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160204&isnumber=7160171
Le An; Khomh, F., “An Empirical Study of Highly Impactful Bugs in Mozilla Projects,” in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, vol., no., pp. 262–271, 3–5 Aug. 2015. doi:10.1109/QRS.2015.45
Abstract: Bug triaging is the process that consists in screening and prioritising bugs to allow a software organisation to focus its limited resources on bugs with high impact on software quality. In a previous work, we proposed an entropy-based crash triaging approach that can help software organisations identify crash-types that affect a large user base with high frequency. We refer to bugs associated to these crash-types as highly-impactful bugs. The proposed triaging approach can identify highly-impactful bugs only after they have led to crashes in the field for a certain period of time. Therefore, to reduce the impact of highly-impactful bugs on user perceived quality, an early identification of these bugs is necessary. In this paper, we examine the characteristics of highly-impactful bugs in Mozilla Firefox and Fennec for Android, and propose statistical models to help software organisations predict them early on before they impact a large population of users. Results show that our proposed prediction models can achieve a precision up to 64.2% (in Firefox) and a recall up to 98.3% (in Fennec). We also evaluate the benefits of our proposed models and found that, on average, they could help reduce 23.0% of Firefox’ crashes and 13.4% of Fennec’s crashes, while reducing 28.6% of impacted machine profiles for Firefox and 49.4% for Fennec. Software organisations could use our prediction models to catch highly-impactful bugs early during the triaging process, preventing them from impacting a larger user base.
Keywords: entropy; program debugging; software quality; software tools; Android; Mozilla Fennec; Mozilla Firefox; automatic crash reporting tool; bug triaging; entropy-based crash triaging approach; highly-impactful bug; software organisation; software quality; Computer bugs; Data mining; Entropy; Measurement; Predictive models; Software; bug triaging; crash report; entropy analysis; mining software repositories; prediction model (ID#: 15-7879)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272941&isnumber=7272893
Pugh, M.; Brewer, J.; Kvam, J., “Sensor Fusion for Intrusion Detection Under False Alarm Constraints,” in Sensors Applications Symposium (SAS), 2015 IEEE, vol., no., pp. 1–6, 13–15 April 2015. doi:10.1109/SAS.2015.7133634
Abstract: Sensor fusion algorithms allow the combination of many heterogeneous data types to make sophisticated decisions. In many situations, these algorithms give increased performance such as better detectability and/or reduced false alarm rates. To achieve these benefits, typically some system or signal model is given. This work focuses on the situation where the event signal is unknown and a false alarm criterion must be met. Specifically, the case where data from multiple passive infrared (PIR) sensors are processed to detect intrusion into a room while satisfying a false alarm constraint is analyzed. The central challenge is the space of intrusion signals is unknown and we want to quantify analytically the probability of false alarm. It is shown that this quantification is possible by estimating the background noise statistics and computing the Mahalanobis distance in the frequency domain. Using the Mahalanobis distance as the decision metric, a threshold is computed to satisfy the false alarm constraint.
Keywords: frequency-domain analysis; infrared detectors; probability; safety systems; security of data; sensor fusion; signal detection; statistics; Mahalanobis distance; PIR sensor; background noise statistics; false alarm probability constraint; frequency domain analysis; intrusion detection; multiple passive infrared sensor; sensor fusion algorithm; Frequency-domain analysis; Gaussian distribution; Noise; Noise measurement; Principal component analysis; Sensor fusion; Time series analysis (ID#: 15-7880)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133634&isnumber=7133559
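The decision rule described in this abstract — estimate background noise statistics, compute a frequency-domain Mahalanobis distance, and compare it against a threshold chosen to satisfy the false alarm constraint — can be sketched roughly as follows. This is a minimal illustration of the general idea, not the authors’ implementation; the sensor data, feature bins, and false alarm rate are all assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed background: 500 noise-only frames of 64 sensor samples each.
noise = rng.normal(0.0, 1.0, size=(500, 64))

def features(frame):
    # Frequency-domain features: magnitudes of 8 low-frequency FFT bins
    # (DC excluded). The bin choice is illustrative.
    return np.abs(np.fft.rfft(frame, axis=-1))[..., 1:9]

F = np.array([features(f) for f in noise])
mu = F.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(F, rowvar=False))

def mahalanobis_sq(frame):
    d = features(frame) - mu
    return float(d @ cov_inv @ d)

# Threshold set so that only a fraction `pfa` of noise-only frames exceed
# it; the paper derives this analytically, a quantile is used here.
pfa = 0.01
scores = np.array([mahalanobis_sq(f) for f in noise])
threshold = np.quantile(scores, 1.0 - pfa)

# A synthetic "intrusion": noise plus a strong low-frequency component.
t = np.arange(64)
event = rng.normal(0.0, 1.0, 64) + 5.0 * np.sin(2 * np.pi * 3 * t / 64)
print(mahalanobis_sq(event) > threshold)  # detector fires
```

The empirical quantile stands in for the analytical threshold derivation in the paper; both aim to cap the probability that a pure-noise frame exceeds the decision metric.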
Kitsos, P.; Voyiatzis, A.G., “A Comparison of TERO and RO Timing Sensitivity for Hardware Trojan Detection Applications,” in Digital System Design (DSD), 2015 Euromicro Conference on, vol., no., pp. 547–550, 26–28 Aug. 2015. doi:10.1109/DSD.2015.32
Abstract: A Ring Oscillator (RO) integrated in a design can be used for detecting insertion of malicious logic, i.e., a hardware Trojan horse. Recently, the Transition Effect Ring Oscillator (TERO) was proposed as a means for implementing True Random Number Generators (TRNGs) and Physically Uncloneable Functions (PUFs). In this paper, we explore the timing sensitivity of TERO against RO, towards introducing TERO as an alternative means for detecting Trojans on FPGAs.
Keywords: feature extraction; field programmable gate arrays; invasive software; random number generation; FPGA; PUF; TERO timing sensitivity; TRNG; field programmable gate array; hardware Trojan horse detection application; physically uncloneable function; transition effect ring oscillator; true random number generator; Field programmable gate arrays; Frequency measurement; Hardware; Logic gates; Ring oscillators; Table lookup; Trojan horses; FPGA security; Transition Effect Ring Oscillator; hardware Trojan horse; ring oscillators; time analysis (ID#: 15-7881)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302323&isnumber=7302233
Yogeshwaran, S.; Venkatesh, S., “Real Time Voice Identification Based Gear Control System in LMV Using MFCC,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–7, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292393
Abstract: Speech recognition and speaker recognition have wide range of applications in security systems and smart home designs. In this paper we discuss a method by which text dependent speaker recognition can be used to control gear shifting in light motor vehicles which could be helpful for people who lost one hand in accidents to drive cars. Speaker recognition involves two processes namely feature extraction and feature matching. In feature extraction we extract the dominant features from the voice of the speaker for standard text commands during the training session. There are methods such as Linear Predictive Coding (LPC), Mel Frequency Cepstral Coefficients (MFCC) used for feature extraction. After obtaining these features we form a codebook where characteristics of all the speakers are stored. In feature matching we compare the characteristics of the speaker and intelligent decision making based on the predefined threshold identifies the speaker i.e., driver in our scenario. Hidden Markov Model, Gaussian Mixture Model, Vector Quantization and Neural network as multiclass classifier are some of the methods used for feature matching, while here we make use of neural network. Once the command of the driver is detected, then the gear shifting can be done by an electro mechanical system.
Keywords: Gaussian processes; cepstral analysis; control engineering computing; decision making; feature extraction; gears; hidden Markov models; linear predictive coding; mixture models; neural nets; road vehicles; speaker recognition; traffic engineering computing; vector quantisation; Gaussian mixture model; LMV; LPC; electromechanical system; feature matching; hidden Markov model; intelligent decision making; light motor vehicles; linear predictive coding; mel frequency cepstral coefficients; multiclass classifier; neural network; real time voice identification based gear control system; speech recognition; text dependent speaker recognition; vector quantization; Feature extraction; Filter banks; Gears; Mel frequency cepstral coefficient; Noise; Speaker recognition; Speech; BPNN; MFCC (ID#: 15-7882)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292393&isnumber=7292366
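The MFCC feature extraction mentioned in this abstract follows a standard recipe: pre-emphasis, windowing, power spectrum, triangular mel filterbank, log, and a DCT of the log energies. A compact single-frame sketch follows; the sample rate, frame size, and filter counts are assumed for illustration and are not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, n_filters=26, n_ceps=13):
    # Pre-emphasis and Hamming window over a single frame.
    frame = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    frame = frame[:n_fft] * np.hamming(min(len(frame), n_fft))
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft

    # Triangular mel filterbank between 0 Hz and Nyquist.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    log_energy = np.log(fbank @ power + 1e-10)

    # DCT-II of the log filterbank energies -> cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return dct @ log_energy

tone = np.sin(2 * np.pi * 440 * np.arange(512) / 16000)
coeffs = mfcc(tone)
print(coeffs.shape)  # (13,)
```

In a full system these per-frame coefficient vectors would be stacked across frames and fed to the feature-matching stage (a neural network in the paper above).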
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Trusted Platform Modules 2015 |
A Trusted Platform Module (TPM) is a computer chip that can securely store artifacts used to authenticate a network or platform. These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure the platform remains trustworthy. Interest in TPMs is growing due to their potential for solving hard problems in security such as composability and cyber-physical system security, and resilience. The works cited here were published in 2015.
Jaewon Yang, Xiuwen Liu, Shamik Bose; “Preventing Cyber-Induced Irreversible Physical Damage to Cyber-Physical Systems,” CISR ’15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 8. doi:10.1145/2746266.2746274
Abstract: Ever since the discovery of the Stuxnet malware, there have been widespread concerns about disasters via cyber-induced physical damage on critical infrastructures. Cyber physical systems (CPS) integrate computation and physical processes; such infrastructure systems are examples of cyber-physical systems, where computation and physical processes are integrated to optimize resource usage and system performance. The inherent security weaknesses of computerized systems and increased connectivity could allow attackers to alter the systems’ behavior and cause irreversible physical damage, or even worse cyber-induced disasters. However, existing security measures were mostly developed for cyber-only systems and they cannot be effectively applied to CPS directly. Thus, new approaches to preventing cyber physical system disasters are essential. We recognize very different characteristics of cyber and physical components in CPS, where cyber components are flexible with large attack surfaces while physical components are inflexible and relatively simple with very small attack surfaces. This research focuses on the components where cyber and physical components interact. Securing cyber-physical interfaces will complete a layer-based defense strategy in the “Defense in Depth Framework”. In this paper we propose Trusted Security Modules as a systematic solution to provide a guarantee of preventing cyber-induced physical damage even when operating systems and controllers are compromised. TSMs will be placed at the interface between cyber and physical components by adapting the existing integrity enforcing mechanisms such as Trusted Platform Module, Control-Flow Integrity, and Data-Flow Integrity.
Keywords: Cyber-induced physical damage, Trusted Security Module (ID#: 15-7630)
URL: http://doi.acm.org/10.1145/2746266.2746274
Tobias Rauter, Andrea Höller, Nermin Kajtazovic, Christian Kreiner; “Privilege-Based Remote Attestation: Towards Integrity Assurance for Lightweight Clients,” IoTPTS ’15 Proceedings of the 1st ACM Workshop on IoT Privacy, Trust, and Security, April 2015, Pages 3–9. doi:10.1145/2732209.2732211
Abstract: Remote attestation is used to assure the integrity of a trusted platform (prover) to a remote party (challenger). Traditionally, plain binary attestation (i.e., attesting the integrity of software by measuring their binaries) is the method of choice. Especially in the resource-constrained embedded domain with the ever-growing number of integrated services per platform, this approach is not feasible since the challenger has to know all possible ‘good’ configurations of the prover. In this work, a new approach based on software privileges is presented. It reduces the number of possible configurations the challenger has to know by ignoring all services on the prover that are not used by the challenger. For the ignored services, the challenger ensures that they do not have the privileges to manipulate the used services. To achieve this, the prover measures the privileges of its software modules by parsing their binaries for particular system API calls. The results show a significant reduction in need-to-know configurations. The implementation of the central system parts shows its practicability, especially if combined with a fine-grained system API.
Keywords: embedded systems, privilege classification, remote attestation, trusted computing (ID#: 15-7631)
URL: http://doi.acm.org/10.1145/2732209.2732211
Jianbao Ren, Yong Qi, Yuehua Dai, Xiaoguang Wang, Yi Shi; “AppSec: A Safe Execution Environment for Security Sensitive Applications,” VEE ’15 Proceedings of the 11th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, March 2015, Pages 187–199. doi:10.1145/2731186.2731199
Abstract: A malicious OS kernel can easily access a user’s private data in main memory and pry into human-machine interaction data, even on systems that enforce privacy at the application or OS level. This paper introduces AppSec, a hypervisor-based safe execution environment, to protect both the memory data and human-machine interaction data of security sensitive applications from the untrusted OS transparently. AppSec provides several security mechanisms on an untrusted OS. AppSec introduces a safe loader to check the code integrity of the application and dynamic shared objects. During runtime, AppSec protects the application and dynamic shared objects from being modified and verifies kernel memory accesses according to the application’s intention. AppSec provides a device isolation mechanism to prevent the human-machine interaction devices from being accessed by a compromised kernel. On top of that, AppSec further provides a privilege-based window system to protect the application’s X resources. The major advantages of AppSec are threefold. First, AppSec verifies and protects all dynamic shared objects during runtime. Second, AppSec mediates kernel memory access according to the application’s intention rather than coarsely encrypting all of the application’s data. Third, AppSec provides a trusted I/O path from end-user to application. A prototype of AppSec is implemented and shows that AppSec is efficient and practical.
Keywords: human-machine interaction, kernel, privacy, vmm (ID#: 15-7632)
URL: http://doi.acm.org/10.1145/2731186.2731199
Jing (Dave) Tian, Kevin R.B. Butler, Patrick D. McDaniel, Padma Krishnaswamy; “Securing ARP from the Ground Up,” CODASPY ’15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 305–312. doi:10.1145/2699026.2699123
Abstract: The basis for all IPv4 network communication is the Address Resolution Protocol (ARP), which maps an IP address to a device’s Media Access Control (MAC) identifier. ARP has long been recognized as vulnerable to spoofing and other attacks, and past proposals to secure the protocol have often involved modifying the basic protocol. This paper introduces arpsec, a secure ARP/RARP protocol suite which a) does not require protocol modification, b) enables continual verification of the identity of the target (respondent) machine by introducing an address binding repository derived using a formal logic that bases additions to a host’s ARP cache on a set of operational rules and properties, c) utilizes the TPM, a commodity component now present in the vast majority of modern computers, to augment the logic-prover-derived assurance when needed, with TPM-facilitated attestations of system state achieved at viably low processing cost. Using commodity TPMs as our attestation base, we show that arpsec incurs an overhead ranging from 7% to 15.4% over the standard Linux ARP implementation and provides a first step towards a formally secure and trustworthy networking stack.
Keywords: arp, logic, spoofing, trusted computing, trusted protocols (ID#: 15-7633)
URL: http://doi.acm.org/10.1145/2699026.2699123
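The abstract’s address binding repository governed by operational rules can be caricatured with a toy cache policy: accept an ARP reply only if it binds a fresh IP or matches the stored binding, and fall back to an out-of-band attestation check on conflict (the role the TPM plays in arpsec). Everything below is a hypothetical illustration of that idea, not the arpsec logic itself.

```python
class ArpCache:
    """Toy ARP cache with a rule-based binding repository."""

    def __init__(self):
        self.bindings = {}  # ip -> mac

    def on_reply(self, ip, mac, attested=False):
        known = self.bindings.get(ip)
        # Rule 1: a fresh IP, or a reply consistent with the stored
        # binding, is accepted outright.
        if known is None or known == mac:
            self.bindings[ip] = mac
            return True
        # Rule 2: a conflicting claim is rejected unless backed by an
        # out-of-band attestation of the responder's system state.
        if attested:
            self.bindings[ip] = mac
            return True
        return False

cache = ArpCache()
print(cache.on_reply("10.0.0.1", "aa:bb:cc:dd:ee:01"))        # True: fresh binding
print(cache.on_reply("10.0.0.1", "aa:bb:cc:dd:ee:02"))        # False: spoofed rebind rejected
print(cache.on_reply("10.0.0.1", "aa:bb:cc:dd:ee:02", True))  # True: attested rebind accepted
```

The paper derives its rules from a formal logic and proves properties over them; the sketch above only conveys why a rule-plus-attestation cache resists the classic spoofed-reply attack.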
Seongwook Jin, Jinho Seol, Jaehyuk Huh, Seungryoul Maeng; “Hardware-Assisted Secure Resource Accounting Under a Vulnerable Hypervisor,” VEE ’15 Proceedings of the 11th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, March 2015, Pages 201–213. doi:10.1145/2731186.2731203
Abstract: With the proliferation of cloud computing to outsource computation in remote servers, the accountability of computational resources has emerged as an important new challenge for both cloud users and providers. Among the cloud resources, the actual allocation of CPU and memory is difficult to verify, since current virtualization techniques attempt to hide the discrepancy between physical and virtual allocations for the two resources. This paper proposes an online verifiable resource accounting technique for CPU and memory allocation for cloud computing. Unlike prior approaches for cloud resource accounting, the proposed accounting mechanism, called Hardware-assisted Resource Accounting (HRA), uses the hardware support for system management mode (SMM) and virtualization to provide secure resource accounting, even if the hypervisor is compromised. Using the secure isolated execution support of SMM, this study investigates two aspects of verifiable resource accounting for cloud systems. First, this paper presents how the hardware-assisted SMM and virtualization techniques can be used to implement the secure resource accounting mechanism even under a compromised hypervisor. Second, the paper investigates a sample-based resource accounting technique to minimize performance overheads. Using a statistical random sampling method, the technique estimates the overall CPU and memory allocation status with 99%~100% accuracies and performance degradations of 0.1%~0.5%.
Keywords: cloud, resource accounting, virtualization (ID#: 15-7634)
URL: http://doi.acm.org/10.1145/2731186.2731203
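The sample-based accounting idea in this abstract — estimate actual CPU allocation by inspecting a small random subset of scheduling instants from a context the hypervisor cannot forge — can be sketched in a few lines. The tick model, true share, and sample sizes below are illustrative assumptions, not the paper’s parameters.

```python
import random

random.seed(1)

# Hypothetical ground truth a compromised hypervisor could misreport:
# for each scheduling tick, whether the tenant's VM actually held the CPU.
TRUE_SHARE = 0.30
ticks = [1 if random.random() < TRUE_SHARE else 0 for _ in range(200_000)]

# SMM-style verification: independently inspect a small random subset of
# ticks and estimate the allocation from the sample alone.
sample_idx = random.sample(range(len(ticks)), 5_000)
estimate = sum(ticks[i] for i in sample_idx) / len(sample_idx)

print(f"estimated CPU share: {estimate:.3f}")
```

With a few thousand samples the standard error of the estimate is well under one percentage point, which is consistent with the high accuracies and sub-percent overheads the abstract reports for its sampling estimator.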
Fernando A. Teixeira, Gustavo V. Machado, Fernando M.Q. Pereira, Hao Chi Wong, José M. S. Nogueira, Leonardo B. Oliveira; “SIoT: Securing the Internet of Things Through Distributed System Analysis,” IPSN ’15 Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 310–321. doi:10.1145/2737095.2737097
Abstract: The Internet of Things (IoT) is increasingly relevant. This growing importance calls for tools able to provide users with correct, reliable and secure systems. In this paper, we claim that traditional approaches to analyze distributed systems are not expressive enough to address this challenge. As a solution to this problem, we present SIoT, a framework to analyze networked systems. SIoT’s key insight is to look at a distributed system as a single body, and not as separate programs that exchange messages. By doing so, we can crosscheck information inferred from different nodes. This crosschecking increases the precision of traditional static analyses. To construct this global view of a distributed system we introduce a novel algorithm that discovers inter-program links efficiently. Such links let us build a holistic view of the entire network, knowledge that we can then forward to a traditional tool. We prove that our algorithm always terminates and that it correctly models the semantics of a distributed system. To validate our solution, we have implemented SIoT on top of the LLVM compiler, and have used one instance of it to secure 6 ContikiOS applications against buffer overflow attacks. This instance of SIoT produces code that is as safe as code secured by more traditional analyses; however, our binaries are on average 18% more energy-efficient.
Keywords: buffer overflow, distributed system analysis, internet of things, software security (ID#: 15-7635)
URL: http://doi.acm.org/10.1145/2737095.2737097
Ahmad-Reza Sadeghi, Christian Wachsmann, Michael Waidner; “Security and Privacy Challenges in Industrial Internet of Things,” DAC ’15 Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 54. doi:10.1145/2744769.2747942
Abstract: Today, embedded, mobile, and cyberphysical systems are ubiquitous and used in many applications, from industrial control systems, modern vehicles, to critical infrastructure. Current trends and initiatives, such as “Industrie 4.0” and Internet of Things (IoT), promise innovative business models and novel user experiences through strong connectivity and effective use of next generation of embedded devices. These systems generate, process, and exchange vast amounts of security-critical and privacy-sensitive data, which makes them attractive targets of attacks. Cyberattacks on IoT systems are very critical since they may cause physical damage and even threaten human lives. The complexity of these systems and the potential impact of cyberattacks bring upon new threats. This paper gives an introduction to Industrial IoT systems, the related security and privacy challenges, and an outlook on possible solutions towards a holistic security framework for Industrial IoT systems.
Keywords: (not provided) (ID#: 15-7636)
URL: http://doi.acm.org/10.1145/2744769.2747942
Ferdinand Brasser, Brahim El Mahjoub, Ahmad-Reza Sadeghi, Christian Wachsmann, Patrick Koeberl; “TyTAN: Tiny Trust Anchor for Tiny Devices,” DAC ’15 Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 34. doi:10.1145/2744769.2744922
Abstract: Embedded systems are at the core of many security-sensitive and safety-critical applications, including automotive, industrial control systems, and critical infrastructures. Existing protection mechanisms against (software-based) malware are inflexible, too complex, expensive, or do not meet real-time requirements. We present TyTAN, which, to the best of our knowledge, is the first security architecture for embedded systems that provides (1) hardware-assisted strong isolation of dynamically configurable tasks and (2) real-time guarantees. We implemented TyTAN on the Intel® Siskiyou Peak embedded platform and demonstrate its efficiency and effectiveness through extensive evaluation.
Keywords: (not provided) (ID#: 15-7637)
URL: http://doi.acm.org/10.1145/2744769.2744922
Sumra, Irshad Ahmed; Hasbullah, Halabi Bin; Manan, Jamalul-lail Ab, “Using TPM to Ensure Security, Trust and Privacy (STP) in VANET,” in Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, vol., no., pp. 103–108, 17–19 Feb. 2015. doi:10.1109/NSITNSW.2015.7176402
Abstract: Safety and non-safety applications of VANET provide solutions for road accidents in the current traffic system. Security is one of the key research areas for successful implementation of safety and non-safety applications of VANET in a real environment. Trust and privacy are two major components of security, and the dynamic topology and high mobility of vehicles make ensuring them a more challenging task for end users in the network. We propose a new and practical card-based scheme to ensure Security, Trust and Privacy (STP) in a vehicular network. The proposed scheme is based on a security hardware module, i.e., the trusted platform module (TPM). The basic objective of the proposed scheme is to create a trusted security environment for end users in the network so that users can take advantage of potential applications of VANET.
Keywords: telecommunication network topology; telecommunication security; telecommunication traffic; vehicular ad hoc networks; STP; TPM; VANET; current traffic system; dynamic topology; i-e trusted platform module; nonsafety applications; practical card; real environment; road accidents; security hardware module; security-trust-privacy; trusted security environment; vehicle mobility; vehicular ad hoc network; Hardware; Principal component analysis; Privacy; Safety; Security; Vehicles; Vehicular ad hoc networks; Card based scheme; trusted platform module (TPM); Safety Applications; Security; Trust and Privacy (STP); Vehicular Ad hoc Network (VANET) (ID#: 15-7638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176402&isnumber=7176382
Karter, L.; Ferhati, L.; Tafa, I.; Saatciu, D.; Fejzaj, J., “Security Evaluation of Embedded Hardware Implementation,” in Science and Information Conference (SAI) 2015, vol., no., pp. 1272–1276, 28–30 July 2015. doi:10.1109/SAI.2015.7237307
Abstract: The main objective of this paper is the evaluation of security features in TPM implementations. Nowadays security is very important, especially for those who keep important information on their computers, such as passwords, bank accounts and certificates. TPM can help keep this information protected from possible adversaries. In order to trust a TPM-enabled computing device, one must be sure that it really secures the information stored in it. This paper investigates the security features and concerns of TPM and evaluates the advantages and disadvantages it shows. The evaluation is based on different experimental results covering TPM implementation and capabilities, involving various software and computers equipped with embedded TPM.
Keywords: cryptography; data protection; embedded systems; trusted computing; TPM capabilities; TPM implementation; TPM-enabled computing device; embedded hardware implementation; information protection; information secures; security evaluation; security feature evaluation; trusted platform module; Computers; Encryption; Hardware; Linux; Software; Computer Attacks; Cryptographic and Protection Capabilities; Security Features; Time Stamping; Trusted Platform Module (ID#: 15-7639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237307&isnumber=7237120
Kanstrén, T.; Lehtonen, S.; Savola, R.; Kukkohovi, H.; Hätönen, K., “Architecture for High Confidence Cloud Security Monitoring,” in Cloud Engineering (IC2E), 2015 IEEE International Conference on, vol., no., pp. 195–200, 9–13 March 2015. doi:10.1109/IC2E.2015.21
Abstract: Operational security assurance of a networked system requires providing constant and up-to-date evidence of its operational state. In a cloud-based environment we deploy our services as virtual guests running on external hosts. As this environment is not under our full control, we have to find ways to provide assurance that the security information provided from this environment is accurate, and our software is running in the expected environment. In this paper, we present an architecture for providing increased confidence in measurements of such cloud-based deployments. The architecture is based on a set of deployed measurement probes and trusted platform modules (TPM) across both the host infrastructure and guest virtual machines. The TPM are used to verify the integrity of the probes and measurements they provide. This allows us to ensure that the system is running in the expected environment, the monitoring probes have not been tampered with, and the integrity of measurement data provided is maintained. Overall this gives us a basis for increased confidence in the security of running parts of our system in an external cloud-based environment.
Keywords: cloud computing; security of data; virtual machines; TPM; external cloud-based environment; external hosts; guest virtual machines; high confidence cloud security monitoring; host infrastructure; measurement probes; networked system; operational security assurance; operational state; trusted platform modules; Computer architecture; Cryptography; Monitoring; Probes; Servers; Virtual machining; TPM; cloud; monitoring; secure element; security assurance (ID#: 15-7640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092917&isnumber=7092808
Sungjin Park; Jae Nam Yoon; Cheoloh Kang; Kyong Hoon Kim; Taisook Han, “TGVisor: A Tiny Hypervisor-Based Trusted Geolocation Framework for Mobile Cloud Clients,” in Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2015 3rd IEEE International Conference on, vol., no., pp. 99–108, March 30 2015–April 3 2015. doi:10.1109/MobileCloud.2015.17
Abstract: In cloud computing, the geographic location of data is one of the major security concerns of cloud users. To resolve this problem, most previous work has focused on trusted geolocation service in cloud service providers. For example, users are allowed to determine the physical location of their cloud servers and are assured about their requirements of geolocation-based restrictions. However, it is also essential to handle trusted geolocation service at cloud users’ devices in mobile cloud computing. As mobile cloud tenants use cloud services everywhere, trusted geolocation of cloud users raises a new security issue. Thus, in this paper, we present a novel trusted geolocation system named TGVisor for cloud user devices. The key mechanism of TGVisor is providing a trusted channel between the geolocation server and the GPS module in each mobile client device. We leverage the Trusted Platform Module (TPM) and a tiny hypervisor in order to securely perform the attestation of the geolocation of client devices. To prove the practicality of TGVisor, we design and implement a cloud word processor with trusted geolocation service based on Etherpad. We also evaluate the performance of TGVisor on cloud devices and show that it causes only 8.3% overhead in a JavaScript benchmark, which indicates the feasibility of TGVisor.
Keywords: Global Positioning System; Java; cloud computing; geographic information systems; mobile computing; network servers; trusted computing; word processing; Etherpad; GPS module; JavaScript benchmark; TGVisor; TPM; cloud servers; cloud service providers; cloud user devices; cloud word processor; geographic location; geolocation-based restrictions; mobile client device; mobile cloud clients; mobile cloud computing; mobile cloud tenants; security issue; tiny hypervisor-based trusted geolocation framework; trusted geolocation service; trusted platform module; Cryptography; Geology; Mobile communication; Protocols; Servers; Virtual machine monitors; tiny hypervisor; trusted geolocation (ID#: 15-7641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130874&isnumber=7130853
Kashif, U.A.; Memon, Z.A.; Balouch, A.R.; Chandio, J.A., “Distributed Trust Protocol for IaaS Cloud Computing,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 275–279, 13–17 Jan. 2015. doi:10.1109/IBCAST.2015.7058516
Abstract: Due to the economic benefits of cloud computing, consumers have rushed to adopt it. Apart from this rush into the cloud, security concerns are also raised, and these security concerns cause a trust issue in adopting cloud computing. Enterprises adopting the cloud will no longer have control over data, applications and other computing resources that are outsourced to the cloud computing provider. In this paper we propose a novel technique that will not leave the consumer alone in the cloud environment. First, we present a theoretical analysis of a selected state-of-the-art technique and identify issues in IaaS cloud computing. Second, we propose a Distributed Trust Protocol for IaaS Cloud Computing in order to mitigate the trust issue between cloud consumer and provider. Our protocol is distributed in nature and lets the consumer check the integrity of the cloud computing platform that resides on the provider’s premises. We follow the rule of security duty separation between the premises of consumer and provider and let the consumer be the actual owner of the platform. In our protocol, a user VM hosted at the IaaS cloud uses a Trusted Boot process by following the specification of the Trusted Computing Group (TCG) and by utilizing the Trusted Platform Module (TPM) chip of the consumer. The protocol is for Infrastructure as a Service (IaaS), i.e., the lowest service delivery model of cloud computing.
Keywords: cloud computing; formal specification; security of data; trusted computing; virtual machines; IaaS cloud computing; Infrastructure as a Service; TCG specification; TPM chip; Trusted Computing Group; cloud computing platform integrity checking; cloud consumer; cloud environment; cloud provider; computing resources; distributed trust protocol; economic benefit; security concern; security duty separation; service delivery model; trust issue mitigation; trusted boot process; trusted platform module chip; user VM; Hardware; Information systems; Security; Virtual machine monitors; Trusted cloud computing; cloud security and trust; virtualization (ID#: 15-7642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058516&isnumber=7058466
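The Trusted Boot process this protocol relies on follows the TCG measured-boot pattern: each boot stage is hashed and "extended" into a TPM Platform Configuration Register (PCR) before control is handed over, so a verifier can later replay the expected measurements. A minimal sketch of the extend operation (the hash algorithm and stage names are illustrative, not taken from the paper; real TPMs perform the extend in hardware):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TCG-style extend: new PCR = H(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Boot chain: each stage measures the next before handing over control.
pcr = bytes(32)  # PCRs start zeroed at power-on
for stage in [b"bootloader", b"kernel", b"initrd"]:
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())

# A verifier (here, the cloud consumer) replays the same measurements;
# any change to any stage, or to their order, yields a different PCR.
expected = bytes(32)
for stage in [b"bootloader", b"kernel", b"initrd"]:
    expected = pcr_extend(expected, hashlib.sha256(stage).digest())
assert pcr == expected
```

Because the extend is one-way and order-sensitive, the final PCR value commits to the entire boot sequence, which is what lets a remote consumer check platform integrity without trusting the provider's software stack.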
Syed, T.A.; Musa, S.; Rahman, A.; Jan, S., “Towards Secure Instance Migration in the Cloud,” in Cloud Computing (ICCC), 2015 International Conference on, vol., no., pp. 1–6, 26–29 April 2015. doi:10.1109/CLOUDCOMP.2015.7149664
Abstract: Hosting service providers are shifting entirely from dedicated hardware to cloud computing. However, corporations hesitate to move their sensitive data to a solution where the data is no longer under their control. Pay-as-you-go is the primary notion of cloud service providers; however, they share infrastructure between different tenants, which raises security issues. There is a need to give corporations trust and confidence that the security mechanisms used by the service providers are sound. Existing IaaS (Infrastructure as a Service) providers have adopted all the standard software-based security solutions. However, recent research shows that software security solutions are themselves vulnerable to attack. In this regard the Trusted Computing Group (TCG) introduced the hardware root-of-trust concept, where highly sensitive information is stored in a co-processor called the Trusted Platform Module (TPM) rather than in software. Migration is an important process in cloud infrastructures. Service providers offer many solutions that improve the performance of their clients’ services, such as web and database services; examples include CloudFront and Elastic Load Balancing (ELB) offered by Amazon AWS. These services move customers’ data between cloud infrastructures quite often. However, they do not provide hardware-backed solutions, such as Trusted Computing, to migrate customers’ data between infrastructures. In this paper we incorporate a new component into OpenStack called the Secure Instance Migration Module (SIMM). SIMM is backed by Trusted Computing constructs that protect the integrity of instance data while migration takes place. With the SIMM module, cloud customers will have more confidence regarding their sensitive data. We also discuss the architecture and implementation of the SIMM module.
Keywords: cloud computing; data integrity; resource allocation; trusted computing; Amazon AWS; CloudFront; IaaS providers; OpenStack; SIMM module; TCG; TPM; attack vulnerability; client services; cloud infrastructures; cloud service providers; coprocessor; data integrity protection; elastic load balancing; hardware root-of-trust concept; infrastructure as a service providers; secure instance migration module; security mechanisms; software-based security solutions; trusted computing group; trusted platform module; Cloud computing; Clouds; Cryptography; Hardware; Servers; Virtual machine monitors (ID#: 15-7643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149664&isnumber=7149613
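The core property SIMM aims at — instance data arriving at the destination host exactly as it left the source — is typically checked by binding a digest of the image to a key and verifying it after transfer. A minimal sketch of that check (the shared HMAC key is a stand-in assumption; in the paper's design the binding would be rooted in the TPM, and none of these function names come from the authors' code):

```python
import hashlib
import hmac

def digest_image(chunks) -> bytes:
    """Hash an instance image chunk by chunk, as a large file would be."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.digest()

def seal(key: bytes, image_digest: bytes) -> bytes:
    """Source side: bind the digest to key material before migration."""
    return hmac.new(key, image_digest, hashlib.sha256).digest()

def verify(key: bytes, chunks, tag: bytes) -> bool:
    """Destination side: recompute and compare in constant time."""
    return hmac.compare_digest(seal(key, digest_image(chunks)), tag)

key = b"tpm-bound-migration-key"            # hypothetical key material
image = [b"block-%d" % i for i in range(4)]  # stand-in instance image
tag = seal(key, digest_image(image))
assert verify(key, image, tag)               # intact migration passes
assert not verify(key, image[:-1], tag)      # truncated/tampered image fails
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels in the comparison, which matters when the verifier is network-reachable.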
Kanstrén, T.; Lehtonen, S.; Kukkohovi, H., “Opportunities in Using a Secure Element to Increase Confidence in Cloud Security Monitoring,” in Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, vol., no., pp. 1093–1098, June 27 2015–July 2 2015. doi:10.1109/CLOUD.2015.159
Abstract: In this paper we discuss applications of a secure element (SE) such as trusted platform module (TPM) for increasing confidence in cloud security monitoring from the cloud customer viewpoint. Monitoring security of cloud-based systems is similar in many ways to traditional in-house networks, but with the difference that the actual hardware is hosted by an external party and not under our control. This provides some unique challenges and opportunities for security monitoring. We discuss these challenges, identify related opportunities for SE use, and use these to present solutions to the identified challenges. This is based on three different use cases identified together with our industry partners. These are the monitoring of elements of the host infrastructure, monitoring our virtualized guest instances running on this infrastructure, and collecting and archiving log data for later external auditing of the cloud customer services. For each of these, we describe the problem area and different ways we have applied a TPM to increase trust and visibility.
Keywords: cloud computing; security of data; trusted computing; SE; TPM; cloud customer service; cloud security monitoring confidence; secure element; trusted platform module; Cloud computing; Cryptography; Monitoring; Probes; Virtual machining; cloud; security monitoring; tpm (ID#: 15-7644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214169&isnumber=7212169
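The third use case above — collecting and archiving log data for later external auditing — is commonly built on a tamper-evident hash chain, with the chain head periodically anchored in the secure element so the provider cannot silently rewrite history. A minimal, software-only sketch of the chain itself (the record schema is illustrative; the TPM anchoring step is omitted):

```python
import hashlib
import json

def append_entry(log, record):
    """Chain each record to the hash of the previous entry, so any later
    modification or deletion breaks every subsequent link."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "body": body, "hash": entry_hash})

def verify_chain(log) -> bool:
    """An external auditor re-walks the chain from the genesis value."""
    prev = "0" * 64
    for e in log:
        if e["prev"] != prev:
            return False
        if hashlib.sha256((prev + e["body"]).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
for i in range(3):
    append_entry(log, {"event": "login", "seq": i})
assert verify_chain(log)

# Tampering with any archived entry is detected on the next audit.
log[1]["body"] = json.dumps({"event": "login", "seq": 99}, sort_keys=True)
assert not verify_chain(log)
```

Signing or TPM-quoting only the latest chain hash is enough to commit to the whole log, which keeps the per-entry overhead low.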
Hao, F.; Clarke, D.; Zorzo, A., “Deleting Secret Data with Public Verifiability,” in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, April 2015. doi:10.1109/TDSC.2015.2423684
Abstract: Existing software-based data erasure programs can be summarized as following the same one-bit-return protocol: the deletion program performs data erasure and returns either success or failure. However, such a one-bit-return protocol turns the data deletion system into a black box – the user has to trust the outcome but cannot easily verify it. This is especially problematic when the deletion program is encapsulated within a Trusted Platform Module (TPM), and the user has no access to the code inside. In this paper, we present a cryptographic solution that aims to make the data deletion process more transparent and verifiable. In contrast to the conventional black/white assumptions about TPM (i.e., either completely trust or distrust), we introduce a third assumption that sits in between: namely, “trust-but-verify”. Our solution enables a user to verify the correct implementation of two important operations inside a TPM without accessing its source code: i.e., the correct encryption of data and the faithful deletion of the key. Finally, we present a proof-of-concept implementation of the SSE system on a resource-constrained Java card to demonstrate its practical feasibility. To our knowledge, this is the first systematic solution to the secure data deletion problem based on a “trust-but-verify” paradigm, together with a concrete prototype implementation.
Keywords: Encryption; Protocols; Public key; Resistance; Software (ID#: 15-7645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087355&isnumber=4358699
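The two operations the paper makes verifiable — encrypt the data, then faithfully delete the key — are the standard "cryptographic erasure" pattern: once the key is gone, the ciphertext is unrecoverable, so deleting a small key erases arbitrarily large data. A stdlib-only sketch of the premise (the SHA-256 counter-mode keystream is purely illustrative, not a vetted cipher and not the paper's construction; a real system would use AES-GCM or similar):

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative counter-mode keystream built from SHA-256."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = secrets.token_bytes(32), secrets.token_bytes(16)
ct = encrypt(key, nonce, b"sensitive record")
assert encrypt(key, nonce, ct) == b"sensitive record"  # decrypt = re-XOR

key = None  # "deleting" the key: without it, ct is indistinguishable from noise
```

The paper's contribution sits on top of this idea: letting the user verify, without source access, that the TPM really performed the encryption correctly and really destroyed the key, rather than merely returning a one-bit success code.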
Pasquier, T.F.J.-M.; Singh, J.; Eyers, D.; Bacon, J., “CamFlow: Managed Data-sharing for Cloud Services,” in Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, October 2015. doi:10.1109/TCC.2015.2489211
Abstract: A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, while many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen as being of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within ‘Internet of Things’ architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; this is a crucial issue once data has left its owner’s control, passing between cloud-hosted applications and within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners’ dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. This helps those responsible to meet their data management obligations, providing evidence of compliance, and aids in the identification of policy errors and misconfigurations. We present our IFC model and describe and evaluate our IFC architecture and implementation (CamFlow). This comprises an OS-level implementation of IFC with support for application management, together with an IFC-enabled middleware. Our contribution is to demonstrate the feasibility of incorporating IFC into cloud services: we show how the incorporation of IFC into cloud-provided OSs underlying PaaS and SaaS would address application sharing and protection requirements and, more generally, greatly enhance the trustworthiness of cloud services at all levels, at little overhead, and transparently to tenants.
Keywords: Access control; Cloud computing; Computational modeling; Computer architecture; Containers; Context; Audit; Cloud; Compliance; Data Management; Information Flow Control; Linux Security Module; Middleware; PaaS; Provenance; Security (ID#: 15-7646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7295590&isnumber=6562694
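The system-wide, end-to-end flow control that IFC adds on top of access control boils down to a label check applied at every data flow, not just at a policy enforcement point. A minimal sketch of the classic secrecy/integrity check (the label names and entities are invented for illustration and simplify CamFlow's actual model, which also supports declassification via privileges):

```python
def flow_allowed(src: dict, dst: dict) -> bool:
    """A flow src -> dst is safe when the source's secrecy labels are a
    subset of the destination's (no leak), and the destination's
    integrity labels are a subset of the source's (no contamination)."""
    return (src["secrecy"] <= dst["secrecy"]
            and dst["integrity"] <= src["integrity"])

app = {"secrecy": {"medical"}, "integrity": {"vetted"}}
db  = {"secrecy": {"medical", "personal"}, "integrity": set()}
web = {"secrecy": set(), "integrity": set()}

assert flow_allowed(app, db)      # flowing to a more-restrictive sink: OK
assert not flow_allowed(db, web)  # would leak 'medical'/'personal' data
```

Because the check depends only on the labels carried by the data and the endpoints, it keeps applying after data leaves its owner's control, which is exactly the gap the abstract identifies in traditional access control.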
Aversa, R.; Panza, N.; Tasquier, L., “An Agent-Based Platform for Cloud Applications Performance Monitoring,” in Complex, Intelligent, and Software Intensive Systems (CISIS), 2015 Ninth International Conference on, vol., no., pp. 535–540, 8–10 July 2015. doi:10.1109/CISIS.2015.79
Abstract: Resource monitoring is one of the major challenges that virtualization brings to cloud environments. To ensure scalability and dependability, users’ applications are often distributed across several computational resources, such as virtual machines and storage. For this reason, the customer can retrieve information about the cloud infrastructure only by acquiring monitoring services from the same vendor that offers the cloud resources, and is thus forced to trust the cloud provider about the detected performance indexes. In this work we present a complete architecture covering all the monitoring activities that take place within a cloud application lifecycle; we also propose an agent-based implementation of a particular module of the designed architecture that ensures high customization of the monitoring facility and greater tolerance to network and resource failures.
Keywords: cloud computing; information retrieval; multi-agent systems; software performance evaluation; virtualisation; agent-based platform; cloud application performance monitoring; computational resources; monitoring facility; resource monitoring; virtual machines; virtualization; Charge measurement; Cloud computing; Computer architecture; Monitoring; Probes; Standards; Time measurement; Cloud Monitoring; IaaS Cloud; Mobile Agents; Service Level Agreement (ID#: 15-7647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7185244&isnumber=7185122
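One common way a monitoring agent achieves the tolerance to network and resource failures the abstract mentions is retrying failed probes with exponential backoff instead of reporting a gap to the SLA checker. A self-contained sketch of that agent-side behaviour (the probe is a simulated stand-in, not the authors' API; the seed only makes the simulation repeatable):

```python
import random
import time

random.seed(1)  # deterministic simulation for the example

def probe_metric(resource: str) -> float:
    """Stand-in for a probe querying a VM metric (hypothetical API)."""
    if random.random() < 0.3:            # simulate a transient failure
        raise ConnectionError(resource)
    return random.uniform(0.0, 100.0)    # e.g. CPU utilisation in percent

def resilient_sample(resource: str, retries: int = 5) -> float:
    """Retry with exponential backoff so transient faults in the network
    or the monitored resource do not lose a sample."""
    for attempt in range(retries):
        try:
            return probe_metric(resource)
        except ConnectionError:
            time.sleep(0.01 * 2 ** attempt)
    raise RuntimeError(f"{resource}: unreachable after {retries} attempts")

sample = resilient_sample("vm-42")
assert 0.0 <= sample <= 100.0
```

Keeping the retry policy inside the agent is what lets the architecture customize fault handling per resource without changing the collector that aggregates the samples.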