Bibliography
In recent years, with the growing demand for big data applications, research on context awareness has once again become active. This paper proposes a new type of context-aware user authentication that controls the authentication level of users using the context of the “physical trust relationship” built between users by visual contact. In our proposal, authentication control is carried out by two mechanisms: “i-Contact” and “k-Contact”. i-Contact is the mechanism that visually confirms the user (the owner of a mobile device) through the eyes of surrounding users. The authenticity of a user can be reliably assessed by these witnesses, even when the user exhibits ambiguous behavior. k-Contact is the mechanism that dynamically changes the authentication level of each user using the context information collected through i-Contact. Once a user is authenticated by eyewitness reports, the user is no longer prompted for a password to unlock his/her mobile device and/or to access confidential resources. Thus, the proposed authentication system securely enhances usability for trusted users only. At the same time, our proposal is expected to promote physical social communication, as face-to-face communication between users is triggered by the proposed authentication system.
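A minimal Python sketch of this two-step control, assuming illustrative names and thresholds (UserContext, min_witnesses, the one-hour validity window) that do not appear in the paper:

```python
from dataclasses import dataclass, field
import time

@dataclass
class UserContext:
    """Context record built up by i-Contact-style eyewitness confirmations.
    All field names are illustrative, not taken from the paper."""
    user_id: str
    witnessed_by: set = field(default_factory=set)   # IDs of users who visually confirmed this user
    last_witnessed: float = 0.0                      # timestamp of the most recent confirmation

def record_eye_contact(ctx: UserContext, witness_id: str) -> None:
    """i-Contact: a nearby user visually confirms the device owner."""
    ctx.witnessed_by.add(witness_id)
    ctx.last_witnessed = time.time()

def authentication_level(ctx: UserContext, min_witnesses: int = 2,
                         validity_s: float = 3600.0) -> str:
    """k-Contact: derive the current authentication level from the collected context.
    A user recently confirmed by enough witnesses skips the password prompt."""
    fresh = (time.time() - ctx.last_witnessed) < validity_s
    if fresh and len(ctx.witnessed_by) >= min_witnesses:
        return "unlocked"            # no password required
    return "password_required"
```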
The revolution in the field of technology has led to the development of cloud computing, which delivers on-demand, easy access to large shared pools of online stored data, software, and applications. It has changed the way IT resources are utilized, but at the cost of security breaches such as phishing attacks, impersonation, and loss of confidentiality and integrity. This research work therefore deals with the core problem of providing absolute security to mobile consumers of the public cloud, improving user mobility by letting them access data stored on the public cloud securely using tokens, without depending on a third party to generate them. This paper presents an approach that simplifies authenticating and authorizing mobile users by implementing a middleware-centric framework called the MiLAMob model on top of the large online data storage system HDFS. It allows consumers to access data from HDFS via mobile devices or through social networking sites, e.g., Facebook, Gmail, and Yahoo, using the OAuth 2.0 protocol. For authentication, tokens are generated using a one-time password generation technique and then encrypted using AES. By implementing flexible user-based policies and standards, the model improves the authorization process.
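The token workflow can be sketched as follows; the 6-digit OTP format, the AES-GCM mode, and the function names are assumptions for illustration, since the abstract only states that one-time passwords are generated and AES-encrypted:

```python
import os
import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def generate_access_token() -> tuple[bytes, bytes, bytes]:
    """Generate a one-time password and return it AES-encrypted.
    The 6-digit OTP and the AES-GCM mode are illustrative choices."""
    otp = f"{secrets.randbelow(10**6):06d}".encode()   # one-time password
    key = AESGCM.generate_key(bit_length=256)          # AES key shared with the HDFS gateway
    nonce = os.urandom(12)                             # unique per encryption
    token = AESGCM(key).encrypt(nonce, otp, None)      # encrypted token sent to the mobile client
    return token, key, nonce

def verify_token(token: bytes, key: bytes, nonce: bytes, presented_otp: str) -> bool:
    """Middleware-side check: decrypt the token and compare it with the OTP presented by the user."""
    recovered = AESGCM(key).decrypt(nonce, token, None).decode()
    return secrets.compare_digest(recovered, presented_otp)
```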
Although wireless communication is integral to our daily lives, there are numerous crucial questions related to coverage, energy consumption, reliability, and security when it comes to industrial deployment. The authors provide an overview of wireless machine-to-machine (M2M) technologies in the context of a smart factory.
The paradigm shift from traditional BPM to subject-oriented BPM (S-BPM) is attributed to the identification of independently acting subjects. As such, they can perform arbitrary actions on arbitrary objects. Abstract State Machines (ASMs) work on a similar basis. Exploring their capabilities with respect to representing and executing S-BPM models strengthens the theoretical foundations of S-BPM and, thus, the validity of S-BPM tools. Moreover, it enables the coherent intertwining of business process modeling with the execution of S-BPM representations. In this contribution we introduce a framework and roadmap for exploring the ASM approach in the context of S-BPM. We also report the major result, namely the implementation of an executable workflow engine with an Abstract State Machine interpreter based on an existing abstract interpreter model for S-BPM (applying the ASM refinement concept). This workflow engine serves as a baseline and reference implementation for further language and processing developments, such as simulation tools, as it has been developed within the Open-S-BPM initiative.
Detecting stationary crowd groups and analyzing their behaviors have important applications in crowd video surveillance, but have rarely been studied. The contributions of this paper are twofold. First, a stationary crowd detection algorithm is proposed to estimate the stationary time of foreground pixels. It employs spatio-temporal filtering and motion filtering in order to be robust to noise caused by occlusions and crowd clutter. Second, in order to characterize the emergence and dispersal processes of stationary crowds and their behaviors during the stationary periods, three attributes are proposed for quantitative analysis. These attributes are recognized with a set of proposed crowd descriptors which extract visual features from the results of stationary crowd detection. The effectiveness of the proposed algorithms is shown through experiments on a benchmark dataset.
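A toy sketch of the per-pixel stationary-time estimation, using a simple uniform spatial filter in place of the paper's spatio-temporal and motion filtering; the window size and decay factor are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def update_stationary_time(stationary_time: np.ndarray,
                           foreground: np.ndarray,
                           window: int = 3) -> np.ndarray:
    """Accumulate per-pixel stationary time from a binary foreground mask.
    Spatially smooth the mask to suppress isolated noisy pixels, then let
    consistently-foreground pixels accumulate time while other pixels decay."""
    smoothed = uniform_filter(foreground.astype(float), size=window)
    stationary = smoothed > 0.5                      # majority of the neighbourhood is foreground
    return np.where(stationary, stationary_time + 1, stationary_time * 0.5)
```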
Threats to modern ICT systems are changing rapidly these days. Organizations are no longer mainly concerned about virus infestation, but increasingly need to deal with targeted attacks. This kind of attack is specifically designed to stay below the radar of standard ICT security systems. As a consequence, vendors have begun to ship self-learning intrusion detection systems with sophisticated heuristic detection engines. While these approaches are promising for easing the serious security situation, one of the main challenges is the proper evaluation of such systems under realistic conditions during development and before roll-out. In particular, the wide variety of configuration settings makes it hard to find the optimal setup for a specific infrastructure. However, extensive testing in a live environment is not only cumbersome but usually also impacts daily business. In this paper, we therefore introduce an evaluation setup that consists of virtual components, which imitate real systems and human user interactions as closely as possible in order to produce the system events, network flows, and logging data of complex ICT service environments. This data is a key prerequisite for the evaluation of modern intrusion detection and prevention systems. With these generated data sets, a system's detection performance can be accurately rated and tuned for very specific settings.
Being the most important critical infrastructure in Cyber-Physical Systems (CPSs), the smart grid exhibits the complicated nature of a large-scale, distributed, and dynamic environment. A taxonomy of attacks is an effective tool for systematically classifying attacks, and it has been placed as a top research topic in CPS by a National Science Foundation (NSF) workshop. Most existing taxonomies of attacks in CPS are inadequate in addressing the tight coupling of the cyber-physical process and/or lack systematic construction. This paper introduces a taxonomy of attacks on agent-based smart grids as an effective tool to provide a structured framework. The proposed idea of introducing the structure of space-time and information flow direction, security features, and cyber-physical causality is innovative, and it establishes a taxonomy design mechanism that can systematically construct a taxonomy of the cyber attacks that could affect the normal operation of agent-based smart grids. Based on the cyber-physical relationship revealed by the taxonomy, a concrete physical-process-based cyber attack detection scheme is proposed. A numerical example is provided to validate the proposed detection scheme.
The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.
Aside from massive advantages in safety and convenience on the road, Vehicular Ad Hoc Networks (VANETs) introduce security risks to the users. Proposals of new security concepts to counter these risks are challenging to verify because of missing real-world implementations of VANETs. To fill this gap, we introduce VANETsim, an event-driven simulation platform specifically designed to investigate application-level privacy and security implications in vehicular communications. VANETsim focuses on realistic vehicular movement on real road networks and communication between the moving nodes. A powerful graphical user interface and an experimentation environment support the user when setting up or carrying out experiments.
Abnormal crowd behavior detection is an important research issue in video processing and computer vision. In this paper we introduce a novel method to detect abnormal crowd behaviors in video surveillance based on interest points. A complex-network-based algorithm is used to detect interest points and extract the global texture features of a scene. The performance of the proposed method is evaluated on publicly available datasets. We present a detailed analysis of the characteristics of crowd behavior in crowd scenes of different densities. The analysis of crowd behavior features and simulation results are also presented to illustrate the effectiveness of the proposed method.
We present an analysis of a heuristic for abrupt change detection of systems with bounded state variations. The proposed analysis is based on the Singular Value Decomposition (SVD) of a history matrix built from system observations. We show that monitoring the largest singular value of the history matrix can be used as a heuristic for detecting abrupt changes in the system outputs. We provide sufficient detectability conditions for the proposed heuristic. As an application, we consider detecting malicious cyber data attacks on power systems and test our proposed heuristic on the IEEE 39-bus testbed.
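A minimal sketch of the heuristic: build a sliding history matrix from past outputs and raise an alarm when its largest singular value crosses a threshold. The window length and threshold here are placeholders; the paper characterizes sufficient detectability conditions rather than prescribing values:

```python
import numpy as np

def largest_singular_value(history: np.ndarray) -> float:
    """Largest singular value of the history matrix (columns = past observations)."""
    return np.linalg.svd(history, compute_uv=False)[0]

def detect_abrupt_change(observations: np.ndarray, window: int, threshold: float) -> list[int]:
    """Flag time steps where sigma_max of the sliding history matrix exceeds the threshold."""
    alarms = []
    for t in range(window, observations.shape[1]):
        hist = observations[:, t - window:t]          # history matrix of the last `window` outputs
        if largest_singular_value(hist) > threshold:
            alarms.append(t)
    return alarms
```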
This paper addresses the potential danger of using integrated circuits that contain malicious hardware modifications hidden in the silicon structure. A so-called hardware Trojan may be added at several stages of the chip development process. This work concentrates on formal hardware Trojan detection during the design phase and highlights the applied verification techniques. Selected methods are discussed, and their combination is used to increase an introduced “Trojan Assurance Level”.
Outsourcing spatial databases to the cloud provides an economical and flexible way for data owners to deliver spatial data to users of location-based services. However, in the database outsourcing paradigm, the third-party service provider is not always trustworthy; therefore, ensuring spatial query integrity is critical. In this paper, we propose an efficient road-network k-nearest-neighbor query verification technique which utilizes the network Voronoi diagram and neighbors to prove the integrity of query results. Unlike previous work that verifies k-nearest-neighbor results in Euclidean space, our approach needs to verify both the distances and the shortest paths from the query point to its kNN results on the road network. We evaluate our approach on real-world road networks together with both real and synthetic points-of-interest datasets. Our experiments run on Google Android mobile devices which communicate with the service provider through wireless connections. The experimental results show that our approach leads to compact verification objects (VOs) and that the verification algorithm on mobile devices is efficient, especially for queries with low selectivity.
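A simplified sketch of the client-side check, assuming the verification object contains a signed road-network subgraph; it recomputes network distances with Dijkstra and omits the Voronoi-neighbour completeness argument used in the paper:

```python
import heapq

def shortest_path_length(graph: dict, source, target) -> float:
    """Dijkstra over a dict-of-dicts road subgraph: graph[u][v] = edge length."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def verify_knn_result(vo_subgraph: dict, query_node, claimed_knn) -> bool:
    """Client-side check: recompute network distances inside the verification object
    and compare them with the distances claimed by the service provider."""
    for poi_node, claimed_dist in claimed_knn:
        if abs(shortest_path_length(vo_subgraph, query_node, poi_node) - claimed_dist) > 1e-6:
            return False          # a claimed distance is inconsistent with the signed subgraph
    return True
```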
Network traffic is a rich source of information for security monitoring. However, the increasing volume of data to process raises issues, rendering holistic analysis of network traffic difficult. In this paper we propose a solution to cope with the tremendous amount of data to analyze for security monitoring purposes. We introduce an architecture dedicated to security monitoring of local enterprise networks. The application domain of such a system is mainly network intrusion detection and prevention, but it can be used as well for forensic analysis. This architecture integrates two systems, one dedicated to scalable distributed data storage and management and the other dedicated to data exploitation. DNS data, NetFlow records, HTTP traffic, and honeypot data are mined and correlated in a distributed system that leverages state-of-the-art big data solutions. Data correlation schemes are proposed and their performance is evaluated on several well-known big data frameworks, including Hadoop and Spark.
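A toy, single-process sketch of one possible correlation scheme (honeypot IPs joined with NetFlow and DNS records); the record field names are assumptions, and the actual schemes run distributed on Hadoop or Spark:

```python
from collections import defaultdict

def correlate_honeypot_with_netflow(honeypot_ips, netflow_records, dns_records):
    """For every IP seen by the honeypot, collect the NetFlow records it appears in
    and any DNS names that resolved to it. Field names are illustrative."""
    dns_by_ip = defaultdict(set)
    for rec in dns_records:                          # rec = {"qname": ..., "answer_ip": ...}
        dns_by_ip[rec["answer_ip"]].add(rec["qname"])

    report = {}
    for ip in honeypot_ips:
        flows = [f for f in netflow_records          # f = {"src_ip": ..., "dst_ip": ..., "bytes": ...}
                 if ip in (f["src_ip"], f["dst_ip"])]
        if flows or dns_by_ip[ip]:
            report[ip] = {"flows": flows, "domains": sorted(dns_by_ip[ip])}
    return report
```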
We introduce a cloud-enabled defense mechanism for Internet services against network and computational Distributed Denial-of-Service (DDoS) attacks. Our approach performs selective server replication and intelligent client re-assignment, turning victim servers into moving targets for attack isolation. We introduce a novel system architecture that leverages a "shuffling" mechanism to compute the optimal re-assignment strategy for clients on attacked servers, effectively separating benign clients from even sophisticated adversaries that persistently follow the moving targets. We introduce a family of algorithms to optimize the runtime client-to-server re-assignment plans and minimize the number of shuffles to achieve attack mitigation. The proposed shuffling-based moving target mechanism enables effective attack containment using fewer resources than attack dilution strategies using pure server expansion. Our simulations and proof-of-concept prototype using Amazon EC2 [1] demonstrate that we can successfully mitigate large-scale DDoS attacks in a small number of shuffles, each of which incurs a few seconds of user-perceived latency.
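A rough sketch of the shuffling idea, using a uniform random split rather than the paper's optimized re-assignment algorithms; the function names and the is_attacked feedback signal are illustrative:

```python
import random

def shuffle_clients(attacked_clients, num_replicas: int, rng=random) -> dict:
    """One shuffle round: spread the clients of an attacked server across fresh replicas.
    Replicas that stay clean afterwards contain only benign clients."""
    assignment = {r: [] for r in range(num_replicas)}
    clients = list(attacked_clients)
    rng.shuffle(clients)
    for i, client in enumerate(clients):
        assignment[i % num_replicas].append(client)
    return assignment

def mitigate(clients, is_attacked, num_replicas: int, max_rounds: int = 10) -> set:
    """Repeatedly shuffle until the set of suspicious clients stops shrinking.
    `is_attacked(replica_clients)` stands in for observing which replicas get hit."""
    suspicious = set(clients)
    for _ in range(max_rounds):
        assignment = shuffle_clients(suspicious, num_replicas)
        hit = set()
        for replica_clients in assignment.values():
            if is_attacked(replica_clients):
                hit.update(replica_clients)
        if hit == suspicious:
            break
        suspicious = hit
    return suspicious          # clients still co-located with attacks after shuffling
```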
IP technology for resource-constrained devices enables transparent end-to-end connections between a vast variety of devices and services in the Internet of Things (IoT). To protect these connections, several variants of traditional IP security protocols have recently been proposed for standardization, most notably the DTLS protocol. In this paper, we identify significant resource requirements for the DTLS handshake when employing public-key cryptography for peer authentication and key agreement purposes. These overheads particularly hamper secure communication for memory-constrained devices. To alleviate these limitations, we propose a delegation architecture that offloads the expensive DTLS connection establishment to a delegation server. By handing over the established security context to the constrained device, our delegation architecture significantly reduces the resource requirements of DTLS-protected communication for constrained devices. Additionally, our delegation architecture naturally provides authorization functionality when leveraging the central role of the delegation server in the initial connection establishment. Hence, in this paper, we present a comprehensive, yet compact solution for authentication, authorization, and secure data transmission in the IP-based IoT. The evaluation results show that, compared to a public-key-based DTLS handshake, our delegation architecture reduces the memory overhead by 64%, computations by 97%, and network transmissions by 68%.
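An illustrative data-structure sketch of the handover: the delegation server completes the public-key handshake and transfers the resulting session state to the constrained device over a pre-configured secure channel. The field names are assumptions rather than the exact context defined in the paper:

```python
from dataclasses import dataclass

@dataclass
class SecurityContext:
    """Session state handed from the delegation server to the constrained device
    after the delegated DTLS handshake. Field names are illustrative."""
    session_id: bytes
    master_secret: bytes        # established during the delegated handshake
    cipher_suite: str           # e.g. a symmetric-key suite the constrained device can use
    epoch: int
    sequence_number: int
    peer_address: tuple         # (host, port) of the remote endpoint

def hand_over(context: SecurityContext, secure_channel_send) -> None:
    """Transfer the established context over a pre-configured secure channel
    between the delegation server and the constrained device."""
    secure_channel_send(context)
```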
Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with the intention of attacking and crippling the IC, similar to viruses infecting the human body. Previous efforts have focused essentially on systems being compromised using HTTs and on the effectiveness of physical parameters, including power consumption, timing variation, and utilization, for detecting HTTs. We propose a novel metric for hardware Trojan detection, coined the HTT detectability metric (HDM), that uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC is tagged as malicious. As opposed to existing efforts, this work investigates a system model from a designer perspective in increasing the security of the device and an adversary model from an attacker perspective exposing and exploiting the vulnerabilities in the device. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans realize a variety of threats ranging from sensitive information leakage and denial of service to defeating the Root of Trust (RoT). Security analysis of the implemented Trojans showed that existing detection techniques based on physical characteristics such as power consumption, timing variation, or utilization alone do not necessarily capture the existence of HTTs, and only a maximum of 57% of the designed HTTs were detected. On the other hand, 86% of the implemented Trojans were detected with the HDM. We further carry out analytical studies to determine the optimal detection threshold that minimizes the sum of the false alarm and missed detection probabilities.
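A minimal sketch of the detection rule: HDM as a weighted sum of normalized deviations from a golden reference, compared against a threshold. Equal weights, relative-deviation normalization, and the parameter names are illustrative; the paper derives the weights and the optimal threshold analytically:

```python
import numpy as np

def hdm(power, timing, utilization, reference, weights=(1/3, 1/3, 1/3)) -> float:
    """HTT detectability metric as a weighted sum of normalized physical parameters.
    `reference` holds the parameters of a known-good (golden) IC."""
    measured = np.array([power, timing, utilization], dtype=float)
    golden = np.array(reference, dtype=float)
    normalized = np.abs(measured - golden) / golden    # relative deviation per parameter
    return float(np.dot(weights, normalized))

def is_malicious(power, timing, utilization, reference, threshold) -> bool:
    """Tag the IC as malicious if the monitored HDM exceeds the detection threshold."""
    return hdm(power, timing, utilization, reference) > threshold
```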
Cell discontinuous transmission (DTX) is a new feature that enables sleep mode operations at base station (BS) side during the transmission time intervals when there is no traffic. In this letter, we analyze the maximum achievable energy saving of the cell DTX. We incorporate the cell DTX with a clean-slate network deployment and obtain optimal BS density for lowest energy consumption satisfying a certain quality of service requirement considering daily traffic variation. The numerical result indicates that the fast traffic adaptation capability of cell DTX favors dense network deployment with lightly loaded cells, which brings about considerable improvement in energy saving.
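A back-of-the-envelope sketch of the DTX saving for a daily load profile; the active and sleep power figures and the hourly profile are placeholders, not the values used in the letter:

```python
def daily_energy_kwh(load_profile, p_active_w=130.0, p_sleep_w=75.0, dtx=True) -> float:
    """Energy of one BS over a day given an hourly load profile (fractions in [0, 1]).
    With cell DTX the BS drops to sleep power during empty transmission intervals;
    without it the BS stays at active power."""
    energy_wh = 0.0
    for load in load_profile:                       # one entry per hour
        if dtx:
            energy_wh += load * p_active_w + (1 - load) * p_sleep_w
        else:
            energy_wh += p_active_w
    return energy_wh / 1000.0

# example: DTX saving for a simple day/night traffic variation
profile = [0.1] * 8 + [0.6] * 12 + [0.3] * 4
saving = 1 - daily_energy_kwh(profile) / daily_energy_kwh(profile, dtx=False)
```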
The emergence of new technologies, together with the popularization of mobile devices and wireless communication systems, imposes a variety of requirements that the current Internet is not able to meet adequately. In this scenario, the innovative information-centric Entity Title Architecture (ETArch), a Future Internet (FI) clean-slate approach, was designed to efficiently cope with the increasing demand for beyond-IP networking services. Nevertheless, despite all of ETArch's capabilities, it was not designed with reliable networking functions, which limits its operability in mobile multimedia networking and will seriously restrict its scope in Future Internet scenarios. Therefore, our work extends ETArch mobility control with advanced quality-oriented mobility functions that deploy mobility prediction, Point of Attachment (PoA) decision, and handover setup, meeting both the session quality requirements of active session flows and the current wireless quality conditions of neighbouring PoA candidates. The effectiveness of the proposed additions was confirmed through a preliminary evaluation carried out in MATLAB, in which we considered distinct application scenarios and showed that the additions outperform the most relevant alternative solutions in terms of performance and quality of service.
With the growing demand for increased spectral efficiency, there has been renewed interest in enabling full-duplex communications. However, existing approaches to enabling full-duplex require a clean-slate approach to address its key challenge, namely self-interference suppression. This serves as a big deterrent to enabling full-duplex in existing cellular networks. Towards our vision of enabling full-duplex in legacy cellular, specifically LTE, networks, with no modifications to existing hardware at the BS and client or to technology-specific industry standards, we present the design of our experimental system FD-LTE, which incorporates a combination of passive self-interference cancellation schemes with legacy LTE half-duplex BS and client devices. We build a prototype of FD-LTE, integrate it with LTE's evolved packet core, and conduct over-the-air experiments to explore the feasibility and potential of full-duplex with legacy LTE networks. We report promising experimental results from FD-LTE, which currently applies to scenarios with the limited ranges that are typical of small cells.
This paper presents a credibility model to assess trust in Web services. The model relies on consumers' ratings, whose accuracy can be questioned due to different biases. A category of consumers known as strict raters is usually excluded from the process of reaching a majority consensus. We demonstrate that this exclusion is not warranted. The proposed model reduces the gap between these consumers' ratings and the current majority rating. Fuzzy clustering is used to compute consumers' credibility. To validate the model, a set of experiments is carried out.
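A simplified sketch of the credibility idea, replacing the fuzzy clustering step with a distance-based weighting against the majority rating for illustration; the sharpness parameter and function name are assumptions:

```python
import numpy as np

def credibility_weights(ratings, sharpness: float = 2.0):
    """Assign each consumer a credibility weight from the distance between their
    rating vector and the current majority (mean) rating, then recompute the
    aggregate as a credibility-weighted mean."""
    ratings = np.asarray(ratings, dtype=float)           # shape: (consumers, services)
    majority = ratings.mean(axis=0)
    dist = np.linalg.norm(ratings - majority, axis=1)
    cred = 1.0 / (1.0 + dist) ** sharpness               # strict raters get lower, not zero, weight
    cred /= cred.sum()
    adjusted_majority = cred @ ratings                    # credibility-weighted aggregate rating
    return cred, adjusted_majority
```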
In a converging world, where borders between countries are surpassed in the digital environment, it is necessary to develop systems that effectively replace face-to-face (“vis-à-vis”) recognition with digital means of recognizing and identifying entities and people. In this work we summarize the current standardization efforts in the area of digital identity management. We identify a number of open challenges that need to be addressed in the near future to ensure the interoperability and usability of digital identity management services in an efficient and privacy-maintaining international framework. These challenges for standardization include: the management of identifiers for digital identities at the global level; attribute management, including attribute format, structure, and assurance; and procedures and protocols to link attributes to digital identities. Attention is drawn to key elements that should be considered in addressing these issues through standardization.
Physical impairments in long-haul optical networks mandate that optical signals be regenerated within the (so-called translucent) network. Being expensive devices, regenerators are expected to be allocated sparsely and must be judiciously utilized. Next-generation optical-transport networks will include multiple domains with diverse technologies, protocols, granularities, and carriers. Because of confidentiality and scalability concerns, the scope of network-state information (e.g., topology, wavelength availability) may be limited to within a domain. In such networks, the problem of routing and wavelength assignment (RWA) aims to find an adequate route and wavelength(s) for lightpaths carrying end-to-end service demands. Some state information may have to be explicitly exchanged among the domains to facilitate the RWA process. The challenge is to determine which information is the most critical and make a wise choice for the path and wavelength(s) using the limited information. Recently, a framework for multidomain path computation called backward-recursive path-computation (BRPC) was standardized by the Internet Engineering Task Force. In this paper, we consider the RWA problem for connections within a single domain and interdomain connections so that the quality of transmission (QoT) requirement of each connection is satisfied, and the network-level performance metric of blocking probability is minimized. Cross-layer heuristics that are based on dynamic programming to effectively allocate the sparse regenerators are developed, and extensive simulation results are presented to demonstrate their effectiveness.