Biblio
With crimes on the rise all around the world, video surveillance is becoming more important day by day. Due to the lack of human resources to monitor the increasing number of cameras manually, new computer vision algorithms for lower- and higher-level tasks are being developed. We have developed a new method that combines Histograms of Oriented Gradients (HOG), the theory of visual saliency, and the Deep Multi-Level Network saliency prediction model to detect human beings in video sequences. Furthermore, we applied the k-means algorithm to cluster the HOG feature vectors of the positively detected windows and determined the path followed by a person in the video. We achieved a detection precision of 83.11% and a recall of 41.27%, and obtained these results 76.866 times faster than classification on normal images.
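A minimal sketch of the feature-extraction and clustering stage described above, assuming scikit-image's HOG descriptor and scikit-learn's KMeans; the window coordinates, cluster count, and placeholder frame are illustrative assumptions, not the authors' implementation.

```python
# Sketch: cluster HOG feature vectors of positively detected windows (assumed stage).
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def window_features(frame_gray, windows):
    """Compute a HOG descriptor for each (x, y, w, h) detection window."""
    feats = []
    for (x, y, w, h) in windows:
        patch = frame_gray[y:y + h, x:x + w]
        feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.array(feats)

# windows: positively classified detections accumulated over the video (placeholder data).
frame = np.random.rand(240, 320)          # stand-in for a grayscale frame
windows = [(10, 20, 64, 128), (15, 22, 64, 128), (120, 40, 64, 128)]
X = window_features(frame, windows)

# Group similar detections; cluster centres ordered over time trace the person's path.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)
```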
In this paper, we propose a theoretical framework to investigate eavesdropping behavior in underwater acoustic sensor networks. In particular, we quantify the eavesdropping activities by the eavesdropping probability. Our derived results show that the eavesdropping probability heavily depends on the acoustic signal frequency, the underwater acoustic channel characteristics (such as the spreading factor and wind speed), and the type of hydrophone (such as isotropic hydrophones and array hydrophones). Simulation results further validate the effectiveness and accuracy of our proposed model.
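To illustrate how the quantities named above interact, the sketch below computes the signal-to-noise ratio seen by an eavesdropping hydrophone using the standard empirical formulas from the underwater-acoustics literature (Thorp absorption, spreading loss, and wind/shipping ambient noise); it is an illustration of the dependencies the paper studies, not the authors' derivation, and the source level and detection threshold are assumed values.

```python
# Sketch: SNR at an eavesdropping hydrophone from standard empirical formulas.
import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's absorption coefficient a(f) in dB/km, f in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2 + 0.003)

def path_loss_db(dist_km, f_khz, spreading_k=1.5):
    """10*log A(l, f): spreading loss plus absorption over distance l (km)."""
    return spreading_k * 10 * math.log10(dist_km * 1000) \
        + dist_km * thorp_absorption_db_per_km(f_khz)

def noise_psd_db(f_khz, shipping=0.5, wind_mps=5.0):
    """Ambient noise PSD from the usual turbulence/shipping/wind/thermal terms."""
    nt = 17 - 30 * math.log10(f_khz)
    ns = 40 + 20 * (shipping - 0.5) + 26 * math.log10(f_khz) \
        - 60 * math.log10(f_khz + 0.03)
    nw = 50 + 7.5 * math.sqrt(wind_mps) + 20 * math.log10(f_khz) \
        - 40 * math.log10(f_khz + 0.4)
    nth = -15 + 20 * math.log10(f_khz)
    return 10 * math.log10(sum(10 ** (x / 10) for x in (nt, ns, nw, nth)))

# In this simplified view, eavesdropping is feasible when the received SNR
# exceeds the hydrophone's detection threshold (assumed numbers below).
source_level_db, threshold_db = 170.0, 10.0
snr = source_level_db - path_loss_db(2.0, 25.0) - noise_psd_db(25.0, wind_mps=10.0)
print(f"SNR at eavesdropper: {snr:.1f} dB, detectable: {snr >= threshold_db}")
```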
Advances in communication technology permit a malware author to introduce code obfuscation techniques, for example Application Programming Interface (API) hooking, to make the footprints of their code more difficult to detect. Signature-based models such as antivirus software are not effective against such attacks. In this paper, an API graph-based model is proposed with the objective of detecting hook attacks during malicious code execution. The proposed model incorporates techniques such as graph generation, graph partition, and graph comparison to distinguish legitimate system calls from malicious ones. The simulation results confirm that the proposed model outperforms existing approaches.
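A minimal sketch of the graph-comparison step, assuming API call traces are turned into directed call-transition graphs and compared by edge-set similarity; the similarity metric, the example call names, and the threshold are illustrative choices, not the paper's exact graph-partition algorithm.

```python
# Sketch: build directed API-call transition graphs and compare them,
# flagging a trace whose graph diverges too far from the legitimate baseline.
import networkx as nx

def call_graph(api_trace):
    """Directed graph whose edges are consecutive API-call transitions."""
    g = nx.DiGraph()
    g.add_edges_from(zip(api_trace, api_trace[1:]))
    return g

def edge_similarity(g1, g2):
    """Jaccard similarity of the two graphs' edge sets."""
    e1, e2 = set(g1.edges()), set(g2.edges())
    return len(e1 & e2) / len(e1 | e2) if (e1 | e2) else 1.0

legit = call_graph(["OpenFile", "ReadFile", "CloseHandle"])
observed = call_graph(["OpenFile", "ReadFile", "WriteProcessMemory",
                       "SetWindowsHookExA", "CloseHandle"])

# Low similarity to the legitimate baseline suggests hook-style injection
# (illustrative threshold).
score = edge_similarity(legit, observed)
print(f"similarity = {score:.2f}, suspicious = {score < 0.5}")
```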
In this paper, a practical quantum public-key encryption model is proposed based on a study of recent quantum public-key encryption schemes. The proposed model makes explicit stipulations on the generation, distribution, authentication, and usage of the secret keys, thus forming a black-box operation. Meanwhile, it encapsulates the processes of encryption and decryption for the users, forming a black-box client side. In our model, each module is independent and can be replaced arbitrarily without affecting the model as a whole. Therefore, this model offers useful guidance for the design and development of quantum public-key encryption schemes.
In this paper, we initiate the study of garbled protocols - a generalization of Yao's garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties, and outputs the entire transcript of the distributed protocol. We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random string model). Next, using garbled protocols we obtain a general compiler that compresses any multiparty secure computation protocol, with an arbitrary number of rounds, into a two-round UC-secure protocol. Previously, two-round multiparty secure computation protocols were only known assuming witness encryption or learning with errors. Benefiting from our generic approach we also obtain protocols (i) for the setting of random access machines (RAM programs), keeping communication and computational costs proportional to running times, and (ii) making only black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations. Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of the ACM, 2012].
We use symbolic formal models to study the composition of public key-based protocols with public key infrastructures (PKIs). We put forth a minimal set of requirements which a PKI should satisfy and then identify several reasons why composition may fail. Our main results are positive and offer various trade-offs which align the guarantees provided by the PKI with those required by the analysis of the protocols with which it is composed. We consider both the case of ideally distributed keys and the case of more realistic PKIs. Our theorems are broadly applicable. Protocols are not limited to specific primitives, and compositionality asks only for minimal requirements on shared ones. Secure composition holds with respect to arbitrary trace properties that can be specified within a reasonably powerful logic. For instance, secrecy and various forms of authentication can be expressed in this logic. Finally, our results alleviate the common yet demanding assumption that protocols are fully tagged.
Business or military missions are supported by hardware and software systems. Unanticipated cyber activities occurring in supporting systems can impact such missions. In order to quantify such impact, we describe a layered graphical model as an extension of forensic investigation. Our model has three layers: the upper layer models the operational tasks that constitute the mission and their inter-dependencies. The middle layer reconstructs attack scenarios from available evidence and establishes their inter-relationships. In cases where not all evidence is available, the lower layer reconstructs potentially missing attack steps. Using the graphs constructed at these three levels, we present a method to compute the impact of attack activities on missions. We use Common Vulnerability Scoring System (CVSS) scores from the NIST National Vulnerability Database (NVD), or forensic investigators' estimates, in our impact computations. We present a case study to show the utility of our model.
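One way to picture the impact computation is as a propagation of normalized CVSS base scores from compromised assets up the dependency edges to mission tasks. The graph, scores, and combination rule in the sketch below are illustrative assumptions, not the paper's exact formulas.

```python
# Sketch: propagate CVSS-based impact from attacked assets to mission tasks
# over a task-dependency graph (illustrative combination rule).
import networkx as nx

g = nx.DiGraph()
# mission task -> supporting asset edges (upper/lower layers of the model, simplified)
g.add_edges_from([("Mission", "TaskA"), ("Mission", "TaskB"),
                  ("TaskA", "WebServer"), ("TaskB", "Database")])

# CVSS base scores (0-10) for assets with evidence of compromise, normalised to [0, 1].
cvss = {"WebServer": 9.8, "Database": 5.5}
impact = {node: cvss.get(node, 0.0) / 10.0 for node in g.nodes}

def node_impact(node):
    """A node inherits the largest impact among its own score and its dependencies."""
    children = list(g.successors(node))
    if not children:
        return impact[node]
    return max(impact[node], max(node_impact(c) for c in children))

print({t: round(node_impact(t), 2) for t in ("TaskA", "TaskB", "Mission")})
```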
In this paper, we address the problem of demand response of electric vehicles (EVs) during microgrid outages in the smart grid through the application of Vehicle-to-Grid (V2G) technology. In particular, we present a novel privacy-preserving double auction scheme. In our auction market, the MicroGrid Center Controller (MGCC) acts as the auctioneer, solving the social welfare maximization problem of matching buyers to sellers, and the cloud is used as a broker between bidders and the auctioneer, protecting privacy through homomorphic encryption. Theoretical analysis is conducted to validate our auction scheme in satisfying the intended economic and privacy properties (e.g., strategy-proofness and k-anonymity). We also evaluate the performance of the proposed scheme to confirm its practical effectiveness.
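A hedged sketch of the welfare-maximizing matching the auctioneer has to solve, shown in plaintext form (highest bids greedily matched against lowest asks); the homomorphic-encryption layer added by the cloud broker and the exact pricing rule are not reproduced, and the bid values are made up.

```python
# Sketch: welfare-maximising matching of EV buyers to sellers in a double auction
# (plaintext view of the MGCC's problem; encrypted bid handling is not shown).
def match(bids, asks):
    """bids/asks: (id, price) pairs. Greedily match highest bids to lowest asks
    while the bid still exceeds the ask; returns matches and total welfare."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)
    asks = sorted(asks, key=lambda a: a[1])
    matches, welfare = [], 0.0
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:
            break
        matches.append((buyer, seller))
        welfare += bid - ask
    return matches, welfare

buyers = [("EV1", 0.30), ("EV2", 0.22), ("EV3", 0.18)]   # $/kWh bids
sellers = [("EV4", 0.15), ("EV5", 0.20), ("EV6", 0.28)]  # $/kWh asks
print(match(buyers, sellers))
```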
Power system simulation environments with appropriate time-fidelity are needed to enable rapid testing of new smart grid technologies and for coupled simulations of the underlying cyber infrastructure. This paper presents such an environment which operates with power system models in the PMU time frame, including data visualization and interactive control action capabilities. The flexible and extensible capabilities are demonstrated by interfacing with a cyber infrastructure simulation.
In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) noise is injected adaptively into features based on the contribution of each feature to the output; and (3) the mechanism can be applied to a variety of different deep neural networks. To achieve this, we perturb the affine transformations of neurons and the loss functions used in deep neural networks. In addition, our mechanism intentionally adds more noise to features that are less relevant to the model output, and vice versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on the MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.
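A minimal sketch of the idea of perturbing an affine transformation with relevance-weighted Laplace noise: coordinates deemed more relevant receive a larger share of the privacy budget and hence less noise. The relevance weights, budget split, and sensitivity value are illustrative assumptions, not the paper's exact mechanism or its sensitivity analysis.

```python
# Sketch: Laplace noise added to an affine transformation, with the budget
# allocated so that less relevant coordinates receive proportionally more noise.
import numpy as np

rng = np.random.default_rng(0)

def noisy_affine(x, W, b, relevance, epsilon, sensitivity=1.0):
    """Perturb Wx + b, distributing epsilon across coordinates by relevance:
    high-relevance coordinates get a larger share of the budget (less noise)."""
    z = W @ x + b
    eps_per_coord = epsilon * relevance / relevance.sum()
    scale = sensitivity / eps_per_coord          # Laplace scale b = delta / epsilon per coordinate
    return z + rng.laplace(0.0, scale)

x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
relevance = np.array([0.7, 0.2, 0.1])            # e.g. estimated feature relevance (assumed)
print(noisy_affine(x, W, b, relevance, epsilon=1.0))
```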
The Air Force is shifting its cybersecurity paradigm from an information technology (IT)-centric toward a mission-oriented approach. Instead of focusing on how to defend its IT infrastructure, it seeks to provide mission assurance by defending mission-relevant cyber terrain, enabling mission execution in a contested environment. In order to actively defend a mission in cyberspace, efforts must be taken to understand and document that mission's dependence on cyberspace and cyber assets. This is known as cyber terrain mission mapping. This paper seeks to define mission mapping and to survey existing methodologies. We also analyze current tools that seek to provide cyber situational awareness through mission mapping or cyber dependency impact analysis, and identify existing shortfalls.
Securing their critical documents on the cloud from data threats is a major challenge faced by organizations today. Controlling and limiting access to such documents requires a robust and trustworthy access control mechanism. In this paper, we propose a semantically rich access control system that employs an access broker module to evaluate access decisions based on rules generated from the organization's confidentiality policies. The proposed system analyzes the multi-valued attributes of the user making the request and of the requested document stored on a cloud service platform before making an access decision. Furthermore, our system guarantees an end-to-end oblivious data transaction between the organization and the cloud service provider using oblivious storage techniques. Thus, an organization can use our system to secure its documents as well as obscure its access pattern details from an untrusted cloud service provider.
5G, the fifth generation of mobile communication networks, is considered one of the main IoT enablers. Connecting billions of things, 5G/IoT will be dealing with trillions of gigabytes of data. Securing such large amounts of data is a very challenging task. Collected data varies from simple temperature measurements to more critical transaction data. Thus, applying uniform security measures is a waste of resources (processing, memory, and network bandwidth). Alternatively, a multi-level security model needs to be applied according to the varying requirements. In this paper, we present the Bell-LaPadula (BLP) multi-level security model, originally applied in the information security domain. We review its application in the network domain and propose a modified version of BLP for the 5G/IoT case. The proposed model is proven to be secure and compliant with the model rules.
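For reference, the classic BLP rules that such a scheme builds on reduce to two checks: no read up (simple security property) and no write down (star property). The sketch below shows those checks over a simple numeric level hierarchy; the level names are illustrative and the paper's 5G/IoT-specific modifications are not reproduced.

```python
# Sketch: classic BLP access checks over numeric security levels
# (the paper's 5G/IoT-specific modifications are not reproduced here).
LEVELS = {"public": 0, "internal": 1, "sensitive": 2, "critical": 3}

def can_read(subject_level, object_level):
    """Simple security property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    """Star property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# A temperature sensor (public) may write into a critical aggregation store,
# but an analytics job cleared only for 'internal' data may not read it.
print(can_write("public", "critical"))    # True
print(can_read("internal", "critical"))   # False
```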
The increasing demand for secure interactions between network domains brings new challenges to access control technologies. In this paper we design an access control framework which provides a multilevel mapping method between hierarchical access control structures for achieving multilevel security protection in cross-domain networks. Hierarchical access control structures ensure rigorous multilevel security within individual domains, and a mapping method based on subject attributes is proposed to determine a subject's security level in its target domain. Experimental results obtained from simulations are also reported to verify the effectiveness of the proposed access control model.
Edge computing is a scheme to improve the performance, latency, and security of IoT applications. However, edge deployment of an application also comes with additional management complexity and an increased attack surface, and could potentially result in a more expensive solution. As a result, the conditions under which an edge deployment of an IoT application delivers a better solution are not always obvious. Metrics that can predict whether or not an IoT application is suitable for edge deployment would provide useful insight into this question. In this paper, we examine the key performance indicators for IoT applications, namely the responsiveness, scalability, and cost models of different types of IoT applications. Our analysis identifies network centrality as a key characteristic which determines whether or not an IoT application is a good candidate for edge deployment. We discuss the different measures of network centrality that can be used to characterize applications, and the relative performance of edge deployment compared to centralized deployment for various IoT applications.
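A hedged sketch of how a centrality measure could be computed for an IoT application and turned into a deployment hint, using betweenness centrality from networkx over the application's communication graph; the example graph, the chosen metric, the cutoff, and the decision direction are illustrative assumptions rather than the paper's analysis.

```python
# Sketch: use the centrality of an IoT application's communication graph to
# gauge edge suitability; the graph, metric, and cutoff are illustrative.
import networkx as nx

# Nodes are devices/services, edges are communication flows of the application.
g = nx.Graph()
g.add_edges_from([("sensor1", "gateway"), ("sensor2", "gateway"),
                  ("sensor3", "gateway"), ("gateway", "analytics"),
                  ("analytics", "dashboard")])

centrality = nx.betweenness_centrality(g)
app_centrality = max(centrality.values())   # traffic dominated by a single local hub

# Illustrative rule: traffic concentrated around a local hub favours edge
# deployment; flat, widely distributed traffic favours the central cloud.
print(f"max betweenness = {app_centrality:.2f}, "
      f"deploy at edge: {app_centrality > 0.5}")
```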
In the process of big data analysis and processing, a key concern blocking users from storing and processing their data in the cloud is their misgivings about the security and performance of cloud services. There is an urgent need for an approach that helps each cloud service provider (CSP) demonstrate that its infrastructure and service behavior can meet users' expectations. However, most prior research focused on validating the process compliance of cloud services without an accurate description of the basic service behaviors, and could not measure security capability. In this paper, we propose CloudSec, a novel approach to verifying cloud service security conformance, which reduces the description gap between cloud provider and customer by modeling cloud service behaviors (the CloudBeh model) and security SLAs (the SecSLA model). These models enable a systematic integration of security constraints and service behavior into the cloud, with UPPAAL used to check conformance: the approach verifies not only conformance to the CloudBeh performance metrics but also whether the security constraints meet the SecSLA. The proposed approach is validated through a case study and experiments with an OpenStack-based cloud storage service, which illustrate the effectiveness of CloudSec and show that it can be applied in real cloud scenarios.
A collaborative recommendation mechanism helps a subject in an open network efficiently find enough referrers who have directly interacted with an object and obtain their trust data. Uncertainty analysis of the collected trust data selects the reliable trust data of trustworthy referrers, and a statistical trust value with a given reliability is then calculated for any object. The subject can then judge the object's trustworthiness and decide whether to interact based on a given threshold. The feasibility of this method is verified by three experiments designed to validate the model's ability to resist malicious services as well as exaggeration and slander attacks. The interaction success rate is significantly improved by the new model, and malicious entities are distinguished more effectively than with the comparative model.
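A minimal sketch of the select-aggregate-decide flow described above: referrer reports are filtered by an uncertainty bound, the remainder is aggregated into a statistical trust value, and the interaction decision is made against a threshold. The uncertainty measure, the plain mean, and the numeric values are illustrative assumptions, not the paper's exact statistics.

```python
# Sketch: filter referrer trust reports by an uncertainty bound, aggregate the
# remainder into a statistical trust value, and decide against a threshold.
import statistics

def aggregate_trust(reports, max_uncertainty=0.3, threshold=0.6):
    """reports: list of (trust_value, uncertainty) pairs from referrers."""
    reliable = [t for (t, u) in reports if u <= max_uncertainty]
    if not reliable:
        return None, False                       # not enough reliable evidence
    trust = statistics.mean(reliable)
    return trust, trust >= threshold

reports = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.7), (0.85, 0.15)]  # third report is unreliable
print(aggregate_trust(reports))                  # -> (0.85, True): interact with the object
```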
Applying trust principles in the Internet of Things (IoT) makes it possible to provide more trustworthy services to the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data-centric and operate in dynamic environments that require immediate action, without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience, and knowledge, to the trust estimation of data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which can be further developed toward standardization for a future data-driven society. The framework's modules, including data trust metric extraction, data trust aggregation, evaluation, and prediction, are elaborated. Finally, a possible design model for implementing the proposed ideas is described.
This paper proposes a prototype of a level 3 autonomous vehicle built on a Raspberry Pi, capable of detecting nearby vehicles using an IR sensor. We make the first attempt to analyze autonomous vehicles at a microscopic level, focusing on each vehicle and its communications with nearby vehicles and road-side units. Two sets of passive and active experiments on a pair of prototypes were run, demonstrating the interconnectivity of the developed prototype. Several sensors were incorporated into a System-on-Chip-based emulation to further demonstrate the feasibility of the proposed model.
The connection of automotive systems with other systems such as road-side units, other vehicles, and various servers in the Internet opens up new ways for attackers to remotely access safety-relevant subsystems within connected cars. The security of connected cars and the whole vehicular ecosystem is thus of utmost importance for consumer trust and acceptance of this emerging technology. This paper describes an approach for on-board detection of unanticipated sequences of events in order to identify suspicious activities. The results show that this approach is fast enough for in-vehicle application at runtime. Several behavior models and synchronization strategies are analyzed in order to narrow down suspicious sequences of events to be sent, in a privacy-respecting way, to a global security operations center for further in-depth analysis.
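A hedged sketch of one simple way to detect unanticipated event sequences on-board: learn the set of event transitions (bigrams) seen during normal operation and flag any transition never observed in that behavior model. The event names, the bigram choice, and the reporting step are illustrative assumptions, not the paper's specific behavior models.

```python
# Sketch: learn the set of event bigrams seen during normal driving and flag
# sequences containing unseen transitions; events and model are illustrative.
def train_bigrams(normal_traces):
    seen = set()
    for trace in normal_traces:
        seen.update(zip(trace, trace[1:]))
    return seen

def suspicious(trace, seen_bigrams):
    """Return the event transitions never observed in the behaviour model."""
    return [pair for pair in zip(trace, trace[1:]) if pair not in seen_bigrams]

normal = [["ignition_on", "gear_drive", "accelerate", "brake", "gear_park"],
          ["ignition_on", "gear_drive", "brake", "gear_park"]]
model = train_bigrams(normal)

observed = ["ignition_on", "diagnostic_session", "gear_drive", "accelerate"]
print(suspicious(observed, model))   # unanticipated transitions -> report to the SOC
```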
Verifying that hardware design implementations adhere to specifications is a time intensive and sometimes intractable problem due to the massive size of the system's state space. Formal methods techniques can be used to prove certain tractable specification properties; however, they are expensive, and often require subject matter experts to develop and solve. Nonetheless, hardware verification is a critical process to ensure security and safety properties are met, and encapsulates problems associated with trust and reliability. For complex designs where coverage of the entire state space is unattainable, prioritizing regions most vulnerable to security or reliability threats would allow efficient allocation of valuable verification resources. Stackelberg security games model interactions between a defender, whose goal is to assign resources to protect a set of targets, and an attacker, who aims to inflict maximum damage on the targets after first observing the defender's strategy. In equilibrium, the defender has an optimal security deployment strategy, given the attacker's best response. We apply this Stackelberg security framework to synthesized hardware implementations using the design's network structure and logic to inform defender valuations and verification costs. The defender's strategy in equilibrium is thus interpreted as a prioritization of the allocation of verification resources in the presence of an adversary. We demonstrate this technique on several open-source synthesized hardware designs.
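For a simplified zero-sum instance of such a game, the defender's equilibrium coverage can be computed with a small linear program: minimize the attacker's best expected payoff subject to a verification budget. The target valuations, budget, and zero-sum simplification in the sketch below are illustrative assumptions, not the paper's full game formulation.

```python
# Sketch: defender coverage in a simplified (zero-sum) Stackelberg security game.
# Variables are per-target coverage c_t in [0, 1] plus z, the attacker's best payoff;
# minimise z subject to (1 - c_t) * value_t <= z and a total verification budget.
import numpy as np
from scipy.optimize import linprog

values = np.array([8.0, 5.0, 3.0, 1.0])   # defender valuation of design regions (illustrative)
budget = 1.5                               # total verification effort available
n = len(values)

c = np.zeros(n + 1)
c[-1] = 1.0                                # objective: minimise z

# (1 - c_t) * v_t <= z   ->   -v_t * c_t - z <= -v_t
A_ub = np.hstack([-np.diag(values), -np.ones((n, 1))])
b_ub = -values
# total coverage <= budget
A_ub = np.vstack([A_ub, np.append(np.ones(n), 0.0)])
b_ub = np.append(b_ub, budget)

bounds = [(0, 1)] * n + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
coverage, z = res.x[:n], res.x[-1]
print("verification priority:", np.round(coverage, 2), "attacker payoff:", round(z, 2))
```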
Cloud computing is a solution to reduce the cost of IT by providing elastic access to shared resources. It also provides on-demand computing power and storage for devices at the edge networks with limited resources. However, the increasing number of connected devices driven by IoT architectures leads to higher network traffic and delay for cloud computing. The centralised architecture of cloud computing also makes the edge networks more susceptible to challenges in the core network. Fog computing is a solution that decreases network traffic and delay and increases network resilience. In this paper, we study how fog computing may improve network resilience. We also conduct a simulation to study the effect of fog computing on network traffic and delay. We conclude that using fog computing prepares the network for better response times for interactive requests and makes the edge networks more resilient to challenges in the core network.
The IoT (Internet of Things) is one of the primary reasons for the massive growth in the number of devices connected to the Internet, leading to an increased volume of traffic in the core network. Fog and edge computing are becoming a solution to handle IoT traffic by moving time-sensitive processing to the edge of the network, while using the conventional cloud for historical analysis and long-term storage. The aim of fog computing is to provide processing, storage, and network communication at the edge of the network in order to reduce delay and network traffic and to decentralise computing. In this paper, we define a framework that realises fog computing and can be extended to install any service of choice. Our framework utilises fog nodes as an extension of the traditional switch to include processing, networking, and storage. The fog nodes act as local decision-making elements that interface with software-defined networking (SDN), so that updates can be pushed throughout the network. To test our framework, we develop an IP spoofing security application and verify its correctness through multiple experiments.
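A minimal sketch of the kind of check such an IP spoofing application could perform at a fog node: a packet arriving on an access port must carry a source address from that port's attached subnet, otherwise a drop rule is requested from the SDN controller. The port-to-subnet bindings and the controller callback are illustrative assumptions, not the framework's actual API.

```python
# Sketch: per-port ingress filtering at a fog node against spoofed source addresses.
import ipaddress

PORT_SUBNETS = {1: ipaddress.ip_network("10.0.1.0/24"),
                2: ipaddress.ip_network("10.0.2.0/24")}

def check_packet(in_port, src_ip, request_drop_rule):
    subnet = PORT_SUBNETS.get(in_port)
    if subnet is None:
        return True                                  # uplink port: no binding to enforce
    if ipaddress.ip_address(src_ip) in subnet:
        return True                                  # legitimate source for this port
    request_drop_rule(in_port, src_ip)               # ask the SDN controller for a flow rule
    return False

drops = []
print(check_packet(1, "10.0.1.7", lambda p, ip: drops.append((p, ip))))   # True
print(check_packet(1, "192.0.2.9", lambda p, ip: drops.append((p, ip))))  # False (spoofed)
print(drops)
```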