Biblio
This paper presents a framework for a privacy-preserving video delivery system that fulfills users' privacy demands. The proposed framework leverages the inference channels in sensitive behavior prediction and object tracking in a video surveillance system for sequence privacy protection. To this end, we need to capture the different pieces of evidence that can be used to infer identity. Temporal, spatial, and context features are extracted from the surveillance video as observations to perceive privacy demands and their correlations. By quantifying the various pieces of evidence and their utility, we let users subscribe to videos in a viewer-dependent pattern. We implement a prototype system for off-line and on-line requirements in two typical monitoring scenarios and conduct extensive experiments. The evaluation results show that our system can efficiently satisfy users' privacy demands while preserving over 25% more video information than traditional video privacy protection schemes.
The huge popularity of online social networks and the potential financial gain have led to the creation and proliferation of zombie accounts, i.e., fake user accounts. For a considerable payment, zombie accounts can be directed by their managers to provide pre-arranged, biased reactions to social events or to the quality of a commercial product. It is thus critical to detect and screen these accounts. Prior art is either inaccurate or relies heavily on complex posting/tweeting behaviors when classifying normal versus zombie accounts. In this work, we propose a bi-level penalized logistic classifier, an efficient high-dimensional data analysis technique, to detect zombie accounts based on their publicly available profile information and the statistics of their followers' registration locations. Our approach, termed (B)i-level (P)enalized (LO)gistic (C)lassifier (BPLOC), is data adaptive and can be extended to mount more accurate detection. Our experimental results, based on a small number of SINA WeiBo accounts, demonstrate that BPLOC can classify zombie accounts accurately.
Interactive systems are developed according to requirements, which may take the form of documentation, prototypes, diagrams, and so on. The informal nature of system requirements can be a source of problems: a system may not implement the requirements as expected, so a way to validate whether an implementation follows the requirements is needed. We propose a novel approach to validating a system using formal models of the system. In this approach, a set of traces generated from the execution of the real interactive system is searched for over the state space of the formal model. The scalability of the approach is demonstrated by an application to an industrial system in the nuclear plant domain. The combination of trace analysis and formal methods provides feedback that can bring improvements to both the real interactive system and the formal model.
We present a process for detecting IP theft in VLSI devices that exploits the internal test scan chains. While the top-level function of the device is public, the IP owner learns implementation details of the suspect device to find evidence of theft. The scan chains supply direct access to the internal registers in the device, making it possible to learn the logic functions of the internal combinational logic blocks. Our work introduces an innovative way of applying Boolean function analysis techniques to learning digital circuits with the goal of IP theft detection. Using Boolean function learning methods, the learner creates a partial dependency graph of the internal flip-flops. The graph is then partitioned using the SNN graph clustering method, and individual blocks of combinational logic are isolated. These blocks can be matched against known building blocks that compose the original function. This enables reconstruction of the function implementation down to the level of pipeline structure. The IP owner can compare the resulting structure with his own implementation to confirm or refute that an IP violation has occurred. We demonstrate the power of the presented approach on a test case of an open-source Bitcoin SHA-256 accelerator containing more than 80,000 registers. With the presented method we discover the microarchitecture of the module, locate all the main components of the SHA-256 algorithm, and learn the module's flow control.
Currently, different forms of ransomware increasingly threaten Internet users. Modern ransomware encrypts important user data, and it is only possible to recover it once a ransom has been paid. In this article we show how software-defined networking (SDN) can be utilized to improve ransomware mitigation. In more detail, we analyze the behavior of popular ransomware - CryptoWall - and, based on this knowledge, propose two real-time mitigation methods. We then describe the design of an SDN-based system, implemented using OpenFlow, that facilitates a timely reaction to this threat, a crucial factor in the case of crypto ransomware. Importantly, the design does not significantly affect overall network performance. Experimental results confirm that the proposed approach is feasible and efficient.
We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. Instead, each individual takes full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report data with a privacy level of ε. The higher ε is, the less private the reported data is. We derive lower and upper bounds on the value of privacy that are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound shows that it is impossible to buy ε units of privacy with a smaller payment, and the upper bound is given by an achievable payment mechanism that we design. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.
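The notion of "reporting data with a privacy level of ε" can be illustrated, outside the paper's specific payment mechanism, with the classic randomized-response rule: a minimal Python sketch in which a higher ε makes the report more faithful and hence less private.

```python
import math
import random

def randomized_response(true_bit: int, eps: float) -> int:
    """Report the true bit with probability e^eps / (1 + e^eps).

    The mechanism satisfies eps-differential privacy: for either output o,
    P(o | bit=0) / P(o | bit=1) <= e^eps. Larger eps means a more faithful
    (less private) report, matching the convention that higher eps = less
    privacy.
    """
    p_truth = math.exp(eps) / (1.0 + math.exp(eps))
    return true_bit if random.random() < p_truth else 1 - true_bit

# eps = 0 gives a fair coin flip (maximum privacy, useless report);
# a very large eps reports the true bit almost surely.
```

This is only a stand-in for the privacy-preserving reporting step; the paper's contribution is the payment mechanism that makes reporting at level ε a best response, not the randomization rule itself.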
Motor vehicles are widely used, quite valuable, and often targeted for theft. Preventive measures include car alarms, proximity control, and physical locks, which can be bypassed if the car is left unlocked or if the thief obtains the keys. Reactive strategies like cameras, motion detectors, human patrolling, and GPS tracking can monitor a vehicle, but may not detect car thefts in a timely manner. We propose a fast automatic driver recognition system that identifies unauthorized drivers while overcoming the drawbacks of previous approaches. We factor drivers' trips into elemental driving events, from which we extract driving preference features that cannot be exactly reproduced by a thief driving away in the stolen car. We performed a real-world evaluation using driving data collected from 31 volunteers. Experimental results show that we can identify the current driver as the owner with 97% accuracy, while preventing impersonation 91% of the time.
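As an illustration of the idea only (the paper's actual events, features, and classifier are richer), a trip can be scored against an owner profile built from per-event feature vectors; the two features below (mean deceleration, mean turn speed) and the threshold are hypothetical.

```python
import math

# Each elemental driving event (e.g. a braking maneuver or a turn) is
# summarized by a feature vector. A trip is accepted as the owner's if the
# average of its event vectors lies close to the owner's enrollment
# centroid. This nearest-centroid rule is a deliberately simple stand-in
# for the paper's recognition system.

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def is_owner(trip_events, owner_profile, threshold):
    trip_vec = centroid(trip_events)
    return math.dist(trip_vec, owner_profile) <= threshold

# Hypothetical enrollment trips: (mean deceleration, mean turn speed)
owner_trips = [(2.1, 14.9), (1.9, 15.2), (2.0, 15.0)]
profile = centroid(owner_trips)
print(is_owner([(2.0, 15.1)], profile, threshold=0.5))   # True: owner-like trip
print(is_owner([(4.5, 22.0)], profile, threshold=0.5))   # False: thief-like trip
```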
The problem of securely outsourcing computation has received widespread attention due to the development of cloud computing and mobile devices. In this paper, we first propose a secure verifiable outsourcing algorithm for single modular exponentiation based on the one-malicious model of two untrusted servers. The outsourcer can detect any failure with probability 1 if one of the servers misbehaves. We also present another verifiable outsourcing algorithm for multiple modular exponentiations based on the same model. Compared with the state-of-the-art algorithms, the proposed algorithms improve both checkability and efficiency for the outsourcer. Finally, we utilize the proposed algorithms as two subroutines to achieve outsource-secure polynomial evaluation and a ciphertext-policy attribute-based encryption (CP-ABE) scheme with verifiable outsourced encryption and decryption.
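A toy sketch of the one-malicious two-server model (not the paper's actual algorithm, which splits the exponent and adds further blinding): the outsourcer blinds the base, sends the same query to both servers, and accepts only if their answers agree. Since at most one server is dishonest, any wrong answer causes a mismatch, so misbehavior is detected with probability 1.

```python
import random

p = 1000003                               # public prime modulus

def server(u, e, cheat=False):
    r = pow(u, e, p)
    return (r + 1) % p if cheat else r    # a cheating server returns a wrong value

def outsource_pow(a, x, cheat=False):
    b = random.randrange(1, p)            # blinding factor hides the true base
    u = (a * pow(b, -1, p)) % p           # blinded base: a = u * b (mod p)
    r1 = server(u, x)                     # honest server
    r2 = server(u, x, cheat=cheat)        # possibly malicious server
    if r1 != r2:
        raise ValueError("server misbehavior detected")
    # Unblind: a^x = u^x * b^x (mod p). A real scheme also outsources the
    # b^x term; computing it locally here just keeps the sketch short.
    return (r1 * pow(b, x, p)) % p
```

The cross-checking step is what yields probability-1 detection under the one-malicious assumption; the efficiency gains claimed in the paper come from the more elaborate query structure this sketch omits.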
We describe the formalization of a correctness proof for a conflict detection algorithm for XACML (eXtensible Access Control Markup Language). XACML is a standardized declarative access control policy language that is increasingly used in industry. In practice it is common for rule sets to grow large, and contain unintended errors, often due to conflicting rules. A conflict occurs in a policy when one rule permits a request and another denies that same request. Such errors can lead to serious risks involving both allowing access to an unauthorized user as well as denying access to someone who needs it. Removing conflicts is thus an important aspect of debugging policies, and the use of a verified algorithm provides the highest assurance in a domain where security is important. In this paper, we focus on several complex XACML constructs, including time ranges and integer intervals, as well as ways to combine any number of functions using the boolean operators and, or, and not. The latter are the most complex, and add significant expressive power to the language. We propose an algorithm to find conflicts and then use the Coq Proof Assistant to prove the algorithm correct. We develop a library of tactics to help automate the proof.
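The core notion of a conflict, one rule permitting and another denying the same request, can be sketched for the integer-interval case; the verified algorithm in the paper additionally handles time ranges and arbitrary boolean combinations, and its correctness is proved in Coq rather than tested.

```python
# A rule is (effect, closed integer interval) over some request attribute,
# e.g. the hour of day. A Permit rule and a Deny rule conflict exactly when
# some value satisfies both conditions, i.e. when the intervals overlap.

def intervals_overlap(a, b):
    """a, b are closed integer intervals (lo, hi)."""
    return max(a[0], b[0]) <= min(a[1], b[1])

def find_conflicts(rules):
    """rules: list of (effect, interval); returns conflicting index pairs."""
    conflicts = []
    for i, (eff_i, iv_i) in enumerate(rules):
        for j, (eff_j, iv_j) in enumerate(rules[i + 1:], start=i + 1):
            if eff_i != eff_j and intervals_overlap(iv_i, iv_j):
                conflicts.append((i, j))
    return conflicts

rules = [("Permit", (9, 17)), ("Deny", (16, 20)), ("Deny", (0, 8))]
print(find_conflicts(rules))   # [(0, 1)]: hours 16-17 are both permitted and denied
```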
With the proliferation of video editing tools, the trustworthiness of video information has become a highly sensitive issue. Today many devices, such as CCTV systems, digital cameras, and mobile phones, can capture digital video, and these videos may be transmitted over the Internet or other insecure channels. As digital video can be used as supporting evidence, it has to be protected against manipulation or tampering. Most video authentication techniques are based on watermarking and digital signatures; these are effective for copyright purposes but difficult to apply in other settings, such as video surveillance or videos captured by consumer cameras. In this paper we propose an intelligent technique for video authentication that uses the video's local information, which makes it useful for real-world applications. The proposed algorithm relies on the video's statistical local information and was applied to a dataset of videos captured by a range of consumer video cameras. The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
Wearable devices, small electronic devices worn on the human body, are equipped with limited processing and storage capacity and offer various integrated functionalities. Wearable devices are becoming increasingly popular, and many kinds are being launched in the market; however, they require a powerful local hub, usually a smartphone, to supplement their processing and storage capacities for advanced functionalities. Sometimes it is inconvenient to carry the local hub (smartphone); thus, many wearable devices are equipped with a Wi-Fi interface, enabling them to exchange data with the local hub through the Internet when it is not nearby. However, this results in long response times and restricted functionality. In this paper, we present a virtual local-hub solution, which utilizes nearby network equipment (e.g., Wi-Fi APs) as the local hub. Since migrating all applications serving each wearable device would consume too many networking and storage resources, the proposed solution deploys function modules to multiple network nodes and enables remote function-module sharing among different users and applications. To reduce the impact of the solution on network bandwidth, we propose a heuristic algorithm for function-module allocation that minimizes total bandwidth consumption. We conduct a series of experiments, and the results show that the proposed solution can reduce bandwidth consumption by up to half while still serving all requests, given a large number of service requests.
This paper considers physical-layer security for cluster-based cooperative wireless sensor networks (WSNs), where each node is equipped with a single antenna and sensor nodes cooperate at each cluster of the network to form a virtual multi-input multi-output (MIMO) communication architecture. We propose a joint cooperative beamforming and jamming scheme to enhance the security of the WSN, in which some of the sensor nodes in Alice's cluster transmit beamforming signals to Bob while some of the sensor nodes in Bob's cluster jam Eve with artificial noise. Optimizing the beamforming and jamming vectors to minimize total energy consumption while satisfying the quality-of-service (QoS) constraints is an NP-hard problem. Fortunately, through reformulation, the problem is proved to be a quadratically constrained quadratic program (QCQP), which can be solved with the Solving Constraint Integer Programs (SCIP) algorithm. Finally, we give simulation results for the proposed scheme.
This study examines the effectiveness of virtual reality technology at creating an immersive user experience in which participants experience first-hand the extreme negative consequences of smartphone use while driving. Research suggests that distracted driving caused by smartphones is related to smartphone addiction and causes fatalities. Twenty-two individuals participated in the virtual reality user experience (VRUE), in which they were asked to drive a virtual car using an Oculus Rift headset, a Leap Motion hand tracking device, and a force-feedback steering wheel and pedals. While driving in the simulation, participants were asked to interact with a smartphone; after a period of trying to manage both tasks, a vehicle appeared in front of them and they were involved in a head-on collision. Initial results indicated that participants felt a strong sense of presence, and a change in or reinforcement of their perception of the dangers of smartphone use while driving was observed.
This paper presents vTPM (virtual Trusted Platform Module) Dynamic Trust Extension (DTE), an approach designed to satisfy the requirements of frequent migrations. With DTE, a vTPM is delegated the capability of signing attestation data from the underlying pTPM (physical TPM), together with a valid time token issued by an Authentication Server (AS). DTE maintains a strong association between a vTPM and its underlying pTPM, and clearly distinguishes vTPM from pTPM because of the different security strengths of the two types of TPM. In DTE, there is no need for a vTPM to re-acquire Identity Key (IK) certificates after migration, and the pTPM can revoke trust in real time. Furthermore, DTE provides forward security. Performance measurements of a prototype show that DTE is feasible.
With an immense number of threats pouring in from nation states, hacktivists, terrorists, and cybercriminals, a globally secure infrastructure becomes a major obligation. Most critical infrastructures were originally designed to work in isolation from the normal communication network, but with the advent of the "Smart Grid", which uses advanced and intelligent approaches to control critical infrastructure, these cyber-physical systems need access to the communication network. Consequently, such critical systems have become prime targets, and securing critical infrastructure is currently one of the most challenging research problems. Performing an extensive security analysis involving experiments with cyber-attacks on a live industrial control system (ICS) is not possible; therefore, researchers generally resort to test beds and complex simulations to answer questions related to SCADA systems. Since all conclusions are drawn from the test bed, it is necessary to perform validation against a physical model. This paper examines the fidelity of a virtual SCADA test bed with respect to a physical test bed and allows for the study of the effects of cyber-attacks on both systems.
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and improve simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information to tune the parameters of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation must minimize overhead in order to collect accurate data about simulation performance; to ensure that it does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their non-instrumented counterparts and determines the amount of perturbation at different simulation scales.
The power grid is a prime target of cyber criminals and warrants special attention, as it forms the backbone of major infrastructures that drive the nation's defense and economy. Developing security measures for the power grid is challenging since it is physically dispersed and interacts dynamically with associated cyber infrastructures that control its operation. This paper presents a mathematical framework to investigate the stability of two-area systems under data attacks on the Automatic Generation Control (AGC) system. Analytical and simulation results are presented to identify attack levels that could drive the AGC system to become unstable.
This paper proposes a practical time-phased model to analyze the vulnerability of power systems over a time horizon in which the scheduled maintenance of network facilities is considered. This model is an efficient tool that system operators can use to assess how vulnerable their systems become given a set of scheduled facility outages. The final model is presented as a single-level Mixed-Integer Linear Programming (MILP) problem solvable with commercially available software. Results obtained on the well-known IEEE 24-Bus Reliability Test System (RTS) demonstrate the applicability of the model and highlight the necessity of considering scheduled facility outages when assessing the vulnerability of a power system.
Vulnerability Detection Tools (VDTs) have been researched and developed to prevent security problems by identifying vulnerabilities that exist on a server in advance. Using these tools, administrators can protect their servers from attacks. Different tools, however, produce different results, since their detection methods are not the same. For this reason, it is recommended that results be gathered from many tools rather than from a single one, but installing all of the tools incurs significant overhead. In this paper, we propose a novel vulnerability detection mechanism using Open API and use OpenVAS for actual testing.
System administrators have unlimited access to system resources. As the Snowden case shows, these permissions can be exploited to steal valuable personal, classified, or commercial data. In this work we propose a strategy that increases organizational information security by constraining IT personnel's view of the system and monitoring their actions. To this end, we introduce the abstraction of perforated containers – while regular Linux containers are too restrictive to be used by system administrators, by "punching holes" in them we strike a balance between information security and administrative needs. Our system predicts which system resources should be accessible for handling each IT issue, creates a perforated container with the corresponding isolation, and deploys it on the relevant machines as needed for fixing the problem. Under this approach, the system administrator retains superuser privileges while operating only within the container limits. We further provide means for the administrator to bypass the isolation and perform operations beyond these boundaries; however, such operations are monitored and logged for later analysis and anomaly detection. We provide a proof-of-concept implementation of our strategy, along with a case study on the IT database of IBM Research in Israel.
In 2012, two academic groups reported having computed the RSA private keys for 0.5% of HTTPS hosts on the internet, and traced the underlying issue to widespread random number generation failures on networked devices. The vulnerability was reported to dozens of vendors, several of whom responded with security advisories, and the Linux kernel was patched to fix a boot-time entropy hole that contributed to the failures. In this paper, we measure the actions taken by vendors and end users over time in response to the original disclosure. We analyzed public internet-wide TLS scans performed between July 2010 and May 2016 and extracted 81 million distinct RSA keys. We then computed the pairwise common divisors for the entire set in order to factor over 313,000 keys vulnerable to the flaw, and fingerprinted implementations to study patching behavior over time across vendors. We find that many vendors appear to have never produced a patch, and we observed little to no patching behavior by end users of affected devices. The number of vulnerable hosts increased in the years after notification and public disclosure, and several newly vulnerable implementations have appeared since 2012. Vendor notification, positive vendor responses, and even vendor-produced public security advisories appear to have little correlation with end-user security.
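The factoring step rests on a simple fact: two RSA moduli that share a prime factor are both broken by a single gcd computation. The naive pairwise sketch below illustrates the idea; at the scale of 81 million keys, a quasilinear product-tree batch-GCD algorithm is used instead.

```python
import math

# Two moduli generated from a bad RNG may reuse a prime. gcd(n1, n2) then
# reveals that prime, and division gives the cofactor, fully factoring
# both keys. The quadratic double loop is for illustration only.

def factor_shared(moduli):
    broken = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = math.gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:
                broken[moduli[i]] = (g, moduli[i] // g)
                broken[moduli[j]] = (g, moduli[j] // g)
    return broken

# Toy moduli: the first two share the prime 101; the third is unrelated.
p, q1, q2 = 101, 103, 107
weak = factor_shared([p * q1, p * q2, 109 * 113])
print(weak[p * q1])   # (101, 103): the shared prime recovers both factors
```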
In recent decades, owing to the fast growth of the World Wide Web, HTTP automated software/applications (auto-ware) have bloomed for multiple purposes. Unfortunately, besides benign applications such as virus-definition or operating-system updates, auto-ware can also act as abnormal processes such as botnets, worms, viruses, spyware, and advertising software (adware). Auto-ware thus consumes network bandwidth and may become an internal security threat; the domains/servers it accesses may also be malicious. Understanding the behaviour of HTTP auto-ware is beneficial for anomaly/malware detection, network management, traffic engineering, and security. In this paper, HTTP auto-ware communication behaviour is analysed and modeled, and a method for filtering out its domains/servers is proposed. The filtered results can be used as a good resource for other security purposes such as malicious domain/URL detection and filtering, or the investigation of HTTP malware originating from internal threats.
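One way to operationalize "filtering out auto-ware domains" is the observation that automated HTTP clients often poll a server at near-constant intervals; the heuristic below (an illustrative sketch, not necessarily the paper's model, with a hypothetical threshold) flags a client-domain pair whose inter-request gaps have a low coefficient of variation.

```python
import statistics

def looks_automated(timestamps, cv_threshold=0.1):
    """Flag a request series as auto-ware-like when its inter-request
    intervals are nearly constant (low coefficient of variation)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False                     # too few requests to judge
    cv = statistics.pstdev(gaps) / statistics.mean(gaps)
    return cv < cv_threshold

print(looks_automated([0, 300, 600, 900, 1200]))   # True: strict 5-minute polling
print(looks_automated([0, 12, 95, 110, 400]))      # False: human-like browsing
```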
For many wiretap channel models asymptotically optimal coding schemes are known, but less effort has been put into actual realizations of wiretap codes for practical parameters. Bounds on the mutual information and error probability when using coset coding on a Rayleigh fading channel were recently established by Oggier and Belfiore, and the results in this paper build on their work. However, instead of using their ultimate inverse norm sum approximation, a more precise expression for the eavesdropper's probability of correct decision is used in order to determine a general class of good coset codes. The code constructions are based on well-rounded lattices arising from simple geometric criteria. In addition to new coset codes and simulation results, novel number-theoretic results on well-rounded ideal lattices are presented.
In this study, we present WindTalker, a novel and practical keystroke inference framework that allows an attacker to infer sensitive keystrokes on a mobile device through WiFi-based side-channel information. WindTalker is motivated by the observation that keystrokes on mobile devices lead to different hand coverage and finger motions, which introduce a unique interference to the multi-path signals that is reflected in the channel state information (CSI). The adversary can exploit the strong correlation between CSI fluctuations and keystrokes to infer the user's number input. WindTalker presents a novel approach to collecting the target's CSI data by deploying a public WiFi hotspot. Compared with previous keystroke inference approaches, WindTalker neither deploys external devices close to the target device nor compromises the target device; instead, it utilizes the public WiFi to collect the user's CSI data, which is easy to deploy and difficult to detect. In addition, it jointly analyzes the traffic and the CSI to launch keystroke inference only during the sensitive period when password entry occurs. WindTalker can be launched without visually observing the smartphone user's input process or backside motion, and without installing any malware on the target device. We implemented WindTalker on several mobile phones and performed a detailed case study to evaluate the practicality of password inference against Alipay, the largest mobile payment platform in the world. The evaluation results show that the attacker can recover the key with a high success rate.
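The sensing principle can be caricatured in a few lines (a toy stand-in for real CSI processing): keystroke-induced finger motion appears as variance bursts in the CSI amplitude stream, so high-variance windows mark likely keystroke instants. Which key was pressed is a separate classification step omitted here.

```python
import statistics

def keystroke_windows(csi, win=4, var_threshold=0.5):
    """Return the start indices of CSI windows whose amplitude variance
    exceeds the threshold, i.e. likely keystroke instants."""
    hits = []
    for start in range(0, len(csi) - win + 1, win):
        window = csi[start:start + win]
        if statistics.pvariance(window) > var_threshold:
            hits.append(start)
    return hits

quiet = [1.0, 1.1, 0.9, 1.0]          # stable channel, no touch
tap   = [1.0, 3.2, 0.2, 2.8]          # burst caused by finger motion
print(keystroke_windows(quiet + tap + quiet))   # [4]: only the tap window fires
```

The window length, threshold, and amplitude values here are invented for illustration; WindTalker additionally correlates these instants with traffic analysis to isolate the password-entry period.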



