Bibliography
In this article, researcher collaboration patterns and research topics in Intelligence and Security Informatics (ISI) are investigated using social network analysis approaches. The collaboration networks exhibit the scale-free property and the small-world effect. From these networks, the authors identify key researchers and institutions, as well as three important research topics.
The electric network frequency (ENF) criterion is a recently developed technique for audio timestamp identification, which involves matching an extracted ENF signal against reference data. For nearly a decade, the conventional matching criterion has been minimum mean squared error (MMSE) or maximum correlation coefficient. However, the corresponding performance is severely limited by low signal-to-noise ratio, short recording durations, frequency-resolution problems, and so on. This paper presents a threshold-based dynamic matching algorithm (DMA) capable of autocorrecting noise-affected frequency estimates. The threshold is chosen according to the frequency resolution determined by the short-time Fourier transform (STFT) window size. A penalty coefficient is introduced to monitor the autocorrection process and ultimately determine the estimated timestamp. It is then shown that the DMA generalizes the conventional MMSE method. By accounting for the mainlobe width in the STFT caused by limited frequency resolution, the DMA achieves improved identification accuracy and robustness against higher noise levels and the offset problem. Synthetic performance analysis and practical experimental results illustrate the advantages of the DMA.
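For context, the conventional MMSE criterion that the DMA generalizes can be sketched in a few lines (a minimal illustration with hypothetical names; the DMA's threshold-based autocorrection and penalty coefficient are omitted):

    import numpy as np

    def mmse_timestamp(enf_extracted, enf_reference):
        # Conventional MMSE matching: slide the extracted ENF segment over
        # the reference database and return the offset with the minimum
        # mean squared error. The DMA augments this with threshold-based
        # autocorrection of noise-affected frequency estimates.
        n = len(enf_extracted)
        errors = [np.mean((enf_reference[t:t + n] - enf_extracted) ** 2)
                  for t in range(len(enf_reference) - n + 1)]
        return int(np.argmin(errors))

Here both arguments are arrays of per-frame frequency estimates (e.g., STFT peak frequencies around the nominal 50/60 Hz mains frequency).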
As the volume of information grows rapidly, database owners tend to outsource their data to external service providers, i.e., cloud computing. Using the cloud, clients can store their data remotely without the burden of local data storage and maintenance. However, such a service provider is untrusted, so data security raises several challenges: integrity, availability, and confidentiality. Since integrity and availability are prerequisites for the existence of a system, we focus on them rather than on confidentiality. To ensure integrity and availability, researchers have proposed network coding-based POR (Proof of Retrievability) schemes that enable servers to demonstrate whether the data is retrievable. However, most network coding-based POR schemes are inefficient at data checking and also cannot prevent a common attack on POR: the small-corruption attack. In this paper, we propose a new network coding-based POR scheme using a dispersal code in order to reduce the cost of the checking phase and to prevent the small-corruption attack.
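As a rough illustration of the network-coding idea behind such POR schemes, the following toy sketch (hypothetical names; real schemes use homomorphic tags so the verifier need not keep the original file, and the paper's dispersal-code construction is not reproduced here) stores random linear combinations of file blocks and spot-checks one of them:

    import random

    P = 2**31 - 1  # toy prime field; real schemes use larger fields plus MACs

    def encode(blocks, m):
        # Store m random linear combinations of the original file blocks
        # (each block is a list of field symbols).
        coded = []
        for _ in range(m):
            coeffs = [random.randrange(P) for _ in blocks]
            symbol = [sum(c * b[i] for c, b in zip(coeffs, blocks)) % P
                      for i in range(len(blocks[0]))]
            coded.append((coeffs, symbol))
        return coded

    def audit(coded, original_blocks):
        # Toy retrievability check: pick a stored coded block at random and
        # recompute it from the claimed coefficients.
        coeffs, symbol = random.choice(coded)
        expected = [sum(c * b[i] for c, b in zip(coeffs, original_blocks)) % P
                    for i in range(len(original_blocks[0]))]
        return symbol == expected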
This paper discusses strategies for I/O sharing in Multiple Independent Levels of Security (MILS) systems, mostly deployed in the special environment of avionic systems. MILS system designs are promising approaches for handling the increasing complexity of functionally integrated systems, where multiple applications run concurrently on the same hardware platform. Such integrated systems, known as Integrated Modular Avionics (IMA) in the aviation industry, require communication with remote systems located outside the hosting hardware platform. One possible solution is to give each partition, the isolated runtime environment of an application, a direct interface to the communication hardware controller. This approach, however, requires a special design of the hardware itself. This paper discusses efficient system architectures for I/O sharing in high-criticality embedded systems and presents an exemplary analysis of Freescale's proprietary Data Path Acceleration Architecture (DPAA) with respect to generic hardware requirements. Based on this analysis, we also discuss the development of possible architectures matching the MILS approach. Although the analysis focuses on avionics, it is equally applicable to automotive architectures such as AUTOSAR.
With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and are therefore resource intensive and lacking in coverage. In this paper, we present our experience building and operating a system to automate privacy-policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for MapReduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to datatypes in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, daily checks compliance of millions of lines of ever-changing source code written by several thousand developers.
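The reduction to information flow analysis can be illustrated with a miniature, purely hypothetical rendering of the Legalease/Grok split (the actual Legalease syntax and Grok inventory are far richer):

    # Policy side (Legalease-like): purposes for which a data type is DENIED.
    POLICY = {
        "IPAddress": {"Advertising"},
        "SearchQuery": set(),
    }

    # Inventory side (Grok-like): (job name, data types read, purpose served).
    JOBS = [
        ("ads-targeting", {"IPAddress", "SearchQuery"}, "Advertising"),
        ("abuse-detect", {"IPAddress"}, "Security"),
    ]

    def violations(jobs, policy):
        # Compliance checking reduced to a flow check: flag any job whose
        # purpose is denied for a data type it consumes.
        return [(name, t) for name, types, purpose in jobs
                for t in types if purpose in policy.get(t, set())]

    print(violations(JOBS, POLICY))  # [('ads-targeting', 'IPAddress')]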
Our vision in this paper is that agency, the individual's ability to intervene in and tailor the system, is a crucial element in building trust in IoT technologies. Following up on this vision, we first address the issue of agency, namely the individual capability to make free decisions, as a relevant driver in building trusted human-IoT relations, and discuss how agency should be embedded in digital systems. We then present the main challenges posed by existing approaches to implementing this vision. Finally, we present our proposal for a model-based approach that realizes the agency concept, including a prototype implementation.
Distributed Denial of Service attacks are a growing threat to organizations and, as defense mechanisms become more advanced, attackers are aiming at the application layer. For example, application-layer Low and Slow Distributed Denial of Service attacks are becoming a serious issue because, due to their low resource consumption, they are hard to detect. In this position paper, we propose a reference architecture that mitigates Low and Slow Distributed Denial of Service attacks by utilizing Software Defined Infrastructure capabilities. We also propose two concrete architectures based on the reference architecture: one based on a performance model and one based on off-the-shelf components. We introduce the Shark Tank concept: a cluster under detailed monitoring that has full application capabilities and to which suspicious requests are redirected for further filtering.
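To make the Shark Tank idea concrete, here is a minimal sketch (hypothetical thresholds and names, not the paper's architecture) of routing clients whose connections exhibit the low-and-slow signature to an instrumented cluster:

    import time
    from collections import defaultdict

    SLOW_BYTES_PER_SEC = 100   # illustrative threshold
    OBSERVATION_WINDOW = 30.0  # seconds before a verdict is attempted

    class LowAndSlowMonitor:
        def __init__(self):
            self.first_seen = {}
            self.bytes_sent = defaultdict(int)

        def record(self, client_ip, nbytes):
            # Track cumulative payload per client.
            self.first_seen.setdefault(client_ip, time.monotonic())
            self.bytes_sent[client_ip] += nbytes

        def route(self, client_ip):
            # Clients transferring suspiciously few bytes per second are
            # redirected to the monitored Shark Tank cluster; all others
            # stay on the production pool.
            start = self.first_seen.get(client_ip)
            if start is None or time.monotonic() - start < OBSERVATION_WINDOW:
                return "production"
            rate = self.bytes_sent[client_ip] / (time.monotonic() - start)
            return "shark-tank" if rate < SLOW_BYTES_PER_SEC else "production"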
This paper presents one-layer projection neural networks, based on projection operators, for solving constrained variational inequalities and related optimization problems. Sufficient conditions for global convergence of the proposed neural networks are provided based on Lyapunov stability. Compared with existing neural networks for variational inequalities and optimization, the proposed networks have lower model complexity. In addition, some improved criteria for global convergence are given. Compared with our previous work, a design parameter has been added to the projection neural network models, resulting in improved performance. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural networks.
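For orientation, a standard one-layer projection model from this literature (a generic form under assumed notation; the paper's exact models and added design parameter may differ) solves the variational inequality of finding x* in Omega with (x - x*)^T F(x*) >= 0 for all x in Omega via the dynamics

    \frac{dx}{dt} = \lambda \Big( P_\Omega\big(x - \alpha F(x)\big) - x \Big),
    \qquad \lambda > 0, \ \alpha > 0,

where P_Omega is the projection operator onto the closed convex set Omega. Equilibria of these dynamics coincide with solutions of the variational inequality, and the scalars lambda and alpha act as tunable design parameters of the kind the abstract mentions.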
We consider the block Rayleigh fading multiple-input multiple-output (MIMO) wiretap channel with no prior channel state information (CSI) available at any of the terminals. The channel gains remain constant over a coherence time of T symbols and then change to another independent realization. The transmitter, the legitimate receiver, and the eavesdropper have nt, nr, and ne antennas, respectively. We determine the exact secure degrees of freedom (s.d.o.f.) of this system when T >= 2 min(nt, nr). We show that, in this case, the s.d.o.f. is exactly [min(nt, nr) - ne]^+ (T - min(nt, nr))/T, where [x]^+ denotes max(x, 0). The first factor can be interpreted as the eavesdropper with ne antennas taking away ne antennas from both the transmitter and the legitimate receiver. The second factor can be interpreted as a fraction of the s.d.o.f. being lost due to the lack of CSI at the legitimate receiver. In particular, the fractional loss, min(nt, nr)/T, can be interpreted as the fraction of channel uses dedicated to training the legitimate receiver so that it can learn its own CSI. We prove that this s.d.o.f. can be achieved by employing a constant-norm channel input, which can be viewed as a generalization of discrete signalling to multiple dimensions.
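In display form, with a worked instance under hypothetical antenna counts:

    d_s = \big[\min(n_t, n_r) - n_e\big]^{+} \cdot \frac{T - \min(n_t, n_r)}{T}
    % Example: n_t = n_r = 4, n_e = 1, T = 8 (so T >= 2 min(n_t, n_r) holds):
    % d_s = (4 - 1) \cdot (8 - 4)/8 = 3 \cdot 1/2 = 1.5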
Distributed wireless sensor network technologies have become one of the major research areas in the healthcare industry owing to their rapid maturation and their potential to improve quality of life. A Medical Wireless Sensor Network (MWSN), via continuous monitoring of vital health parameters over a long period of time, can enable physicians to make more accurate diagnoses and provide better treatment. MWSNs offer flexibility and cost savings to patients and healthcare industries alike. Medical sensors on patients produce an increasingly large volume of increasingly diverse real-time data, and transmitting this data through hospital wireless networks becomes a crucial problem because an individual's health information is highly sensitive: it must be kept private and secure. In this paper, we propose a security model to protect the transfer of medical data in hospitals using MWSNs. We propose Compressed Sensing + Encryption as a strategy to achieve low-energy secure data transmission in sensor networks.
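A minimal sketch of the Compressed Sensing + Encryption strategy (hypothetical names; the paper's exact scheme and key management are not reproduced): a secret sensing matrix simultaneously compresses the signal, which saves transmission energy, and acts as a lightweight cipher.

    import numpy as np

    def cs_encrypt(x, m, seed):
        # Project a length-n (approximately sparse) signal x onto m << n
        # random measurements. The sensing matrix is derived from a secret
        # seed shared with the hospital server, so it doubles as a key.
        rng = np.random.default_rng(seed)
        phi = rng.standard_normal((m, len(x))) / np.sqrt(m)
        return phi @ x

    # The server regenerates phi from the same seed and recovers x by
    # sparse recovery (e.g., basis pursuit); an eavesdropper without the
    # seed faces an underdetermined system with an unknown matrix.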
The vast majority of today's critical infrastructure is supported by numerous feedback control loops, and an attack on these loops can have disastrous consequences. This is a major concern since modern control systems are becoming larger and more decentralized, and thus more vulnerable to attack. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new, simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A, C) of the system, and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm, inspired by techniques in compressed sensing, to estimate the state of the plant despite the attacks. We give a theoretical characterization of the performance of this algorithm and show in numerical simulations that the method is promising and allows accurate state reconstruction despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a separation principle between estimation and control holds and that the design of resilient output-feedback controllers reduces to the design of resilient state estimators.
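The underlying decoding problem can be stated as a brute-force toy (hypothetical interfaces: obs_matrices[i] stacks sensor i's observability rows over the horizon, measurements[i] its outputs); the paper's contribution is, in effect, an efficient convex relaxation of exactly this exponential search, in the spirit of compressed sensing:

    import itertools
    import numpy as np

    def secure_estimate(obs_matrices, measurements, q):
        # Try each candidate set of q attacked sensors: drop those rows,
        # solve least squares on the rest, and accept if the residual
        # vanishes (i.e., the remaining sensors are mutually consistent).
        p = len(measurements)
        for attacked in itertools.combinations(range(p), q):
            keep = [i for i in range(p) if i not in attacked]
            A = np.vstack([obs_matrices[i] for i in keep])
            y = np.concatenate([measurements[i] for i in keep])
            x0, *_ = np.linalg.lstsq(A, y, rcond=None)
            if np.linalg.norm(A @ x0 - y) < 1e-8:
                return x0, attacked
        return None, None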
This paper describes the development of subsymbolic ACT-R models for the Concentration game. Performance data is taken from an experiment in which participants played the game under two conditions: minimizing the number of mismatches/turns during a game, and minimizing the time to complete a game. Conflict resolution and parameter tuning are used to implement an accuracy model and a speed model that capture the differences for the two conditions. Visual attention drives exploration of the game board in the models. Modeling results are generally consistent with human performance, though some systematic differences can be seen. Modeling decisions, model limitations, and open issues are discussed.
Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users' security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, a proper understanding of how often system administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked it.
In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers' private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.
Using the set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on weekends, even immediately following the Heartbleed announcement. These results are an important step toward understanding the manual processes on which users rely for secure, authenticated communication.
To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.
This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
Memory corruption bugs in software written in low-level languages like C or C++ are one of the oldest problems in computer security. The lack of safety in these languages allows attackers to alter a program's behavior or take full control over it by hijacking its control flow. This problem has existed for more than 30 years, and a vast number of potential solutions have been proposed, yet memory corruption attacks continue to pose a serious threat. Real-world exploits show that all currently deployed protections can be defeated. This paper sheds light on the primary reasons for this by describing attacks that succeed on today's systems. We systematize the current knowledge about various protection techniques by setting up a general model for memory corruption attacks. Using this model, we show which policies can stop which attacks. The model identifies weaknesses of currently deployed techniques, as well as other proposed protections enforcing stricter policies. We analyze the reasons why protection mechanisms implementing stricter policies are not deployed. To achieve wide adoption, protection mechanisms must support a multitude of features and must satisfy a host of requirements. Performance is especially important, as experience shows that only solutions whose overhead stays within reasonable bounds get deployed. A comparison of different enforceable policies helps designers of new protection mechanisms find the balance between effectiveness (security) and efficiency. We identify some open research problems and provide suggestions on improving the adoption of newer techniques.
We present an architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security of the collected data, which will inevitably include intimate participant behavioral data. Third, the SBO must serve our research interests, which will inevitably change as collected data is analyzed and interpreted. This short paper summarizes some of the benefits of our design and implementation and discusses a few hurdles and trade-offs to consider when designing such a data collection system.
One hundred sixty-four participants from the United States, India, and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, the types of media where phishing occurs, and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences, and types of media where phishing occurs across the three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regard to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions.