Bibliography
This paper presents, for the first time, a study of the security of information processed by video projectors. Examples of video recovery from the electromagnetic radiation of this equipment are illustrated in both laboratory and real-field environments. The paper presents the results of evaluating the timing parameters of the analyzed video signal, which confirm the specifications of the video standards. It also illustrates the results of a vulnerability analysis based on the colors used to display the images, as well as the remote video-recovery capabilities.
Single sign-on (SSO) has become popular as the identity management and authentication infrastructure of the Internet. A user receives an SSO ticket after being authenticated by the identity provider (IdP), and this IdP-issued ticket enables the user to sign on to the relying party (RP). However, there are vulnerabilities (e.g., Golden SAML) that allow attackers to arbitrarily issue SSO tickets and then sign on to any RP on behalf of any user. Meanwhile, several incidents involving certification authorities (CAs) indicate that these trusted third parties of security services are not as trustworthy as expected, and fraudulent TLS server certificates signed by compromised or deceived CAs have been used to launch TLS man-in-the-middle attacks. Various approaches have been proposed to tame the absolute authority of (compromised) CAs and to detect or prevent fraudulent TLS server certificates in TLS handshakes. Because the trust model of SSO services is similar to that of certificate services, this paper investigates the defense strategies behind these trust enhancements of certificate services and attempts to apply them to SSO to derive trust enhancements applicable to SSO services. Our analysis yields (a) several security designs that are already commonly used in SSO or non-SSO authentication services, and (b) two schemes that effectively improve the trustworthiness of SSO services but are not widely discussed or adopted.
Network attacks continue to pose threats to missions in cyberspace. To prevent critical missions from being impacted, or to minimize the possibility of mission impact, active cyber defense is very important. A mission impact graph is a graphical model that enables mission impact assessment and shows how missions can be impacted by cyber attacks. Although the mission impact graph provides valuable information, it is still very difficult for human analysts to comprehend due to its size and complexity. Especially when given limited resources, human analysts cannot easily decide which security measures to take first with respect to mission assurance. This paper therefore proposes applying a ranking algorithm to the mission impact graph so that the huge amount of information can be prioritized. The actionable conditions that can be managed by security admins are ranked with numeric values. The ranking enables efficient utilization of limited resources and provides guidance for taking security countermeasures.
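The abstract does not name the ranking algorithm; a minimal PageRank-style sketch over a toy mission impact graph (all node names hypothetical, with edges reversed so that rank flows toward the actionable conditions) illustrates the idea:

```python
# Hypothetical sketch: ranking actionable conditions in a mission impact
# graph with a PageRank-style algorithm. Nodes and edges are illustrative;
# the paper's actual graph structure and algorithm may differ.

def rank_nodes(graph, damping=0.85, iterations=50):
    """graph: dict mapping each node to the conditions that enable it."""
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in graph.items():
            for dst in targets:
                new_rank[dst] += damping * rank[src] / len(targets)
        rank = new_rank
    return sorted(rank.items(), key=lambda kv: -kv[1])

# Toy graph with reversed impact edges: mission tasks point back to the
# compromises that impact them, which point back to actionable conditions.
impact_graph = {
    "mission_task_A": ["workstation_compromise", "db_server_compromise"],
    "mission_task_B": ["db_server_compromise"],
    "workstation_compromise": ["weak_password_policy", "unpatched_vpn_server"],
    "db_server_compromise": ["unpatched_vpn_server"],
}
for node, score in rank_nodes(impact_graph):
    print(f"{score:.4f}  {node}")
```

Under this toy graph, the condition feeding both compromises (the hypothetical unpatched VPN server) accumulates the highest rank, which is the prioritization behavior the paper describes.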
To address the problem that one-dimensional parameter optimization in deep-learning-based insider threat detection leads to unsatisfactory overall model performance, an insider threat detection method based on a deep belief network (DBN) adaptively optimized by grid search is designed. The method adaptively optimizes the learning rate and the network structure, which together form a two-dimensional grid, and adaptively selects a set of optimized parameters for threat detection, improving the overall performance of the deep learning model. Experimental results show that the method adapts well: the learning rate of the deep belief network is optimized to 0.6, the network structure is optimized to 6 layers, and the threat detection rate increases to 98.794%. Both the training efficiency and the threat detection rate of the deep belief network are improved.
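The DBN itself is not reproduced here; a minimal sketch of the two-dimensional grid search the abstract describes, with a stand-in evaluation function in place of actual DBN training, might look like:

```python
import itertools

# Minimal sketch of a two-dimensional grid search over learning rate and
# network depth. evaluate_dbn is a placeholder: in the real method it would
# train a DBN on the insider-threat data and return the detection rate.

def evaluate_dbn(learning_rate, num_layers):
    # Synthetic score peaking at (0.6, 6), mirroring the reported optimum.
    return 1.0 - abs(learning_rate - 0.6) - 0.02 * abs(num_layers - 6)

learning_rates = [0.2, 0.4, 0.6, 0.8, 1.0]
layer_counts = [4, 5, 6, 7, 8]

best = max(
    itertools.product(learning_rates, layer_counts),
    key=lambda params: evaluate_dbn(*params),
)
print("best (learning_rate, num_layers):", best)
```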
The greatest threat to an organization and its assets is no longer attackers beyond the network walls of the organization but insiders within the organization acting with malicious intent. Existing approaches help to monitor, detect, and prevent malicious activities within an organization's network while ignoring the impact of human behavior on security. In this paper we focus on a user behavior profiling approach that monitors and analyzes sequences of user actions to detect insider threats. We present an ensemble hybrid machine learning approach using Multi-State Long Short-Term Memory (MSLSTM) and Convolutional Neural Networks (CNN) for time-series anomaly detection, detecting additive outliers in behavior patterns based on spatial-temporal behavior features. We find that the multi-state LSTM outperforms a basic single-state LSTM. When trained on a publicly available insider threat dataset, the proposed method with the multi-state LSTM successfully detects insider threats, achieving an AUC of 0.9042 on training data and 0.9047 on test data.
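The exact MSLSTM/CNN architecture is not given in the abstract; the Keras sketch below shows one plausible stacking of convolutional and multi-state (stacked) LSTM layers over windows of user-action features, with all layer sizes and the window shape assumed:

```python
# Illustrative only: layer sizes, window shape, and stacking order are
# assumptions, not the paper's architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, n_features = 50, 8   # hypothetical window of encoded user actions

model = models.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64, return_sequences=True),  # stacked ("multi-state") LSTMs
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),   # anomaly probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```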
Malicious insider attacks have recently become one of the most damaging threats to companies and government agencies. This paper proposes a new framework for constructing a user-centered, machine-learning-based insider threat detection system operating on multiple levels of data granularity. System evaluations and analyses are performed not only on individual data instances but also on normal and malicious insiders, and insider-scenario-specific results and detection delays are reported and discussed. Our results show that the machine-learning-based detection system can learn from limited ground truth and detect new malicious insiders with high accuracy.
In today's interconnected world, universities recognize the importance of protecting their information assets from internal and external threats. As potential insider threats to information security, employees are often described as the weakest link, and both employees and organizations should be aware of this rising challenge. Understanding staff perceptions of compliance behaviour is critical for universities wanting to leverage their staff capabilities to mitigate information security risks. This research therefore seeks insight into staff perceptions based on factors adopted from several theories, using proposed constructs, i.e., "perceived" practices/policies and "perceived" intention to comply. Drawing on the General Deterrence Theory, Protection Motivation Theory, Theory of Planned Behaviour, and Information Reinforcement, within the context of Palestinian universities, this paper integrates staff awareness of Information Security Policy (ISP) countermeasures as antecedents to "perceived" influencing factors (perceived sanctions, perceived rewards, perceived coping appraisal, and perceived information reinforcement). The empirical study follows a quantitative research approach, using a survey as the data collection method and questionnaires as the research instruments. Partial least squares structural equation modelling is used to inspect the reliability and validity of the measurement model and to test the hypotheses of the structural model. The research covers ISP awareness among staff and seeks to assert that information security is the responsibility of all academic and administrative staff from all departments. Overall, our pilot study findings seem promising, and we found strong support for our theoretical model.
Research on keystroke dynamics has great potential to offer continuous authentication that complements conventional authentication methods in combating insider threats and identity theft before more harm is done to genuine users. Unfortunately, the large amount of data required for free-text keystroke authentication often contains personally identifiable information (PII) and personally sensitive information, such as a user's first and last name, the username and password for an account, bank card numbers, and social security numbers. As a result, keystroke data carry privacy risks that must be mitigated before the data are shared with other researchers. We conduct a systematic study to remove PII from a recent large keystroke dataset. We find substantial amounts of PII in the dataset, including names, usernames and passwords, social security numbers, and bank card numbers, which, if leaked, could lead to various harms to the user, including personal embarrassment, blackmail, financial loss, and identity theft. We thoroughly evaluate the effectiveness of our detection program for each kind of PII and demonstrate that it can achieve near-perfect recall at the expense of losing some useful information (lower precision). Finally, we demonstrate that removing PII from the original dataset has only a negligible impact on the detection error tradeoff of the free-text authentication algorithm by Gunetti and Picardi. We hope that this experience report will be useful in informing the design of privacy removal in future keystroke-dynamics-based user authentication systems.
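As a rough illustration of the kind of PII detection the paper evaluates, here is a regex-plus-checksum sketch for two of the PII types mentioned, SSNs and bank card numbers (the patterns and redaction tokens are assumptions, not the paper's program):

```python
import re

# Minimal sketch of regex-based PII redaction for keystroke transcripts.
# The two patterns are illustrative; a real detector needs many more
# (names, usernames, passwords, ...) plus context-aware rules.

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")   # 13-16 digit numbers

def luhn_valid(number: str) -> bool:
    """Checksum test that filters out random digit runs matched by CARD_RE."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[::2])
    for d in digits[1::2]:
        d *= 2
        total += d - 9 if d > 9 else d
    return total % 10 == 0

def redact(text: str) -> str:
    text = SSN_RE.sub("[SSN]", text)
    return CARD_RE.sub(
        lambda m: "[CARD]" if luhn_valid(m.group()) else m.group(), text)

print(redact("ssn 078-05-1120, card 4111 1111 1111 1111"))
```

The Luhn check trades precision for recall in the opposite direction from a bare regex, which mirrors the recall/precision tension the paper reports.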
With rapidly increasing connectivity in cyberspace, insider threats are becoming a huge concern, and insider threat detection from system logs poses a tremendous challenge for human analysts. Analyzing an organization's log files is a key component of an insider threat detection and mitigation program. Emerging machine learning approaches show tremendous potential for performing the complex and challenging data analysis tasks that would benefit the next generation of insider threat detection systems. However, with huge sets of heterogeneous data to analyze, applying machine learning techniques effectively and efficiently to such a complex problem is not straightforward. In this paper, we extract a concise set of features from the system logs while trying to prevent loss of meaningful information and to provide accurate and actionable intelligence. We investigate two unsupervised anomaly detection algorithms for insider threat detection and compare different structures of the system logs, including a daily dataset and a periodically aggregated one. We use the anomaly score generated in the previous cycle as each user's trust score, feed it to the next period's model, and show its importance and impact in detecting insiders. Furthermore, we incorporate users' psychometric scores into our model and examine their effectiveness in predicting insiders. To the best of our knowledge, ours is the first model to take users' psychometric scores into consideration for insider threat detection. Finally, we evaluate our proposed approach on the CERT insider threat dataset (v4.2) and show how it outperforms previous approaches.
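A minimal sketch of the trust-score feedback loop described above, assuming per-user aggregated feature vectors and using scikit-learn's IsolationForest as one of many possible unsupervised detectors (all data here is synthetic):

```python
# Sketch only: the previous period's anomaly score is fed back as a
# "trust score" feature for the next period, as the paper describes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n_users, n_feats = 100, 5
trust = np.zeros((n_users, 1))           # initial trust score per user

for period in range(3):                  # e.g., weekly aggregation windows
    feats = rng.normal(size=(n_users, n_feats))   # stand-in log features
    X = np.hstack([feats, trust])
    model = IsolationForest(random_state=0).fit(X)
    scores = -model.score_samples(X)     # higher = more anomalous
    trust = scores.reshape(-1, 1)        # feed back into the next period
    print(f"period {period}: most anomalous user {scores.argmax()}")
```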
Distributed Denial-of-Service (DDoS) attacks are becoming more and more sophisticated, rendering existing defence systems incapable of withstanding wide-ranging attacks on their own. Collaborative mitigation has thus become a necessary alternative to extend defence mechanisms. However, existing coordinated DDoS mitigation approaches either require complex configuration or are highly priced. Blockchain technology offers a solution that reduces the complexity of signalling in a DDoS defence system and provides a platform where many autonomous systems (ASes) can share hardware resources and defence capabilities for an effective DDoS defence. In this work, we also use a deep-learning DDoS detection system: we identify each individual DDoS attack class and determine whether incoming traffic is legitimate or attack traffic. By classifying attack traffic flows separately, our proposed mitigation technique can deny only the specific traffic causing the attack instead of blocking all traffic directed towards the victim(s).
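The paper uses a deep-learning classifier; purely to illustrate the per-class filtering idea, the sketch below substitutes a random forest and synthetic flow features (the class names and feature dimensions are assumptions):

```python
# Illustrative sketch: per-class classification so that only flows assigned
# to an attack class are dropped, while benign flows pass untouched.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["benign", "syn_flood", "udp_flood", "http_flood"]
rng = np.random.default_rng(1)

# Stand-in training data: rows are flow feature vectors with class labels.
X_train = rng.normal(size=(400, 6))
y_train = rng.integers(0, len(CLASSES), size=400)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def filter_flows(flows):
    """Return (allowed, blocked): only attack-class flows are blocked."""
    labels = clf.predict(flows)
    benign = CLASSES.index("benign")
    return flows[labels == benign], flows[labels != benign]

flows = rng.normal(size=(10, 6))
allowed, blocked = filter_flows(flows)
print(f"{len(allowed)} flows allowed, {len(blocked)} flows blocked")
```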
The concept of Virtualized Network Functions (VNFs) aims to move Network Functions (NFs) out of dedicated hardware devices into software that runs on commodity hardware. A single NF consists of multiple VNF instances, usually running on virtual machines in a cloud infrastructure. The elastic management of an NF refers to load management across the VNF instances and the autonomic scaling of the number of VNF instances as the load on the NF changes. In this paper, we present EL-SEC, an autonomic framework to elastically manage security NFs on a virtualized infrastructure. As a use case, we deploy the Snort Intrusion Detection System as the NF on the GENI testbed. Concepts from control theory are used to create an Elastic Manager, which implements various controllers - in this paper, Proportional Integral (PI) and Proportional Integral Derivative (PID) - to direct traffic across the VNF Snort instances by monitoring the current load. RINA (a clean-slate Recursive InterNetwork Architecture) is used to build a distributed application that monitors load and collects Snort alerts, which are processed by the Elastic Manager and an Attack Analyzer, respectively. Software Defined Networking (SDN) is used to steer traffic through the VNF instances, and to block attack traffic. Our results show that virtualized security NFs can be easily deployed using our EL-SEC framework. With the help of real-time graphs, we show that PI and PID controllers can be used to easily scale the system, which leads to quicker detection of attacks.
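As an illustration of the control-theoretic core of the Elastic Manager, here is a generic PID controller mapping measured per-instance load to a scaling decision (the gains, setpoint, and thresholds are hypothetical, not EL-SEC's tuning; dropping the derivative term gives the PI variant):

```python
# Generic PID controller sketch for elastic VNF scaling decisions.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.05, ki=0.01, kd=0.02, setpoint=70.0)  # target 70% load
for load in [95.0, 88.0, 74.0, 69.0, 55.0]:   # per-instance load samples (%)
    u = pid.update(load)
    action = "scale up" if u < -0.5 else "scale down" if u > 0.5 else "hold"
    print(f"load {load:.0f}% -> control {u:+.2f} -> {action}")
```

A negative control output means measured load exceeds the setpoint, so the Elastic Manager would add a Snort instance; a positive output allows scaling down.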
Despite decades of research on Internet security, we constantly hear about mega data breaches and malware infections affecting hundreds of millions of hosts. The key reason is that the current threat model of the Internet relies on two assumptions that no longer hold true: (1) web servers hosting the content are secure, and (2) each Internet connection starts at the original content provider and terminates at the content consumer. Internet security today is merely patched on top of the TCP/IP protocol stack. To achieve comprehensive security for the Internet, we believe a clean-slate approach must be adopted in which a content-based security model is employed. Named Data Networking (NDN) is a step in this direction: it is envisioned to be the next-generation Internet architecture, based on a content-centric communication model. NDN is being designed with security as a key requirement, supporting content integrity, authenticity, confidentiality, and privacy. Meeting this requirement, however, means overcoming several challenges, especially in large operational environments and resource-constrained networks. In this paper, we explore the challenges of achieving comprehensive content security in NDN and propose a research agenda to address some of them.
The gap is widening between the processor clock speeds of end-system architectures and network throughput capabilities. It is now physically possible to provide single-flow throughput of up to 100 Gbps, and 400 Gbps will soon be possible. Most current research into high-speed data networking focuses on managing expanding network capabilities within datacenter Local Area Networks (LANs) or on efficiently multiplexing millions of relatively small flows through a Wide Area Network (WAN). However, datacenter hyper-convergence places high-throughput networking workloads on general-purpose hardware, and distributed High-Performance Computing (HPC) applications require time-sensitive, high-throughput end-to-end flows (also referred to as "elephant flows") over WANs. For these applications, the bottleneck is often the end-system rather than the intervening network. Since the end-system bottleneck was uncovered, many techniques have been developed to address this mismatch, with varying degrees of effectiveness. In this survey, we describe the most promising techniques, beginning with network architectures and NIC design, continuing with operating-system and end-system architectures, and concluding with clean-slate protocol design.
Dynamic data race detectors are valuable tools for testing and validating concurrent software, but to achieve good performance they are typically implemented using sophisticated concurrent algorithms. Thus, they are ironically prone to the exact same kind of concurrency bugs they are designed to detect. To address these problems, we have developed VerifiedFT, a clean slate redesign of the FastTrack race detector [19]. The VerifiedFT analysis provides the same precision guarantee as FastTrack, but is simpler to implement correctly and efficiently, enabling us to mechanically verify an implementation of its core algorithm using CIVL [27]. Moreover, VerifiedFT provides these correctness guarantees without sacrificing any performance over current state-of-the-art (but complex and unverified) FastTrack implementations for Java.
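To convey the flavor of the analysis VerifiedFT verifies, here is a toy epoch-based write-write race check in the FastTrack style (the real analysis also tracks reads, locks, and vector-clock joins; this is not VerifiedFT's code):

```python
# Toy sketch of the epoch-based happens-before check at the core of
# FastTrack-style race detectors: a write races with the previous write
# unless that write happens-before it under the thread's vector clock.

class ThreadState:
    def __init__(self, tid, n_threads):
        self.tid = tid
        self.clock = [0] * n_threads   # this thread's vector clock
        self.clock[tid] = 1

def on_write(var, thread):
    """var stores the last write as an epoch (tid, clock value)."""
    last = var.get("write_epoch")
    if last is not None:
        tid, clk = last
        if thread.clock[tid] < clk:    # previous write not ordered before us
            print(f"race: concurrent writes by T{tid} and T{thread.tid}")
    var["write_epoch"] = (thread.tid, thread.clock[thread.tid])

t0, t1 = ThreadState(0, 2), ThreadState(1, 2)
x = {}
on_write(x, t0)
on_write(x, t1)   # reports a race: t1 never synchronized with t0
```

Storing the last write as a single epoch rather than a full vector clock is the key constant-time optimization that makes FastTrack fast and, as the paper argues, subtle enough to merit mechanical verification.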
In Software Defined Infrastructure (SDI), virtualization techniques are used to decouple applications and higher-level services from their underlying physical compute, storage, and network resources. The approach offers a set of powerful new capabilities (isolation, encapsulation, portability, interposition), including the formation of a software-based, infrastructure-wide control plane for orchestrated management. In this position paper, we identify opportunities for revisiting ongoing cybersecurity challenges using SDI as a powerful new toolset. Benefits of this approach can be broadly utilized in public, private, and hybrid clouds, data centers, enterprise computing, IoT deployments, and more. The discussion motivates the research challenge underlying VMware's partnership with the National Science Foundation to fund novel and foundational research in this area. Known as the NSF/VMware Partnership on Software Defined Infrastructure as a Foundation for Clean-Slate Computing Security (SDI-CSCS), the jointly funded university research program is set to begin in the fall of 2017.
This paper advocates programming high-performance code using partial evaluation. We present a clean-slate programming system with a simple, annotation-based, online partial evaluator that operates on a CPS-style intermediate representation. Our system exposes code generation for accelerators (vectorization/parallelization for CPUs and GPUs) via compiler-known higher-order functions that can be subjected to partial evaluation. This way, generic implementations can be instantiated with target-specific code at compile time. In our experimental evaluation we present three extensive case studies from image processing, ray tracing, and genome sequence alignment. We demonstrate that using partial evaluation, we obtain high-performance implementations for CPUs and GPUs from one language and one code base in a generic way. The performance of our codes is mostly within 10% of, and often closer to, the performance of multi-man-year, industry-grade, manually optimized expert codes that are considered to be among the top contenders in their fields.
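The paper's partial evaluator operates on a CPS-style IR inside the compiler; the tiny sketch below only mimics the end effect, generating a specialized function when one argument is known at "compile time":

```python
# Tiny illustration of the idea (not the paper's system): specializing a
# generic power function once the exponent is a known constant, the way a
# partial evaluator unrolls and folds code at compile time.

def specialize_pow(n):
    """Generate x**n with the loop fully unrolled for a fixed n."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def pow_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)
    return namespace[f"pow_{n}"]

pow_3 = specialize_pow(3)   # generated: def pow_3(x): return x * x * x
print(pow_3(5))             # 125
```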
The number of sensors and embedded devices in an urban area can be on the order of thousands. New low-power wide-area (LPWA) wireless network technologies have been proposed to support this large number of asynchronous, low-bandwidth devices. Among them, Cooperative UltraNarrowband (C-UNB) is a clean-slate cellular network technology for connecting these devices to a remote site or data collection server. C-UNB employs small-bandwidth channels and a lightweight random access protocol. In this paper, a new application is investigated: the use of C-UNB wireless networks to support the Advanced Metering Infrastructure (AMI) and facilitate communication between smart meters and utilities. To this end, we adapted a mathematical model for C-UNB and implemented a network simulation module in NS-3 to represent C-UNB's physical and medium access control layers. For the application layer, we implemented the DLMS-COSEM protocol (Device Language Message Specification - Companion Specification for Energy Metering). Details of the simulation module are presented, and we conclude that the simulation supports the results of the mathematical model.
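The paper's mathematical model is not reproduced here; as a rough stand-in for the kind of quantity such a model yields, here is a pure-ALOHA-style first-attempt success probability for a population of smart meters (all parameter values are hypothetical):

```python
import math

# Rough illustration only (not the paper's model): C-UNB uses a lightweight
# random access protocol, so a pure-ALOHA-style success probability conveys
# the kind of result the model and NS-3 module are checked against.

def first_attempt_success(arrival_rate, tx_time, n_channels):
    """P(no overlap) for Poisson arrivals spread over narrowband channels."""
    load_per_channel = arrival_rate * tx_time / n_channels
    return math.exp(-2 * load_per_channel)   # pure-ALOHA vulnerable period 2T

meters, msgs_per_hour = 5000, 4          # smart meters reporting to the AMI
rate = meters * msgs_per_hour / 3600.0   # aggregate messages per second
print(f"success: {first_attempt_success(rate, tx_time=2.0, n_channels=200):.3f}")
```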
This paper investigates the use of deep reinforcement learning (DRL) in the design of a "universal" MAC protocol referred to as Deep-reinforcement Learning Multiple Access (DLMA). The design framework is partially inspired by the vision of DARPA SC2, a three-year competition whereby competitors are to come up with a clean-slate design that "best share spectrum with any network(s), in any environment, without prior knowledge, leveraging on machine-learning technique". While the scope of DARPA SC2 is broad and involves the redesign of the PHY, MAC, and network layers, this paper's focus is narrower and involves only the MAC design. In particular, we consider the problem of sharing time slots among multiple time-slotted networks that adopt different MAC protocols. One of the MAC protocols is DLMA; the other two are TDMA and ALOHA. The DRL agents of DLMA do not know that the other two MAC protocols are TDMA and ALOHA. Yet, through a series of observations of the environment, its own actions, and the rewards, in accordance with the DRL algorithmic framework, a DRL agent can learn the optimal MAC strategy for harmonious co-existence with TDMA and ALOHA nodes. In particular, the use of neural networks in DRL (as opposed to traditional reinforcement learning) allows for fast convergence to optimal solutions and robustness against perturbations in hyperparameter settings, two essential properties for practical deployment of DLMA in real wireless networks.
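DLMA itself uses a deep Q-network over observation histories; the tabular Q-learning toy below, with a TDMA node owning every even slot, only illustrates the learn-to-coexist loop (all hyperparameters are illustrative):

```python
import random

# Toy sketch of the RL loop behind DLMA: the agent learns when to transmit
# alongside an unknown TDMA node without being told its schedule.

random.seed(0)
q = {}                                  # (slot_phase, action) -> value
alpha, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration

def tdma_busy(t):
    return t % 2 == 0                   # the TDMA node transmits in even slots

for t in range(5000):
    state = t % 2
    if random.random() < eps:
        action = random.randint(0, 1)   # 0 = wait, 1 = transmit
    else:
        action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
    reward = 0.0
    if action == 1:                     # +1 on success, -1 on collision
        reward = 1.0 if not tdma_busy(t) else -1.0
    nxt = (t + 1) % 2
    best_next = max(q.get((nxt, a), 0.0) for a in (0, 1))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

for state in (0, 1):
    best = max((0, 1), key=lambda a: q.get((state, a), 0.0))
    print(f"slot phase {state}: {'transmit' if best else 'wait'}")
```

After training, the agent waits in the TDMA node's slots and transmits in the idle ones, the harmonious co-existence behavior the paper demonstrates at larger scale with a DQN.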