Biblio
Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users' security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, a proper understanding of how often system administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked it.
In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers' private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.
Using a set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication.
Both SAT and #SAT can represent difficult problems in seemingly dissimilar areas such as planning, verification, and probabilistic inference. Here, we examine an expressive new language, #∃SAT, that generalizes both of these languages. #∃SAT problems require counting the number of satisfiable formulas in a concisely-describable set of existentially quantified, propositional formulas. We characterize the expressiveness and worst-case difficulty of #∃SAT by proving it is complete for the complexity class #P^NP[1], and relating this class to more familiar complexity classes. We also experiment with three new general-purpose #∃SAT solvers on a battery of problem distributions including a simple logistics domain. Our experiments show that, despite the formidable worst-case complexity of #P^NP[1], many of the instances can be solved efficiently by noticing and exploiting a particular type of frequent structure.
Much of the data researchers collect about users' privacy and security behavior comes from short-term studies focused on specific, narrow activities. We present a design architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from a panel of hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security and privacy of the collected data, which will inevitably include intimate details about our participants' behavior. Third, the SBO must serve our research interests, which will inevitably change over the course of the study as collected data are analyzed and interpreted and suggest further lines of inquiry. We describe the SBO infrastructure in detail: its secure data collection methods, the benefits of our design and implementation, and the hurdles and tradeoffs to consider when designing such a data collection system.
The significant dependence on cyberspace has brought new risks that often compromise, exploit, and damage invaluable data and systems. Thus, the capability to proactively infer malicious activities is of paramount importance. In this context, inferring probing events, which are commonly the first stage of any cyber attack, is a promising tactic for achieving that task. For the past three years, we have been receiving 12 GB of malicious real darknet data daily (i.e., Internet traffic destined to half a million routable yet unallocated IP addresses) from more than 12 countries. This paper exploits such data to propose a novel approach that aims to capture the behavior of probing sources in order to infer their orchestration (i.e., coordination) pattern. Orchestration is a recently discovered characteristic of a new phenomenon of probing events that could be leveraged, as a precursor of various cyber attacks, to cause drastic Internet-wide and enterprise impacts. To accomplish its goals, the proposed approach leverages various signal and statistical techniques, information-theoretic metrics, fuzzy approaches with real malware traffic, and data mining methods. The approach is validated through one use case which arguably proves that a previously analyzed orchestrated probing event from last year is still active, yet operating in a stealthy, very low-rate mode. We envision that the proposed approach, tailored to darknet data (which is frequently, abundantly, and effectively used to generate cyber threat intelligence), could be used by network security analysts, emergency response teams, and observers of cyber events to infer large-scale orchestrated probing events for early cyber attack warning and notification.
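A typical information-theoretic metric in such analyses is the Shannon entropy of a probing source's target distribution (e.g., destination ports or addresses): low entropy suggests a focused, deterministic scanner, while high entropy suggests randomized probing. A minimal sketch (illustrative only, not the authors' pipeline):

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (in bits) of the empirical distribution of events,
    e.g. the destination ports probed by a single darknet source."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A source hammering one port yields 0 bits; a source spreading probes uniformly over 2^k distinct targets yields k bits.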
Sources such as speakers and recording environments produce signal variations, which combine with interference introduced by the communication devices themselves. Despite these convolutions, the signal variations produced by different mobile devices leave intrinsic fingerprints on recorded calls, allowing the models and brands of the engaged mobile devices to be tracked. This study investigates the use of recorded Voice over Internet Protocol calls in the blind identification of source mobile devices. The proposed scheme employs a combination of entropy and mel-frequency cepstrum coefficients to extract the intrinsic features of mobile devices and analyzes these features with a multi-class support vector machine classifier. The experimental results show accurate identification of 10 source mobile devices with an average accuracy of 99.72%.
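The mel-frequency cepstrum coefficients used as device features follow a standard pipeline: power spectrum, triangular mel filterbank, log energies, then a DCT-II. Below is a dependency-free sketch of that pipeline for a single frame (illustrative only; the paper's classifier stage, a multi-class SVM over these features, is not shown, and filterbank sizes here are arbitrary):

```python
import math

def power_spectrum(frame):
    # Direct DFT power spectrum |X_k|^2 for k = 0..N/2 (O(N^2), but dependency-free).
    n_len = len(frame)
    spec = []
    for k in range(n_len // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * n / n_len) for n, x in enumerate(frame))
        im = sum(x * math.sin(2 * math.pi * k * n / n_len) for n, x in enumerate(frame))
        spec.append(re * re + im * im)
    return spec

def mel(f):
    return 2595 * math.log10(1 + f / 700)

def imel(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(frame, sr, n_mels=16, n_ceps=8):
    # Power spectrum -> triangular mel filterbank -> log energies -> DCT-II.
    spec = power_spectrum(frame)
    edges = [imel(i * mel(sr / 2) / (n_mels + 1)) for i in range(n_mels + 2)]
    bins = [min(len(spec) - 1, int(f * len(frame) / sr)) for f in edges]
    energies = []
    for i in range(n_mels):
        lo, ctr, hi = bins[i], bins[i + 1], bins[i + 2]
        e = sum(spec[k] * (k - lo) / max(ctr - lo, 1) for k in range(lo, ctr))
        e += sum(spec[k] * (hi - k) / max(hi - ctr, 1) for k in range(ctr, hi))
        energies.append(math.log(e + 1e-12))  # floor avoids log(0) for empty filters
    return [sum(energies[n] * math.cos(math.pi / n_mels * (n + 0.5) * k)
                for n in range(n_mels))
            for k in range(n_ceps)]
```

Per-frame coefficient vectors like these, aggregated over a call, form the feature vectors fed to the classifier.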
The need for increased surveillance, driven by growing flight volume in remote and oceanic regions outside the range of traditional radar coverage, has been fulfilled by the advent of space-based Automatic Dependent Surveillance-Broadcast (ADS-B) surveillance systems. ADS-B systems can provide air traffic controllers with highly accurate real-time flight data. ADS-B depends on digital communications between aircraft and ground stations of the air route traffic control center (ARTCC); however, these communications are not secured. Anyone with the appropriate capabilities and equipment can interrogate the signal and transmit their own false data; this is known as spoofing. The possibility of this type of attack decreases the situational awareness of United States airspace. The purpose of this project is to design a secure transmission framework that prevents ADS-B signals from being spoofed. Three alternative methods of securing ADS-B signals are evaluated: hashing, symmetric encryption, and asymmetric encryption. The security strength of the design alternatives is determined from research, and feasibility criteria are determined by comparative analysis of the alternatives. Economic implications and possible collision risk are determined from simulations that model the United States airspace over the Gulf of Mexico, with part of the airspace under attack. The ultimate goal of the project is to show that if ADS-B signals can be secured, situational awareness can improve, and the ARTCC can use information from this surveillance system to decrease the separation between aircraft and ultimately maximize the use of United States airspace.
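As a sketch of the hashing alternative only: a keyed hash (HMAC) appended to each position report lets a ground station reject spoofed frames, assuming aircraft and ARTCC already share a key (the hard part in practice is that key distribution, which this toy omits; the message format and tag truncation below are invented for illustration):

```python
import hmac
import hashlib

TAG_LEN = 8  # truncated tag: trades brute-force margin for ADS-B's tight payload budget

def sign_message(key: bytes, payload: bytes) -> bytes:
    """Append a truncated HMAC-SHA256 tag so receivers can detect spoofed reports."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def verify_message(key: bytes, frame: bytes):
    """Return the payload if the tag checks out, else None (spoofed/corrupted)."""
    payload, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload if hmac.compare_digest(tag, expected) else None
```

A spoofer without the key cannot produce a valid tag, so altered or fabricated frames fail verification.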
Long Term Evolution (LTE) networks designed by the 3rd Generation Partnership Project (3GPP) represent a widespread technology. LTE is distinguished by high data rates, minimal delay, and high capacity, owing to its scalable bandwidth and flexibility. With the rapid spread of LTE networks and the increasing use of data/video transmission and Internet applications in general, the challenges of securing and speeding up data communication in such networks have also grown. Authentication in LTE networks is a critical process because most attacks occur during this stage: attackers try to get authenticated, then exhaust the network resources and prevent legitimate users from accessing network services. The LTE AKA protocol, called the Evolved Packet System AKA (EPS-AKA) protocol, builds on the Extensible Authentication Protocol-Authentication and Key Agreement (EAP-AKA) to secure the LTE network; however, it still suffers from various vulnerabilities such as disclosure of the user identity, computational overhead, man-in-the-middle (MITM) attacks, and authentication delay. In this paper, an Efficient EPS-AKA protocol (EEPS-AKA) is proposed to overcome these problems. The proposed protocol is based on the Simple Password Exponential Key Exchange (SPEKE) protocol. Compared to previously proposed methods, our method is faster, since it uses a secret-key method, which is faster than certificate-based methods. In addition, the size of the messages exchanged between the User Equipment (UE) and the Home Subscriber Server (HSS) is reduced, which effectively reduces authentication delay and storage overhead. The Automated Validation of Internet Security Protocols and Applications (AVISPA) tool is used to provide a formal verification. Results show that the proposed EEPS-AKA is efficient and secure against active and passive attacks.
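The core SPEKE idea is a Diffie-Hellman exchange whose generator is derived from the shared secret (here, hashed and squared modulo a prime), so only parties knowing the secret arrive at the same key. A toy sketch, not the EEPS-AKA protocol itself; the 127-bit prime is far too small for real use:

```python
import hashlib
import secrets

P = (1 << 127) - 1  # toy Mersenne prime; a real deployment needs a large safe prime

def speke_generator(password: bytes) -> int:
    # SPEKE derives the DH generator from the shared secret; squaring maps it
    # into the quadratic-residue subgroup.
    h = int.from_bytes(hashlib.sha256(password).digest(), "big")
    return pow(h, 2, P)

def speke_exchange(password: bytes):
    """Run both sides of a toy SPEKE exchange; returns (UE key, HSS key)."""
    g = speke_generator(password)
    a = secrets.randbelow(P - 2) + 2   # UE's ephemeral exponent
    b = secrets.randbelow(P - 2) + 2   # HSS's ephemeral exponent
    A, B = pow(g, a, P), pow(g, b, P)  # values exchanged in the clear
    k_ue = hashlib.sha256(str(pow(B, a, P)).encode()).hexdigest()
    k_hss = hashlib.sha256(str(pow(A, b, P)).encode()).hexdigest()
    return k_ue, k_hss
```

Both sides compute g^(ab) mod P, so the derived keys match; an eavesdropper sees only A and B.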
An effective digital identity management system is a critical enabler of cloud computing, since it supports the provision of the required assurances to the transacting parties. Such assurances sometimes require the disclosure of sensitive personal information. Given the prevalence of various forms of identity abuse on the Internet, a re-examination of the factors underlying cloud service acquisition has become critical and imperative. In order to provide better assurances, parties to cloud transactions must have confidence in service providers' ability and integrity in protecting their interests and personal information. A trusted cloud identity ecosystem could thus promote such user confidence and assurances. Using a qualitative research approach, this paper explains the role of trust in cloud service acquisition by organizations, focusing on the processes by which financial institutions in Ghana acquire cloud services. The study forms part of a comprehensive study on the monetization of personal identity information.
Voting among replicated data collection devices is a means to achieve dependable data delivery to the end-user in a hostile environment. Failures may occur during the data collection process, such as data corruption by malicious devices and security/bandwidth attacks on data paths. For a voting system, the QoS is characterized by how often correct data is delivered to the user in a timely manner and with low overhead. Prior works have focused on algorithm correctness issues and performance engineering of the voting protocol mechanisms. In this paper, we study methods for autonomic management of device replication in the voting system to deal with situations where the available network bandwidth fluctuates, the fault parameters change unpredictably, and the devices have battery energy constraints. We treat the voting system as a `black box' with programmable I/O behaviors. A management module exercises macroscopic control of the voting box using situational inputs such as application priorities, network resources, battery energy, and external threat levels.
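The underlying voting primitive is simple majority agreement: with 2f+1 replicas, up to f corrupted readings can be outvoted. A minimal sketch of that delivery rule (the paper's contribution is the autonomic management around such a box, not this primitive):

```python
from collections import Counter

def majority_vote(readings, quorum=None):
    """Deliver the value reported by a majority of replicated collectors.

    Returns None when no value reaches the quorum (default: strict majority),
    i.e. the voter cannot deliver a trustworthy result this round."""
    if not readings:
        return None
    quorum = quorum or len(readings) // 2 + 1
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= quorum else None
```

For instance, three replicas with one corrupted reading still deliver the correct value, while three mutually disagreeing replicas deliver nothing.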
Cloud computing is an emerging paradigm shifting the shape of computing models from a technology to a utility. However, security, privacy, and trust are among the issues that can subvert the benefits, and hence the wide deployment, of cloud computing. With the introduction of omnipresent mobile-based clients, the ubiquity of the model increases, suggesting still higher integration into daily life; nonetheless, the security issues rise to a higher degree as well. The constrained input methods for credentials and the vulnerable wireless communication links are among the factors giving rise to serious security issues. To strengthen access control over cloud resources, organizations now commonly deploy identity management systems (IdM). This paper shows that the most popular IdM, namely OAuth, has many weaknesses in its authorization architecture when operating in the scope of mobile cloud computing. In particular, the authors find two major issues in the current IdM. First, if the IdM system is compromised through malicious code, an attacker can obtain authorization for all the protected resources hosted on a cloud. Second, every communication link among client, cloud, and IdM carries the complete authorization token, which can allow an attacker, through traffic interception at any link, illegitimate access to protected resources. We also suggest a solution to the reported problems, and justify our arguments with experimentation and mathematical modeling.
Intrusion Detection Systems (IDS) have become a necessity in computer security because of the increase in unauthorized accesses and attacks. Intrusion detection systems can be classified as Host-based Intrusion Detection Systems (HIDS), which protect a certain host or system, and Network-based Intrusion Detection Systems (NIDS), which protect a network of hosts and systems. This paper addresses probe (reconnaissance) attacks, which try to collect any possible relevant information about the network. Network probe attacks come in two types: host sweeps and port scans. Host sweep attacks determine which hosts exist in the network, while port scan attacks determine the available services in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a time-delay neural network (TDNN) structure. The proposed system consists of five modules: a packet capture engine, a preprocessor, pattern recognition, classification, and a monitoring and alert module. We have tested the system in a real environment, where it has shown good capability in detecting attacks. In addition, the system has been tested on the DARPA 1998 dataset with a 100% recognition rate. Moreover, our system can recognize attacks in constant time.
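The temporal idea the TDNN exploits can be illustrated with a much simpler sliding-window heuristic (not the paper's neural approach): a port scan shows up as one source touching many distinct ports on a host within a short time window, and the window plus threshold below are invented parameters:

```python
from collections import defaultdict, deque

def detect_port_scans(packets, window=10.0, threshold=15):
    """Flag sources that contact many distinct ports on one host within a time window.

    packets: iterable of (timestamp, src, dst, dst_port) with non-decreasing timestamps."""
    recent = defaultdict(deque)  # (src, dst) -> deque of (timestamp, port)
    flagged = set()
    for ts, src, dst, port in packets:
        q = recent[(src, dst)]
        q.append((ts, port))
        while q and ts - q[0][0] > window:  # drop events older than the window
            q.popleft()
        if len({p for _, p in q}) >= threshold:
            flagged.add(src)
    return flagged
```

A host sweep detector is symmetric: count distinct destination hosts per source instead of distinct ports.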
This paper proposes content- and network-aware redundancy allocation algorithms for channel coding and network coding to optimally deliver data and video multicast services over error-prone wireless mesh networks. Each network node allocates redundancies for channel coding and network coding taking into account the content properties, channel bandwidth, and channel status to improve the end-to-end performance of data and video multicast applications. For data multicast applications, redundancies are allocated at each network node in such a way that the total amount of redundant bits transmitted is minimised. For video multicast applications, redundancies are allocated considering the priority of video packets such that the probability of delivering high-priority video packets is increased. This not only ensures continuous playback of a video but also increases the received video quality. Simulation results for bandwidth-sensitive data multicast applications exhibit up to a 10× reduction in the required amount of redundant bits compared to reference schemes to achieve a 100% packet delivery ratio. Similarly, for delay-sensitive video multicast applications, simulation results exhibit up to 3.5 dB PSNR gains in received video quality.
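The core trade-off can be sketched with a toy calculation: assuming an MDS-style erasure code (any k of k+r coded packets suffice) and i.i.d. packet loss, a node can compute the minimum redundancy r that meets a target delivery probability. This is a simplified stand-in for the paper's allocation algorithms:

```python
from math import comb

def delivery_prob(k, r, loss):
    """P(at least k of k+r coded packets survive i.i.d. loss), MDS-code assumption."""
    n = k + r
    return sum(comb(n, i) * (1 - loss) ** i * loss ** (n - i)
               for i in range(k, n + 1))

def min_redundancy(k, loss, target=0.99):
    """Smallest number of repair packets meeting the target delivery probability."""
    r = 0
    while delivery_prob(k, r, loss) < target:
        r += 1
    return r
```

Making r depend on the measured loss rate (and, for video, on packet priority) is precisely the kind of adaptation the paper's schemes perform per node.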
Cloud computing has significantly increased security threats because intruders can exploit the large amount of cloud resources for their attacks. However, most current security technologies do not provide early warnings about such attacks. This paper presents a finite-state hidden Markov prediction model that uses an adaptive risk approach to predict multi-staged cloud attacks. The risk model measures the potential impact of a threat on assets, given its occurrence probability. The attack prediction model was integrated with our autonomous cloud intrusion detection framework (ACIDF) to raise early warnings about attacks to the controller, so it can take proactive corrective actions before the attacks pose a serious security risk to the system. In our experiments on the DARPA 2000 dataset, the proposed prediction model successfully fired early warning alerts 39.6 minutes before the launch of the LLDDoS1.0 attack, giving the auto response controller ample time to take preventive measures.
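The basic inference step behind such a model is the HMM forward pass: given a sequence of alert observations, maintain a normalized belief over hidden attack stages. A toy two-state sketch with invented probabilities (the paper's model has more states and an added risk layer):

```python
def forward(obs, init, trans, emit):
    """HMM forward pass: normalized P(hidden state | observations so far).

    obs: observation indices; init[s], trans[p][s], emit[s][o] are probabilities."""
    n = len(init)
    belief = [init[s] * emit[s][obs[0]] for s in range(n)]
    z = sum(belief)
    belief = [b / z for b in belief]
    for o in obs[1:]:
        # predict (apply transition model), then update (apply emission model)
        belief = [sum(belief[p] * trans[p][s] for p in range(n)) * emit[s][o]
                  for s in range(n)]
        z = sum(belief)
        belief = [b / z for b in belief]
    return belief
```

With states {quiet, attack} and observations {normal, alert}, a run of alerts drives the attack-stage belief toward 1, which is the point where a predictor would fire its early warning.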
This paper presents a novel and efficient audio signal recognition algorithm with limited computational complexity. Because the audio recognition system will be used in real-world environments with high background noise, conventional speech recognition techniques are not directly applicable, since they perform poorly in such environments. We therefore introduce a new audio recognition algorithm optimized for mechanical sounds such as car horns, telephone rings, etc. It is a hybrid time-frequency approach that makes use of acoustic fingerprints to recognize audio signal patterns. The limited computational complexity is achieved through efficient use of the time domain and the frequency domain in two different processing phases, detection and recognition respectively; the transition between these two phases is carried out through a finite state machine (FSM) model. Simulation results show that the algorithm effectively recognizes audio signals in a noisy environment.
The notion of trust is considered the cornerstone of the patient-psychiatrist relationship. Thus, a trustworthy background is a fundamental requirement for the provision of effective Ubiquitous Healthcare (UH) services. In this paper, the issue of trust evaluation of UH providers when they register with the UH environment is addressed. For that purpose, a novel trust evaluation method is proposed, based on cloud theory and exploiting user profile attributes. This theory mimics human thinking about trust evaluation and captures the fuzziness and randomness of this uncertain reasoning. Two case studies are investigated through simulation in MATLAB in order to verify the effectiveness of the proposed method.
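Cloud theory represents a trust concept by three numerical characteristics: expectation Ex, entropy En (fuzziness), and hyper-entropy He (uncertainty about the fuzziness), and generates "cloud drops" from them. A minimal normal cloud generator sketch (illustrative parameters; not the paper's MATLAB model):

```python
import math
import random

def cloud_drops(ex, en, he, n=1000, seed=1):
    """Normal cloud generator: n drops (x, mu) from characteristics (Ex, En, He).

    Each drop samples En' ~ N(En, He) first, so He spreads the drops'
    membership values, mimicking the randomness of human trust judgments."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_p = abs(rng.gauss(en, he)) or 1e-12  # guard against a zero-width sample
        x = rng.gauss(ex, en_p)                 # drop position on the trust scale
        mu = math.exp(-(x - ex) ** 2 / (2 * en_p ** 2))  # membership degree in (0, 1]
        drops.append((x, mu))
    return drops
```

A provider's profile attributes would be mapped to (Ex, En, He) triples, and the resulting drop clouds compared to decide admission.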
An important challenge in networked control systems is to ensure the confidentiality and integrity of messages in order to secure the communication and prevent attackers or intruders from compromising the system. However, security mechanisms may jeopardize the temporal behavior of the network data communication because of their computation and communication overhead. In this paper, we study the effect of adding Hash-based Message Authentication Codes (HMAC) to a time-triggered networked control system. Time-Triggered Architectures (TTAs) provide a deterministic and predictable timing behavior that is used to ensure safety, reliability, and fault-tolerance properties. The paper analyzes the computation and communication overhead of adding HMAC and its impact on the performance of the time-triggered network. Experimental validation and performance evaluation results using a TTEthernet network are also presented.
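The computation side of that overhead can be microbenchmarked directly: time the per-message cost of producing an HMAC tag and compare it against the slot budget of the time-triggered schedule. A host-machine sketch only (the paper's numbers come from real TTEthernet hardware; payload size here is an arbitrary small frame):

```python
import hmac
import hashlib
import time

def hmac_overhead(key, payload, iters=1000):
    """Average wall-clock cost (seconds) of computing one HMAC-SHA256 tag."""
    t0 = time.perf_counter()
    for _ in range(iters):
        hmac.new(key, payload, hashlib.sha256).digest()
    return (time.perf_counter() - t0) / iters
```

If the per-message cost approaches the slack in a communication slot, the schedule must be lengthened, which is exactly the timing impact the paper quantifies; the 32-byte tag also adds communication overhead per frame.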
A scheme for preserving privacy in the MobilityFirst (MF) clean-slate future Internet architecture is proposed in this paper. The proposed scheme, called Anonymity in MobilityFirst (AMF), utilizes a three-tiered approach to effectively exploit inherent properties of the MF network, such as the Globally Unique Flat Identifier (GUID) and the Global Name Resolution Service (GNRS), to provide anonymity to users. While employing newly proposed schemes for exchanging keys between different tiers of routers to alleviate trust issues, the scheme uses multiple routers in each tier to prevent collusion among the routers of the three tiers that could expose end users.
Threats from database insiders and outsiders pose a big challenge to the protection of integrity and confidentiality in many database systems. To overcome this situation, a new domain called database forensics (DBF) has been introduced to specifically investigate these dynamic threats, which have posed many problems in the Database Management Systems (DBMS) of many organizations. DBF is a process to identify, collect, preserve, analyse, reconstruct, and document all digital evidence related to such threats. However, to this day, the domain still lacks a standard, generic knowledge base for its forensic investigation methods and tools, owing to the many issues and challenges in its complex processes. Therefore, this paper presents an approach adapted from the software engineering domain, called metamodelling, which unifies this complex DBF knowledge into a single artifact: a DBF metamodel. In the future, the DBF metamodel could benefit many DBF investigation users, such as database investigators, stakeholders, and other forensic teams, by offering various possible solutions for their problem domains.
Cognitive radio (CR) networks are becoming an increasingly important part of the wireless networking landscape due to the ever-increasing scarcity of spectrum resources throughout the world. CR is also becoming a popular wireless communication medium for disaster recovery communication networks. Although the operational aspects of CR are being explored vigorously, its security aspects have gained less attention from the research community. Existing research on CR networks mainly focuses on spectrum sensing and allocation, energy efficiency, high throughput, end-to-end delay, and other aspects of the network technology; very little focuses on security, and almost none on secure anonymous communication in CR networks (CRNs). In this research article we focus on secure anonymous communication in CR ad hoc networks (CRANs). We propose a secure anonymous routing scheme for CRANs, based on pairing-based cryptography, that provides source node, destination node, and location anonymity. Furthermore, the proposed scheme protects against various attacks feasible on CRANs.
In this paper, an edit detection method for forensic audio analysis is proposed. It develops and improves a previous method through changes in the signal processing chain and a novel detection criterion. As with the original method, electrical network frequency (ENF) analysis is central to the novel edit detector, for it allows monitoring anomalous variations of the ENF related to audio edit events. Working in an unsupervised manner, the edit detector compares the extent of ENF variations, centered at the nominal frequency, with a variable threshold that defines the upper limit for normal variations observed in unedited signals. The ENF variations caused by edits in the signal are likely to exceed the threshold, providing a mechanism for their detection. The proposed method is evaluated in both qualitative and quantitative terms on two distinct annotated databases. Results are reported for originally noisy database signals as well as versions of them further degraded under controlled conditions. A comparative performance evaluation in terms of equal error rate (EER) reveals that, for one of the tested databases, an improvement from 7% to 4% EER is achieved from the original to the new edit detection method. When the signals are amplitude-clipped or corrupted by broadband background noise, the performance of the novel method follows the same profile as that of the original method.
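The principle can be sketched in miniature: estimate the dominant frequency near the nominal ENF in each frame, then flag frames whose deviation exceeds a threshold. This toy uses a fixed threshold and a coarse DFT-correlation grid on clean synthetic tones, whereas the paper's detector uses a full ENF extraction chain and a variable, data-driven threshold:

```python
import math

def dominant_freq(frame, sr, f_lo=48.0, f_hi=53.0, step=0.5):
    """Dominant frequency near the nominal ENF, via direct DFT correlation on a grid."""
    best_f, best_mag = f_lo, -1.0
    f = f_lo
    while f <= f_hi + 1e-9:
        re = sum(x * math.cos(2 * math.pi * f * n / sr) for n, x in enumerate(frame))
        im = sum(x * math.sin(2 * math.pi * f * n / sr) for n, x in enumerate(frame))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = f, mag
        f += step
    return best_f

def edit_suspects(signal, sr, frame_len, nominal=50.0, threshold=0.5):
    """Indices of frames whose ENF estimate deviates from nominal beyond the threshold."""
    flags = []
    for i, start in enumerate(range(0, len(signal) - frame_len + 1, frame_len)):
        f = dominant_freq(signal[start:start + frame_len], sr)
        if abs(f - nominal) > threshold:
            flags.append(i)
    return flags
```

Splicing together recordings captured at different mains-frequency drifts produces exactly the kind of frame-local deviation this flags.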