Biblio
Security analysis requires some degree of knowledge to align threats to vulnerabilities in information technology. Despite the abundance of security requirements, the evidence suggests that security experts are not applying these checklists. Instead, they default to their background knowledge to identify security vulnerabilities. To better understand the different effects of security checklists, analysis, and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security requirements analysis exercises. Participants were asked to analyze three kinds of artifacts (source code, data flow diagrams, and network diagrams) for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness theory to elicit responses that were analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend, and project future threats, and how these patterns relate to selecting security mitigations. Based on this analysis, we developed new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis can increase security requirements coverage. We discuss how our method could be adapted and applied to improve training and educational instruments for security analysts.
Smart home automation and IoT promise to bring many advantages, but they also expose their users to certain security and privacy vulnerabilities. For example, leaking information about a person's absence from home or the medication somebody is taking may have serious security and privacy consequences for home users and potential legal implications for providers of home automation and IoT platforms. We envision that a new sub-ecosystem within an existing smartphone ecosystem will be a suitable platform for distributing apps for smart home and IoT devices. Android is increasingly becoming a popular platform for smart home and IoT devices and applications. Built-in security mechanisms in ecosystems such as Android have limitations that can be exploited by malicious apps to leak users' sensitive data to unintended recipients. For instance, Android enforces that an app requires the Internet permission in order to access a web server, but it does not control which servers the app talks to or what data it shares with other apps. Therefore, sub-ecosystems that enforce additional fine-grained custom policies on top of the existing policies of smartphone ecosystems are necessary for smart home and IoT platforms. To this end, we have built a tool that enforces additional policies on inter-app interactions and permissions of Android apps. We have done preliminary testing of our tool on three proprietary apps developed by a future provider of a home automation platform. Our initial evaluation demonstrates that it is possible to develop mechanisms that allow the definition and enforcement of custom security policies appropriate for ecosystems like smart home automation and IoT.
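The abstract does not describe the tool's policy language, so the following is a purely hypothetical sketch of what a fine-grained inter-app policy check on top of coarse platform permissions might look like; the package names, action label, and first-match semantics are all invented for illustration.

```python
# Hypothetical policy table; names, actions, and first-match semantics are invented.
POLICIES = [
    {"source": "com.example.thermostat", "target": "com.example.cloudsync",
     "action": "SEND_SENSOR_DATA", "decision": "deny"},
    {"source": "*", "target": "*", "action": "*", "decision": "allow"},
]

def check_interaction(source, target, action):
    """Return the decision of the first rule matching the inter-app interaction."""
    for rule in POLICIES:
        if all(rule[field] in ("*", value)
               for field, value in (("source", source),
                                    ("target", target),
                                    ("action", action))):
            return rule["decision"]
    return "deny"  # fail closed when no rule matches

print(check_interaction("com.example.thermostat",
                        "com.example.cloudsync", "SEND_SENSOR_DATA"))  # deny
```

The point of such a layer is that it can veto an interaction even when both apps hold the coarse platform permissions involved.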
Hardware Trojans (HTs) are an emerging threat that intrudes into the design and manufacturing cycle of chips, and they have lately gained much attention due to the severity of the problems they pose to the chip supply chain. Typically, hardware Trojans are not detected during the usual manufacturing testing because they are activated by rare events. A class of published HTs is based on the geometrical characteristics of the circuit and claims to be undetectable, in the sense that their activation cannot be detected. In this work, we study the effect of continuously monitoring the inputs of the module under test with respect to the detection of HTs possibly inserted in the module, either at the design or the manufacturing stage.
This paper conducts a trial in establishing a systematic instrument for evaluating the performance of marine information systems. The Analytic Network Process (ANP) was introduced to determine the relative importance of a set of interdependent criteria of concern to the stakeholders (shipper/consignee, customs broker, forwarder, and container yard). Three major information platforms (MTNet, TradeVan, and Nice Shipping) in Taiwan were evaluated according to the criteria derived from ANP. Results show that the performance of a marine information system can be divided into three constructs, namely: Safety and Technology (3 items), Service (3 items), and Charge (3 items). Safety and Technology is the most important construct of marine information system evaluation, whereas Charge is the least important. This study gives insights into improving the performance of existing marine information systems and serves as a useful reference for future freight information platforms.
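ANP derives criterion weights from pairwise comparison matrices and then arranges the local priorities in a supermatrix to capture interdependence. The supermatrix step is omitted here; this minimal sketch shows only the shared building block, the principal-eigenvector priority vector, applied to an invented judgment matrix over the three constructs named above.

```python
import numpy as np

def priority_vector(pairwise):
    """Priorities from a pairwise comparison matrix via its principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Hypothetical judgments: Safety & Technology vs Service vs Charge.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
print(priority_vector(A))  # roughly [0.63, 0.26, 0.11]
```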
Almost all commodity IT devices include firmware and software components from non-US suppliers, potentially introducing grave vulnerabilities to homeland security by enabling cyber-attacks via flaws injected into these devices through the supply chain. However, determining that a given device is free of any and all implementation flaws is computationally infeasible in the general case; hence a critical part of any vetting process is prioritizing what kinds of flaws are likely to enable potential adversary goals. We present Theseus, a four-year research project sponsored by the DARPA VET program. Theseus will provide technology to automatically map and explore the firmware/software (FW/SW) architecture of a commodity IT device and then generate attack scenarios for the device. From these device attack scenarios, Theseus then creates a prioritized checklist of FW/SW components to check for potential vulnerabilities. Theseus combines static program analysis, attack graph generation algorithms, and a Boolean satisfiability solver to automate the checklist generation workflow. We describe how Theseus exploits analogies between the commodity IT device problem and attack graph generation for networks. We also present a novel approach called Component Interaction Mapping to recover a formal model of a device's FW/SW architecture from which attack scenarios can be generated.
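Theseus's actual analyses are not detailed in the abstract; the toy sketch below (component names and interactions invented) only illustrates the general pattern the abstract describes: treat the recovered FW/SW architecture as a graph, enumerate attack scenarios as paths from attacker-reachable surfaces to a critical asset, and prioritize a vetting checklist by how many scenarios traverse each component.

```python
from collections import Counter

# Invented component-interaction map: an edge A -> B means data or control
# can flow from component A into component B.
INTERACTIONS = {
    "usb_stack":       ["firmware_loader"],
    "nic_driver":      ["firmware_loader", "mgmt_console"],
    "firmware_loader": ["boot_rom"],
    "mgmt_console":    ["boot_rom"],
}
ENTRY_POINTS = ["usb_stack", "nic_driver"]  # attacker-reachable surfaces
CRITICAL = "boot_rom"                       # hypothetical adversary goal

def attack_paths(node, goal, path=()):
    """Enumerate attack scenarios as acyclic paths from an entry to the goal."""
    path = path + (node,)
    if node == goal:
        yield path
        return
    for nxt in INTERACTIONS.get(node, []):
        if nxt not in path:
            yield from attack_paths(nxt, goal, path)

scenarios = [p for e in ENTRY_POINTS for p in attack_paths(e, CRITICAL)]
# A checklist could be prioritized by how many scenarios traverse each component.
print(Counter(c for p in scenarios for c in p).most_common())
```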
This paper describes multiple system security engineering techniques for assessing system security vulnerabilities and discusses the application of these techniques at different system maturity points. The proposed vulnerability assessment approach allows a systems engineer to identify and assess vulnerabilities early in the life cycle and to continually increase the fidelity of the vulnerability identification and assessment as the system matures.
Hadoop has become increasingly popular as it rapidly processes data in parallel. Cloud computing gives reliability, flexibility, scalability, elasticity, and cost savings to cloud users, so deploying Hadoop in the cloud can benefit Hadoop users. Our evaluation shows that various internal cloud attacks can bypass current Hadoop security mechanisms, and that compromised Hadoop components can be used to threaten the overall Hadoop system. It is urgent to improve compromise resilience so that Hadoop can maintain a relatively high security level when parts of it are compromised. Hadoop has two vulnerabilities that dramatically impact its compromise resilience: an overloaded authentication key, and the lack of fine-grained access control at the data-access level. We developed a security enhancement for public cloud-based Hadoop, named SEHadoop, to improve compromise resilience by enhancing isolation among Hadoop components and enforcing least access privilege for Hadoop processes. We have implemented the SEHadoop model and demonstrated that it fixes the above vulnerabilities with minimal or no run-time overhead, and effectively resists related attacks.
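SEHadoop's internal design is not given in the abstract. As a hedged illustration of the least-privilege direction it names (replacing one overloaded credential with narrowly scoped, short-lived tokens), here is a minimal HMAC-based sketch; the component and resource names, token format, and key handling are all invented.

```python
import hashlib, hmac, json, time

MASTER_KEY = b"hypothetical-cluster-secret"  # invented; one secret per cluster

def issue_token(component, resource, ttl_s=300):
    """Issue a narrowly scoped token: one component, one resource, short expiry."""
    claims = {"component": component, "resource": resource,
              "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(MASTER_KEY, body, hashlib.sha256).hexdigest()
    return body, tag

def verify_token(body, tag, component, resource):
    """Check integrity first, then that the scope and expiry match the request."""
    expected = hmac.new(MASTER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    claims = json.loads(body)
    return (claims["component"] == component
            and claims["resource"] == resource
            and claims["exp"] > time.time())

body, tag = issue_token("datanode-7", "/data/block_0042")
print(verify_token(body, tag, "datanode-7", "/data/block_0042"))  # True
print(verify_token(body, tag, "datanode-7", "/data/block_0099"))  # False
```

A stolen token of this kind only exposes one resource for a few minutes, which is the compromise-resilience property the abstract argues an overloaded key lacks.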
Physical perturbations can be performed against embedded systems that contain valuable data. Such devices, and in particular smart cards, are targeted because potential attackers physically hold them. Embedded system security must therefore hold against intentional hardware failures that can result in software errors. With malicious intent, an attacker could exploit such errors to extract secret data or disrupt a transaction. Simulation techniques help point out fault-injection vulnerabilities at an early stage of the development process. This paper proposes a generic fault-injection simulation tool whose particularity is to embed the injection mechanism into the smart card source code. By its embedded nature, the Embedded Fault Simulator (EFS) allows us to perform fault-injection simulations and side-channel analyses simultaneously. It makes it possible to achieve combined attacks and multiple-fault attacks, and to perform backward analyses. We appraise our approach on real, modern, and complex smart card systems under data and control-flow fault models. We illustrate the EFS capacities by performing a practical combined attack on an Advanced Encryption Standard (AES) implementation.
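The EFS itself is embedded in smart card code; the following is only a minimal Python analogue of the idea, not the EFS. A toy round function stands in for a cipher, a configurable single-bit fault is injected at a chosen (round, byte, bit), and the clean/faulty differential shows how a simulated fault propagates; everything here is invented for illustration.

```python
def toy_round(state, key):
    """Stand-in for a cipher round; the real EFS targets actual smart card code."""
    return [(s ^ key) & 0xFF for s in state]

def run(state, key, rounds=4, fault=None):
    """Execute the toy cipher, optionally flipping one bit at (round, byte, bit)."""
    for r in range(rounds):
        if fault is not None and fault[0] == r:
            _, byte, bit = fault
            state = state[:byte] + [state[byte] ^ (1 << bit)] + state[byte + 1:]
        state = toy_round(state, key)
    return state

clean  = run([0x32, 0x88, 0x31, 0xE0], key=0x2B)
faulty = run([0x32, 0x88, 0x31, 0xE0], key=0x2B, fault=(2, 1, 7))
# The clean/faulty differential exposes where the fault was injected.
print([c ^ f for c, f in zip(clean, faulty)])  # [0, 128, 0, 0]
```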
We consider the estimation of a scalar state based on m measurements that can be potentially manipulated by an adversary. The attacker is assumed to have full knowledge about the true value of the state to be estimated and about the value of all the measurements. However, the attacker has limited resources and can only manipulate up to l of the m measurements. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the "worst-case" expected cost against all possible manipulations by the attacker. We show that if the attacker can manipulate at least half the measurements (l ≥ m/2), then the optimal worst-case estimator should ignore all measurements and be based solely on the a priori information. We provide the explicit form of the optimal estimator when the attacker can manipulate less than half the measurements (l < m/2), which is based on (m choose 2l) local estimators. We further prove that such an estimator can be reduced to simpler forms in two special cases, i.e., when the estimator is symmetric and monotone or when m = 2l + 1. Finally, we apply the proposed methodology to the case of Gaussian measurements.
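The paper's explicit optimal estimator is not reproduced in the abstract; the sketch below only mirrors the structure described there: fall back to the prior when l ≥ m/2, and otherwise form (m choose 2l) local estimates, one per choice of 2l discarded measurements. The midpoint combination rule and the use of the sample mean as the local estimator are invented placeholders, not the paper's construction.

```python
from itertools import combinations
from statistics import mean

def robust_estimate(measurements, l, prior_mean=0.0):
    """Structure only: prior if l >= m/2, else combine C(m, 2l) local estimates."""
    m = len(measurements)
    if 2 * l >= m:
        return prior_mean                # measurements carry no worst-case value
    local = []
    for drop in combinations(range(m), 2 * l):
        kept = [x for i, x in enumerate(measurements) if i not in drop]
        local.append(mean(kept))         # one local estimator per discarded set
    return 0.5 * (min(local) + max(local))  # placeholder combination rule

print(robust_estimate([1.0, 1.1, 0.9, 5.0, 1.05], l=1))  # outlier has bounded pull
```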
Cyber-attacks have evolved to become more sophisticated, employing combinations of attack methodologies with greater impact. For instance, Advanced Persistent Threats (APTs) employ a set of stealthy hacking processes running over a long period of time, making them much harder to detect. With this trend, big-data security analytics has attracted greater attention, since identifying such modern attacks requires large-scale data processing and analysis. In this paper, we present SEAS-MR (Security Event Aggregation System over MapReduce), which facilitates scalable security event aggregation for comprehensive situation analysis. The system provides three core functions: (i) periodic aggregation, (ii) on-demand aggregation, and (iii) query support for effective analysis. We describe our design and implementation of the system over MapReduce and high-level query languages, and report experimental results collected on a Hadoop cluster under extensive settings for performance evaluation and design impacts.
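SEAS-MR's actual job design is not given in the abstract; this minimal sketch only shows the generic MapReduce aggregation pattern such a system builds on, with an invented event schema, a map phase keying events by (source, type), and a simulated shuffle feeding the reduce phase.

```python
from collections import defaultdict

# Invented security events: (timestamp, source IP, event type).
EVENTS = [(1, "10.0.0.5", "login_fail"), (2, "10.0.0.5", "login_fail"),
          (3, "10.0.0.9", "port_scan"),  (4, "10.0.0.5", "login_fail")]

def mapper(event):
    ts, src, etype = event
    yield (src, etype), 1                # key each event by (source, type)

def reducer(key, values):
    return key, sum(values)              # aggregate counts per key

# Simulate the shuffle phase that MapReduce performs between map and reduce.
shuffled = defaultdict(list)
for e in EVENTS:
    for k, v in mapper(e):
        shuffled[k].append(v)
print([reducer(k, vs) for k, vs in shuffled.items()])
# -> [(('10.0.0.5', 'login_fail'), 3), (('10.0.0.9', 'port_scan'), 1)]
```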
Honeypots and honeynets are popular tools in the areas of network security and network forensics. The deployment and usage of these tools are influenced by a number of technical and legal issues, which need to be carefully considered together. In this paper, we outline the privacy issues of honeypots and honeynets with respect to their technical aspects. The paper discusses the legal framework of privacy, the legal grounds for data processing, and data collection. The analysis of legal issues is based on EU law and is supported by discussions on privacy and related issues. This paper is one of the first to discuss in detail the privacy issues of honeypots and honeynets in accordance with EU law.
In our previous work [1], we presented a study of using performance escalation to automatically detect Distributed Denial of Service (DDoS) attacks. We propose to enhance security threat detection by using mobile phones as detectors that identify outliers from normal traffic patterns as threats. The mobile solution makes detection portable to any service. This paper also shows that the same detection method works for advanced persistent threats.
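The abstract does not specify the outlier test used; as a minimal stand-in, the sketch below flags a new traffic sample as anomalous when it deviates from a sliding window of normal rates by more than k standard deviations. The window values, threshold k, and request-rate framing are all invented for illustration.

```python
from statistics import mean, stdev

def is_outlier(window, new_value, k=3.0):
    """Flag samples more than k standard deviations from the window mean."""
    mu, sigma = mean(window), stdev(window)
    return sigma > 0 and abs(new_value - mu) > k * sigma

normal = [120, 130, 125, 118, 135, 128, 122]   # requests/sec under normal load
print(is_outlier(normal, 131))   # False: ordinary fluctuation
print(is_outlier(normal, 900))   # True: possible DDoS-style escalation
```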
Online Social Networks exploit a lightweight process to identify their users so as to facilitate their fast adoption. However, such convenience comes at the price of making legitimate users subject to different threats created by fake accounts. Therefore, there is a crucial need to empower users with tools that help them assign a level of trust to whomever they interact with. To cope with this issue, in this paper we introduce a novel model, DIVa, that leverages mining techniques to find correlations among user profile attributes. These correlations are discovered not from the user population as a whole, but from individual communities, where the correlations are more pronounced. DIVa exploits a decentralized learning approach and ensures privacy preservation, as each node in the OSN independently processes its local data and is required to know only its direct neighbors. Extensive experiments using real-world OSN datasets show that DIVa is able to extract fine-grained community-aware correlations among profile attributes, with average improvements of up to 50% over the global approach.
The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
With the growth of the Internet, web applications are becoming very popular in user communities. However, the presence of security vulnerabilities in the source code of these applications is rapidly raising the cybercrime rate. It is necessary to detect and mitigate these vulnerabilities before they are exploited in the execution environment. Recently, the Open Web Application Security Project (OWASP) and the Common Weakness Enumeration (CWE) reported Cross-Site Scripting (XSS) as one of the most serious vulnerabilities in web applications. Though many vulnerability detection approaches have been proposed in the past, existing approaches have limitations in terms of false positive and false negative results. This paper proposes a context-sensitive approach based on static taint analysis and pattern matching techniques to detect and mitigate XSS vulnerabilities in the source code of web applications. The proposed approach has been implemented in a prototype tool and evaluated on a public data set of 9408 samples. Experimental results show that the tool outperforms existing popular open-source tools in the detection of XSS vulnerabilities.
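The paper's context-sensitive analysis is far richer than what fits here; the deliberately naive sketch below only shows the core ingredients the abstract names: regex patterns for taint sources, output sinks, and sanitizers, plus taint propagation through assignments. The PHP-style input lines and the substring-based propagation are invented simplifications.

```python
import re

SOURCES    = re.compile(r"\$_(GET|POST|COOKIE)\[")           # user-controlled input
SINKS      = re.compile(r"\b(echo|print)\b")                  # output to the page
SANITIZERS = re.compile(r"\b(htmlspecialchars|htmlentities)\s*\(")

def scan(php_lines):
    """Flag lines where tainted data reaches an output sink unsanitized."""
    tainted, findings = set(), []
    for no, line in enumerate(php_lines, 1):
        assign = re.match(r"\s*(\$\w+)\s*=\s*(.*)", line)
        if assign:
            var, rhs = assign.groups()
            # Naive propagation: substring match is enough for this toy example.
            dirty = SOURCES.search(rhs) or any(v in rhs for v in tainted)
            if dirty and not SANITIZERS.search(rhs):
                tainted.add(var)
            else:
                tainted.discard(var)
        if SINKS.search(line) and (SOURCES.search(line)
                                   or any(v in line for v in tainted)):
            if not SANITIZERS.search(line):
                findings.append((no, line.strip()))
    return findings

code = ['$name = $_GET["name"];',
        '$safe = htmlspecialchars($_GET["name"]);',
        'echo $name;',      # flagged: tainted and unsanitized
        'echo $safe;']      # not flagged: sanitized at assignment
print(scan(code))           # -> [(3, 'echo $name;')]
```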
Summary form only given. Aadhaar, India's Unique Identity Project, has become the largest biometric identity system in the world, already covering more than 920 million people. Building such a massive system required significant design thinking, alignment to the core strategy, and a technology platform that is scalable enough to meet the project's objectives. The entire technology architecture behind Aadhaar is based on principles of openness, linear scalability, strong security, and, most importantly, vendor neutrality. All application components are built using open-source components and open standards. The Aadhaar system currently runs across two data centers within India managed by UIDAI, handles 1 million enrollments a day, and at peak performs about 900 trillion biometric matches a day. The current system holds about 8 PB (8000 terabytes) of raw data. The Aadhaar authentication service, which requires sub-second response time, is already live and can handle more than 100 million authentications a day. In this talk, the speaker, who has been the Chief Architect of Aadhaar since its inception, shares his experience of building the system.
Digital forensics refers to the application of scientific techniques in the investigation of a crime, specifically to identify or validate the involvement of a suspect in an activity leading to that crime. Network forensics particularly deals with the monitoring of network traffic, with the aim of tracing suspected activity within normal traffic or identifying abnormal patterns that may give clues about an attack. Network forensics, a quite valuable part of the investigation process, presents certain challenges, including problems in accessing the network devices of a cloud architecture, handling large amounts of network traffic, and the rigorous processing required to analyse a huge volume of data, a large proportion of which may later prove irrelevant. Cloud computing technology offers services to its clients remotely from a shared pool of resources, as per each client's customized requirements, at any time, from anywhere. Cloud computing has attained tremendous popularity recently, leading to its vast and rapid deployment; however, privacy and security concerns have increased at the same rate, since data and applications are outsourced to a third party. Security concerns about the cloud architecture have come up as the prime barrier hindering a major shift of industry towards the cloud model, despite the significant advantages of that architecture. Cloud computing presents aggravated and specific challenges for network forensics. In this paper, I review the challenges and issues faced in conducting network forensics, particularly in the cloud computing environment. The study covers the limitations that a network forensic expert may confront during an investigation in a cloud environment. I have categorized the challenges presented to network forensics in cloud computing into various groups. The challenges in each group can be handled appropriately by forensic experts, cloud service providers, or forensic tools, whereas the leftover challenges are declared as beyond their control.
Cloud computing is nowadays a large and essential environment for data storage, and preserving the privacy of that data is a major concern. Cloud data security is the most important concern for a client using the cloud services provided by different service providers, and there can be major security conflicts between the client and the service provider. To get past those issues, a third-party auditor is used to provide assurance over the data in the environment. Storage systems for the cloud still face many fundamental challenges, among which storage space and security are generally the top concerns. To address these security issues, we have proposed a third-party authentication system that supports not only simplified data storage but also secure data acquisition in the cloud environment. Finally, we have performed security analysis as well as performance analysis. The results show that the proposed scheme significantly increases efficiency in maintaining highly secure data storage and acquisition. The proposed method also helps to minimize cost and increases communication efficiency in the cloud environment.
Based on an analysis of the relationship between challenger and attester in the remote attestation process, we propose a dynamic remote attestation model based on concerns. By combining the trusted root with a dynamic credible monitoring module, the model converts the measurement of all loaded modules in the integrity measurement architecture into attestation of the basic computing environment, the dynamic credible monitoring module, and the requested service software module. We discuss the rationality of the model. The model uses a Merkle hash tree to store application software integrity metrics, which both protects the privacy of the other party's application software and improves the efficiency of remote attestation. An experimental prototype system shows that the model can verify the dynamic behavior of software, making up for the shortcomings of static measurement.
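A minimal sketch of why a Merkle hash tree gives both properties the abstract claims: one module's integrity metric can be attested with a logarithmic-size proof (efficiency) that reveals only sibling hashes, not the other modules' contents (privacy). The module names are invented; the tree and proof logic are standard.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    """Build all levels of a Merkle tree over the leaf values."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]            # duplicate last node on odd levels
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def proof(levels, index):
    """Sibling hashes needed to verify one leaf without revealing the others."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child)
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

modules = [b"kernel", b"monitor", b"service-app", b"libcrypto"]  # invented names
levels = build_levels(modules)
root = levels[-1][0]                         # the value the attester signs
print(verify(b"service-app", proof(levels, 2), root))  # True
```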
Vulnerabilities usually represent the risk level of software, so it is of high value to forecast vulnerabilities in order to evaluate the security level of software. Current research mainly focuses on predicting the number of vulnerabilities or the occurrence time of vulnerabilities; however, to the best of our knowledge, no other research focuses on predicting vulnerabilities' severity, which we think is an important aspect reflecting vulnerabilities and software security. To compensate for this deficiency, we borrow the grey model GM(1,1) from grey system theory to forecast the severity of vulnerabilities. The experiment is carried out on real data collected from CVE and proves the feasibility of our prediction method.
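GM(1,1) itself is a standard grey-system model; a compact implementation under the usual formulation follows (accumulated generating operation, background values, least-squares fit of the development coefficient a and grey input b, then inverse AGO for the forecast). The toy severity series in the example is invented; the paper fits real CVE data.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development coeff. a, grey input b

    def x1_hat(k):                               # fitted AGO series, 0-based index k
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    ks = np.arange(n, n + steps)
    return x1_hat(ks) - x1_hat(ks - 1)           # inverse AGO gives x0 forecasts

# Invented example: forecast the next two periods' average severity scores.
print(gm11_forecast([6.1, 6.4, 6.8, 7.0, 7.3], steps=2))
```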
Nowadays, Online Social Networks (OSNs) are very popular and have become an integral part of our lives. People depend on Online Social Networks for various purposes. The activities of most users are normal, but a few users exhibit unusual and suspicious behavior. We term this suspicious and unusual behavior malicious behavior. Malicious behavior in Online Social Networks includes a wide range of unethical activities and actions performed by individuals or communities to manipulate the thought processes of OSN users in order to fulfill their vested interests. Such malicious behavior needs to be checked and its effects minimized, which requires a proper detection and containment strategy. Such a strategy will protect millions of users across OSNs from misinformation and security threats. In this paper, we discuss the different studies performed in the area of malicious behavior analysis and propose a framework for the detection of malicious behavior in OSNs.
Turbo codes have been one of the important subjects in coding theory since 1993. These codes have a low Bit Error Rate (BER), but decoding complexity and delay are big challenges. On the other hand, considering the complexity and delay of separate blocks for coding and encryption, combining these processes guarantees both the security and the reliability of the communication system. In this paper, a secure decoding algorithm running in parallel on General-Purpose Graphics Processing Units (GPGPUs) is proposed. This is the first prototype of a fast and parallel Joint Channel-Security Coding (JCSC) system. Despite the encryption process, the algorithm maintains the desired BER and increases decoding speed. We considered several techniques for parallelism: (1) distributing the decoding load of a code word between multiple cores, (2) simultaneous decoding of several code words, and (3) using protection techniques to prevent performance degradation. We also propose two kinds of optimization to increase decoding speed: (1) memory access improvement, and (2) the use of new GPU properties such as concurrent kernel execution and advanced atomics to compensate for buffering latency.
More and more systems use mobile devices to perform sensing tasks, but these increase the risk of leaking personal data and compromising privacy. Data hiding is one of the important tools for information security. Even though many data hiding algorithms have worked on providing more hiding capacity or higher PSNR, few algorithms can control PSNR effectively while ensuring hiding capacity. In this paper, we first propose PSNR-Controllable Data Hiding (PCDH), a novel encoding plan for data hiding with controllable PSNR based on LSB substitution. In PCDH, we use a remainder algorithm to calculate the hidden information and hide the secret information in the last x LSBs of every pixel. Theoretical proof shows that this method can control the deviation of the stego image from the cover image, and can control the PSNR by adjusting parameters in the remainder calculation. We then design encoding and decoding algorithms with low computational complexity. Experimental results show that PCDH can keep the PSNR in a given range while ensuring high hiding capacity. In addition, it resists some steganalysis well. Compared to other algorithms, PCDH achieves a better tradeoff among PSNR, hiding capacity, and computational complexity.
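PCDH's remainder-based PSNR control is specific to the paper and is not reproduced here; this sketch shows only the baseline it builds on: plain LSB substitution into the last x LSBs of each pixel plus the PSNR measurement, making visible the capacity/PSNR tradeoff that PCDH tunes. The payload and random cover image are invented.

```python
import numpy as np

def embed_lsb(cover, bits, x):
    """Plain LSB substitution: hide bit string `bits` in the last x LSBs per pixel."""
    stego = cover.copy().ravel()
    chunks = [bits[i:i + x] for i in range(0, len(bits), x)]
    assert len(chunks) <= stego.size, "cover image too small for payload"
    for p, chunk in enumerate(chunks):
        k = len(chunk)
        stego[p] = ((int(stego[p]) >> k) << k) | int(chunk, 2)  # replace k LSBs
    return stego.reshape(cover.shape)

def psnr(cover, stego):
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, "0110100001101001", x=2)   # hides the bytes of "hi"
print(psnr(cover, stego))   # larger x: more capacity, lower PSNR
```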
The strength of the security and privacy of any cryptographic mechanism that uses random numbers requires that the generated numbers have two important properties, namely (1) uniform distribution and (2) independence. With the growth of the Internet, many devices connected to the Internet host sensors. One proposed idea is to use sensor data as the seed for a Random Number Generator (RNG), since sensors measure physical phenomena that exhibit randomness over time. The random numbers generated from sensor data can then be used by cryptographic algorithms in Internet activities. However, sensor data also poses a weakness: sensors may be under adversarial control, which may lead to a predictable sequence that breaks security and privacy. This paper proposes a wash-rinse-spin approach to process the raw sensor data, increasing the randomness in the seed value. The sequences generated from two sensors are combined by a decimation method to improve unpredictability. This makes sensor data more secure for generating random numbers, preventing attackers from learning the random sequence through adversarial control.
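The abstract does not spell out the wash-rinse-spin steps, so the sketch below substitutes a standard whitening technique (von Neumann debiasing) as a stand-in for the cleaning stage, and a simple keep-where-the-selector-is-1 rule as the decimation combiner over two streams. The sensor bit streams are invented, and both choices are assumptions, not the paper's exact constructions.

```python
def von_neumann(bits):
    """Whitening stand-in: debias pairs (01 -> 0, 10 -> 1, drop 00 and 11)."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

def decimate(selector, source):
    """Combine two streams: keep source bits where the selector bit is 1."""
    return [s for sel, s in zip(selector, source) if sel == 1]

accel = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0]   # invented raw bits from sensor A
gyro  = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # invented raw bits from sensor B
seed_bits = decimate(von_neumann(accel), von_neumann(gyro))
print(seed_bits)   # an attacker controlling one sensor still cannot fix the output
```

The design intuition matches the abstract's claim: even if one sensor is adversarially controlled, the combiner makes the final seed depend on both streams.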