Biblio
This paper proposes a practical time-phased model to analyze the vulnerability of power systems over a time horizon in which the scheduled maintenance of network facilities is considered. The model is intended as an efficient tool that system operators can use to assess how vulnerable their systems become given a set of scheduled facility outages. The final model is presented as a single-level Mixed-Integer Linear Programming (MILP) problem solvable with commercially available software. Results obtained on the well-known IEEE 24-Bus Reliability Test System (RTS) demonstrate the applicability of the model and highlight the necessity of considering scheduled facility outages when assessing the vulnerability of a power system.
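For flavour, the sketch below sets up a toy version of the kind of subproblem such a model embeds: a three-bus DC power flow in which a scheduled line outage forces load shedding. It uses PuLP with entirely hypothetical network data; the paper's actual MILP (with binary decisions and time phasing) is far richer, and with the outage fixed as a parameter this toy reduces to an LP.

```python
# Minimal sketch (not the authors' formulation): DC power-flow load-shed
# minimization under one scheduled line outage. All data are hypothetical.
import pulp

buses = [1, 2, 3]
lines = {(1, 2): {"b": 10.0, "cap": 1.0},
         (1, 3): {"b": 10.0, "cap": 1.0},
         (2, 3): {"b": 10.0, "cap": 1.0}}
gen_max = {1: 2.0, 2: 0.0, 3: 0.0}        # generation limits (p.u.)
load = {1: 0.0, 2: 0.8, 3: 0.8}           # demands (p.u.)
scheduled_out = {(1, 3)}                  # line under maintenance

prob = pulp.LpProblem("load_shed", pulp.LpMinimize)
g = {i: pulp.LpVariable(f"g_{i}", 0, gen_max[i]) for i in buses}
shed = {i: pulp.LpVariable(f"s_{i}", 0, load[i]) for i in buses}
theta = {i: pulp.LpVariable(f"th_{i}", -3.14, 3.14) for i in buses}
f = {(i, j): pulp.LpVariable(f"f_{i}_{j}", -d["cap"], d["cap"])
     for (i, j), d in lines.items()}

for (i, j), d in lines.items():
    if (i, j) in scheduled_out:
        prob += f[(i, j)] == 0            # outaged line carries no flow
    else:
        prob += f[(i, j)] == d["b"] * (theta[i] - theta[j])

for i in buses:                           # nodal power balance
    inflow = pulp.lpSum(f[l] for l in lines if l[1] == i)
    outflow = pulp.lpSum(f[l] for l in lines if l[0] == i)
    prob += g[i] + inflow - outflow == load[i] - shed[i]

prob += theta[1] == 0                     # slack bus reference angle
prob += pulp.lpSum(shed.values())         # objective: total load shed
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total shed:", pulp.value(prob.objective))   # 0.6 p.u. here
```

With line (1,3) out, bus 3 can only be fed through the capacity-limited corridor via bus 2, so 0.6 p.u. of load must be shed; this is the kind of vulnerability the time-phased model is designed to expose.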
Data randomization or scrambling has been used effectively in various applications to improve data security. In this paper, we use the idea of data randomization to proactively randomize spectrum (re)allocation and thereby improve connections' security. Since random (re)allocation is known to fragment the spectrum and thus increase blocking in elastic optical networks, we analyze the tradeoff between system performance and security. To this end, in addition to spectrum randomization, we invoke an on-demand defragmentation scheme every time a request is blocked due to spectrum fragmentation. We model the occupancy pattern of an elastic optical link (EOL) using a multi-class continuous-time Markov chain (CTMC) under the random-fit spectrum allocation method. Numerical results show that both blocking and security can be improved at a particular randomization process (RP) arrival rate, while further increasing the RP arrival rate improves connections' security at the cost of an increase in overall blocking.
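As a hedged illustration of the allocation policy being modeled (a Monte Carlo toy, not the paper's CTMC), the snippet below implements random-fit allocation on a single link and counts blocking; the slot count and request widths are assumed.

```python
# Toy random-fit spectrum allocation on one elastic optical link.
import random

S = 32                      # spectrum slots on the link (assumed)
occupied = [None] * S       # slot -> connection id or None

def free_blocks(width):
    """All start indices of contiguous free blocks of the given width."""
    return [i for i in range(S - width + 1)
            if all(occupied[j] is None for j in range(i, i + width))]

def random_fit(conn_id, width):
    """Allocate into a uniformly random feasible block; False = blocked."""
    starts = free_blocks(width)
    if not starts:
        return False
    start = random.choice(starts)
    for j in range(start, start + width):
        occupied[j] = conn_id
    return True

# usage: offer ten requests of random width and count blocking
blocked = sum(not random_fit(cid, random.choice([1, 2, 3]))
              for cid in range(10))
print("blocked:", blocked)
print("occupancy:", occupied)   # note the fragmentation random-fit induces
```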
At the RELENG 2014 Q&A, the question was asked, “What is your greatest concern?” and the response was “someone subverting our deployment pipeline”. That is the motivation for this paper. We explore what it means to subvert a pipeline and provide several different scenarios of subversion. We then focus on the issue of securing a pipeline. As a result, we provide an engineering process that is based on having trusted components mediate access to sensitive portions of the pipeline from other components, which can remain untrusted. Applying our process to a pipeline we constructed involving Chef, Jenkins, Docker, Github, and AWS, we find that some aspects of our process result in changes to the pipeline that are easy to make, whereas others are more difficult. Consequently, we have developed a design that hardens the pipeline, although it does not yet completely secure it.
Intro: Computer network defense has models for attacks, and for incidents comprising multiple attacks, after the fact. However, we lack an evidence-based model of the likelihood and intensity of attacks and incidents. Purpose: We propose a model of global capability advancement, the adversarial capability chain (ACC), to fit this need. The model enables cyber risk analysis to better understand the costs for an adversary to attack a system, which directly influences the cost to defend it. Method: The model is based on four historical studies of adversarial capabilities: the capability to exploit Windows XP, to exploit the Android API, to exploit Apache, and to administer compromised industrial control systems. Result: We propose the ACC with five phases: Discovery, Validation, Escalation, Democratization, and Ubiquity. We use the four case studies as examples of how the ACC can be applied and used to predict attack likelihood and intensity.
Cyber-attacks have evolved to become more sophisticated, employing combinations of attack methodologies with greater impact. For instance, Advanced Persistent Threats (APTs) employ a set of stealthy hacking processes running over a long period of time, making them much harder to detect. With this trend, big-data security analytics has attracted greater attention, since identifying such attacks requires large-scale data processing and analysis. In this paper, we present SEAS-MR (Security Event Aggregation System over MapReduce), which facilitates scalable security event aggregation for comprehensive situation analysis. The system provides three core functions: (i) periodic aggregation, (ii) on-demand aggregation, and (iii) query support for effective analysis. We describe our design and implementation of the system over MapReduce and high-level query languages, and report experimental results collected on a Hadoop cluster to evaluate performance and design impacts.
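The shape of the periodic-aggregation function can be illustrated with a plain-Python map/reduce pair (the real SEAS-MR runs on Hadoop MapReduce; the event fields and window size below are assumptions for illustration).

```python
# Illustrative map/reduce aggregation of security events by time window,
# source, and alert type. Event schema is hypothetical.
from collections import defaultdict

events = [  # (timestamp, src_ip, dst_ip, alert_type)
    (1000, "10.0.0.5", "10.0.1.2", "port_scan"),
    (1030, "10.0.0.5", "10.0.1.3", "port_scan"),
    (1100, "10.0.0.9", "10.0.1.2", "brute_force"),
]

def map_phase(event, window=60):
    ts, src, dst, alert = event
    # key each event by (time window, source, alert type), emit count 1
    return ((ts // window, src, alert), 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts

aggregated = reduce_phase(map_phase(e) for e in events)
for (window, src, alert), n in sorted(aggregated.items()):
    print(f"window={window} src={src} alert={alert} count={n}")
```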
Most existing approaches use malware threat analysis to examine which values are dangerous for information flow within suspicious modules of cloud applications (apps) on a host, rather than the risk posed by suspicious apps connected to the cloud computing server. Accordingly, this paper proposes a taint propagation analysis model incorporating a weighted spanning tree analysis scheme to track data with taint marking using several taint checking tools. In the proposed model, Android programs perform dynamic taint propagation to analyse the spread of, and risks posed by, suspicious apps connected to the cloud computing server. In determining the risk of taint propagation, risk and defence capability are computed for each taint path, assisting a defender in recognising attack results from network threats caused by malware infection and in estimating the losses of associated taint sources. Finally, a case of threat analysis of a typical cyber security attack is presented to demonstrate the proposed approach. Our approach verified the details of an attack sequence for malware infection by incorporating a finite state machine (FSM) to appropriately reflect real situations under various configuration settings and safeguard deployments. The experimental results show that the threat analysis model allows a defender to convert the spread of taint propagation into loss and to practically estimate the risk of a specific threat by using behavioural analysis with real malware infection.
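A rough sketch of the per-path risk idea: enumerate taint paths from a taint source through a module/server graph and multiply per-edge risk weights. The graph, weights, and combination rule below are hypothetical stand-ins, not the paper's weighted-spanning-tree scheme.

```python
# Toy taint-path enumeration with multiplicative edge risks (hypothetical).
edges = {  # module -> [(neighbour, risk weight)]
    "suspicious_app": [("local_storage", 0.3), ("cloud_server", 0.8)],
    "cloud_server":   [("user_db", 0.9)],
    "local_storage":  [],
    "user_db":        [],
}

def taint_paths(source, path=None, risk=1.0):
    """Enumerate taint paths from a taint source, multiplying edge risks."""
    path = (path or []) + [source]
    sinks = edges.get(source, [])
    if not sinks:                 # reached a sink: report the full path
        yield path, risk
    for nxt, w in sinks:
        yield from taint_paths(nxt, path, risk * w)

for p, r in taint_paths("suspicious_app"):
    print(" -> ".join(p), f"(path risk = {r:.2f})")
```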
Digital forensics is an area of forensic science that applies the scientific method to crime investigation. The thwarting of forensic evidence is known as anti-forensics, whose aim is ambiguous in that it can serve either good or bad ends. The aim of this project is to simulate digital crime scenarios and carry out forensic and anti-forensic analysis to enhance security. This project uses several forensic and anti-forensic tools and techniques to carry out this work. The data analyzed were obtained from the results of the simulation. The results reveal that although digital crime may be difficult to investigate, it can be accomplished with the help of sophisticated forensic and anti-forensic tools.
Certain crimes are difficult for individuals to commit alone; they are instead carefully organised by groups of associates and affiliates loosely connected to each other, with a single individual or a small group coordinating the overall actions. A common starting point in understanding the structural organisation of criminal groups is to identify the criminals and their associates. Situations arise in many criminal datasets where there is no direct connection among the criminals. In this paper, we investigate ties and community structure in crime data in order to understand the operations of both traditional and cyber criminals, as well as to predict the existence of organised criminal networks. Our contributions are twofold: we propose a bipartite network model for inferring hidden ties between actors who initiated an illegal interaction and objects affected by the interaction, and we then validate the method in two case studies on pharmaceutical crime and underground forum data using standard network algorithms for structural and community analysis. The vertex-level metrics and community analysis results obtained indicate the significance of our work in understanding the operations and structure of organised criminal networks that were not immediately obvious in the data. Identifying these groups and mapping their relationships to one another is essential for devising more effective disruption strategies in the future.
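The bipartite-projection step is standard and easy to show with networkx; the actors and objects below are hypothetical stand-ins for the paper's criminal-network data.

```python
# Inferring "hidden ties" by projecting an actor-object bipartite graph
# onto the actor set. Data are hypothetical.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
actors = ["a1", "a2", "a3"]
objects = ["forum_post_1", "drug_listing_7"]
B.add_nodes_from(actors, bipartite=0)
B.add_nodes_from(objects, bipartite=1)
B.add_edges_from([("a1", "forum_post_1"), ("a2", "forum_post_1"),
                  ("a2", "drug_listing_7"), ("a3", "drug_listing_7")])

# an inferred tie appears between two actors whenever they touched the
# same object, even though no direct actor-actor link exists in the data
G = bipartite.weighted_projected_graph(B, actors)
print(G.edges(data=True))        # e.g. ('a1', 'a2', {'weight': 1})
print(nx.degree_centrality(G))   # vertex-level metric of the kind used
```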
We present Cloud-COVER (Controls and Orderings for Vulnerabilities and ExposuRes), a cloud security threat modelling tool. Cloud-COVER takes input from a user about their deployment, requiring information about the data, instances, connections, their properties, and the importance of various security attributes. This input is used to analyse the relevant threats, and the way they propagate through the system. They are then presented to the user, ordered according to the security attributes they have prioritised, along with the best countermeasures to secure against the dangers listed.
Today's ICT networks are the economy's vital backbone. While their complexity continuously evolves, sophisticated and targeted cyber attacks such as Advanced Persistent Threats (APTs) become increasingly fatal for organizations. Numerous highly developed Intrusion Detection Systems (IDSs) promise to detect certain characteristics of APTs, but no mechanism is currently available that allows one to rate, compare, and evaluate them with respect to specific customer infrastructures. In this paper, we present BAESE, a system which enables vendor-independent and objective rating and comparison of IDSs based on small sets of customer network data.
Ensuring system survivability in the wake of advanced persistent threats is a major challenge the security community faces in protecting critical infrastructure. In this paper, we define metrics and models for the assessment of coordinated massive malware campaigns targeting critical infrastructure sectors. First, we develop an analytical model that allows us to capture the effect of neighborhood on different metrics (infection probability and contagion probability). Then, we assess the impact of putting operational but possibly infected nodes into quarantine. Finally, we study the implications of scanning nodes for early detection of malware (e.g., worms), accounting for false positives and false negatives. Evaluating our methodology on a small four-node topology, we find that malware infections can be effectively contained by using quarantine and appropriate scanning rates for soft impacts.
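A toy Monte Carlo version of the quarantine/scanning trade-off on a four-node topology is sketched below; the ring topology, infection rate, scan rate, and false-positive/negative rates are all hypothetical, and the paper's analytical model is not reproduced here.

```python
# Toy simulation: contagion over four nodes, with scanning (subject to
# false negatives/positives) feeding a quarantine set. Rates are assumed.
import random

NEIGHBOURS = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # assumed ring
BETA, SCAN_RATE, FN, FP = 0.3, 0.5, 0.1, 0.05

def run(steps=50, quarantine=True):
    infected, quarantined = {0}, set()
    for _ in range(steps):
        # contagion: infected, non-quarantined nodes infect neighbours
        for i in list(infected - quarantined):
            for j in NEIGHBOURS[i]:
                if j not in infected and random.random() < BETA:
                    infected.add(j)
        # scanning with false negatives/positives, then quarantine
        if quarantine:
            for n in NEIGHBOURS:
                if random.random() < SCAN_RATE:
                    detected = (n in infected and random.random() > FN) or \
                               (n not in infected and random.random() < FP)
                    if detected:
                        quarantined.add(n)
    return len(infected), len(quarantined)

print("with quarantine    (infected, quarantined):", run(quarantine=True))
print("without quarantine (infected, quarantined):", run(quarantine=False))
```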
Design-time analysis and verification of distributed real-time embedded systems necessitates modeling the time-varying performance of the network and comparing it to application requirements. Earlier work has shown how to build a system network model that abstracts away the network's physical medium and the protocols that govern its access and multiplexing. In this work we show how to apply a network medium channel access protocol, such as Time-Division Multiple Access (TDMA), to our network analysis methods, and we use the results to show that the abstracted model, without an explicit model of the protocol, is valid.
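One concrete quantity a TDMA model contributes to such analysis is the worst-case channel-access delay implied by the slot schedule. The sketch below computes it for an assumed frame layout; the slot length and ownership pattern are hypothetical, not taken from the paper.

```python
# Worst-case TDMA access delay from an assumed slot schedule.
SLOT_MS = 10
FRAME = ["A", "B", "C", "A"]            # slot ownership per frame (assumed)

def worst_case_access_delay(node):
    """Max time (ms) a node waits from message arrival to its next slot."""
    n = len(FRAME)
    owned = [i for i, owner in enumerate(FRAME) if owner == node]
    worst = 0
    for arrival_slot in range(n):
        # distance to the next owned slot, wrapping around the frame
        gap = min((s - arrival_slot) % n for s in owned)
        worst = max(worst, gap * SLOT_MS)
    return worst

for node in "ABC":
    print(node, worst_case_access_delay(node), "ms")
```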
Online fault diagnosis is a crucial task for industrial processes. Reconstruction-based fault diagnosis has drawn special attention as a good alternative to the traditional contribution plot. It identifies the fault cause by finding, from a set of candidates prepared from historical fault data, the specific fault subspace that can well eliminate alarming signals. In practice, however, the abnormality may result from the joint effects of multiple faults, which thus cannot be well corrected by any single fault subspace archived in the historical fault library. In the present work, an aggregative reconstruction-based fault diagnosis strategy is proposed to handle the case where multiple fault causes jointly contribute to the abnormal process behavior. First, fault subspaces are extracted from historical fault data in two different monitoring subspaces, where analysis of relative changes is used to enclose the major fault effects responsible for the different alarming monitoring statistics. Then, a fault subspace selection strategy is developed to analyze the combinatorial fault nature; it sorts and selects the informative fault subspaces most likely to be responsible for the concerned abnormalities. Finally, an aggregative fault subspace is calculated by combining the selected fault subspaces; it represents the joint effects of multiple faults and works as the final reconstruction model for online fault diagnosis. Theoretical support is provided and the related statistical characteristics are analyzed. The method's feasibility and performance are illustrated with simulated multiple faults using data from the Tennessee Eastman (TE) benchmark process.
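The core reconstruction step (single-fault case) can be sketched in a few lines of numpy: remove the component of a faulty sample that lies in a candidate fault subspace and check whether the alarm statistic clears. The fault direction and statistic below are assumptions; real implementations build the subspaces and statistics from PCA models of historical data.

```python
# Bare-bones reconstruction along a candidate fault subspace (single fault).
import numpy as np

def spe(x):                      # squared prediction error of a sample
    return float(x @ x)

def reconstruct(x, xi):
    """Remove the component of x lying in fault subspace xi (columns)."""
    proj = xi @ np.linalg.pinv(xi) @ x     # projection onto span(xi)
    return x - proj

rng = np.random.default_rng(0)
xi_1 = rng.normal(size=(5, 1))                     # hypothetical direction
x = 3.0 * xi_1[:, 0] + 0.1 * rng.normal(size=5)    # faulty sample

print("SPE before:", round(spe(x), 3))
print("SPE after :", round(spe(reconstruct(x, xi_1)), 3))
# The candidate whose reconstruction best clears the alarm is taken as the
# fault cause; the paper combines several such subspaces for joint faults.
```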
The integration of social networking concepts into the Internet of Things has led to the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way with respect to their owners, with the benefit of improving network scalability in information/service discovery. Within this scenario, we focus on the problem of understanding how the information provided by members of the social IoT has to be processed so as to build a reliable system on the basis of the behavior of the objects. We define two models for trustworthiness management, starting from the solutions proposed for P2P and social networks. In the subjective model, each node computes the trustworthiness of its friends on the basis of its own experience and of the opinions of the friends it has in common with the potential service providers. In the objective model, the information about each node is distributed and stored in a distributed hash table structure, so that any node can make use of the same information. Simulations show how the proposed models can effectively isolate almost all malicious nodes in the network, at the expense of an increase in network traffic for feedback exchange.
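To convey the flavour of the subjective model, the sketch below blends a node's own transaction feedback with credibility-weighted opinions from common friends. The weights, feedback values, and the linear combination rule are hypothetical, not the paper's formulas.

```python
# Toy subjective trust computation (hypothetical weights and data).
own_feedback = {"obj7": [1, 1, 0, 1]}               # own past transactions
friend_opinions = {"obj7": {"f1": 0.9, "f2": 0.6}}  # common friends' trust
friend_credibility = {"f1": 0.8, "f2": 0.5}

def subjective_trust(provider, alpha=0.6):
    # direct component: success rate of own transactions with the provider
    direct = sum(own_feedback[provider]) / len(own_feedback[provider])
    # indirect component: credibility-weighted average of friends' opinions
    ops = friend_opinions[provider]
    indirect = sum(friend_credibility[f] * t for f, t in ops.items()) \
               / sum(friend_credibility[f] for f in ops)
    # alpha weighs own experience against friends' recommendations
    return alpha * direct + (1 - alpha) * indirect

print("trust in obj7:", round(subjective_trust("obj7"), 3))
```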
Application domains in which early performance evaluation is needed are becoming more complex. In addition to traditional measures of complexity due, for example, to the number of components, their interactions, and complicated control and coordination schemes, emerging applications may require adaptive response and reconfiguration under the impact of externally observable (security) parameters. In this paper we introduce an approach for effective modeling and analysis of performance and security tradeoffs. The approach identifies a suitable allocation of resources that meets performance requirements while maximizing measurable security effects. We demonstrate this approach through the analysis of the performance sensitivity of a Border Inspection Management System (BIMS) under changing security mechanisms (e.g., biometric system parameters for passenger identification). The final result is a model-based approach that allows us to make decisions about BIMS performance and security mechanisms on the basis of traveler arrival rates and traveler identification security guarantees. We describe the experience gained when applying this approach to the daily flight arrival schedule of a real airport.
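The essence of the trade-off can be seen with a toy queueing calculation: stronger identification checks lengthen service times and therefore traveler delay. The service rates per security level below are hypothetical, not BIMS parameters, and an M/M/1 queue stands in for the paper's performance model.

```python
# Toy M/M/1 view of the performance/security trade-off (assumed rates).
ARRIVAL = 1.5                       # travelers per minute
LEVELS = {"basic": 2.5, "biometric": 2.0, "biometric+interview": 1.7}

for level, mu in LEVELS.items():
    # M/M/1: mean time in system = 1 / (mu - lambda), stable only if mu > lambda
    if mu <= ARRIVAL:
        print(f"{level}: unstable (queue grows without bound)")
    else:
        print(f"{level}: mean time in system = {1 / (mu - ARRIVAL):.2f} min")
```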
Cyber security is an important issue in cloud computing, and the insider threat is an increasingly important, and considerably more complex, part of it. Yet to date there is no equivalent of a vulnerability scanner for the insider threat. We survey and discuss the history of research on insider threat analysis, which suggests that system dynamics is the best method to mitigate insider threats arising from people, processes, and technology. In this paper, we present a system dynamics method to model the insider threat, and we offer conclusions and suggestions for future research for those interested in the insider threat issue.
Formal methods, models, and tools for social big data analytics are largely limited to graph-theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal, and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the Social Data Analytics Tool (SODATO) that realizes the conceptual model in software and provisions social data analysis based on the conceptual and formal models. Fourth and last, based on the formal model and sentiment analysis of text, we present a method for profiling artifacts and actors and apply this technique to big social data collected from the Facebook page of the fast fashion company H&M.
A distributed cyber control system comprises various types of assets, including sensors, intrusion detection systems, scanners, controllers, and actuators. The modeling and analysis of these components usually require multi-disciplinary approaches. This paper presents a modeling and dynamic analysis of a distributed cyber control system for situational awareness by taking advantage of control theory and time Petri nets. Linear time-invariant systems are used to model the target system, attacks, asset influences, and an anomaly-based intrusion detection system. Time Petri nets are used to model the impact and timing relationships of attacks, vulnerability, and recovery at every node. To characterize those distributed control systems that are perfectly attackable, algebraic and topological attackability conditions are derived. Numerical evaluation is performed to determine the impact of attacks on the distributed control system.
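One algebraic ingredient that typically appears in such attackability conditions is whether the attack input can steer the LTI state, which can be checked via the rank of a controllability-style matrix. The dynamics and attack-entry matrix below are hypothetical, and this sketch is only one plausible building block, not the paper's derived conditions.

```python
# Rank test: can an attack signal entering through E steer all states of
# x+ = A x + E a? (Hypothetical A and E.)
import numpy as np

A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
E = np.array([[1.0], [0.0], [0.0]])   # attack enters only at node 1

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ E for k in range(n)])
print("rank:", np.linalg.matrix_rank(ctrb), "of", n)
# full rank: the attack signal can drive every state in this toy model
```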
Wide-area monitoring and control (WAMC) systems are the next-generation operational-management systems for electric power systems. The main purpose of such systems is to provide high-resolution, real-time situational awareness in order to improve the operation of the power system by detecting and responding to fast-evolving phenomena. From an information and communication technology (ICT) perspective, the nonfunctional qualities of these systems are increasingly becoming important, and there is a need to evaluate and analyze the factors that impact these nonfunctional qualities. Enterprise architecture methods, which capture properties of ICT systems in architecture models and use these models as a basis for analysis and decision making, are a promising approach to meet these challenges. This paper presents a quantitative architecture analysis method for the study of WAMC ICT architectures, focusing primarily on the interoperability and cybersecurity aspects.
To address the shortcomings of security management systems, this work proposes a methodology based on the agent paradigm for cybersecurity risk management. In this approach, a system is decomposed into agents that may be used to attain goals established by attackers; threats to the business are realised through attackers' goals in service and deployment agents. To support proactive behavior, sensors linked to security mechanisms are analysed in accordance with a model for Situational Awareness (SA) [4].
Moving Target Defense (MTD) changes the attack surface of a system to confuse intruders and thwart attacks. Various MTD techniques have been developed to enhance the security of networked systems, but the effectiveness of these techniques has not been well assessed. Security models (e.g., Attack Graphs (AGs)) provide formal methods of assessing security, but modeling MTD techniques within security models has not been studied. In this paper, we incorporate MTD techniques into security modeling and analysis using a scalable security model, namely Hierarchical Attack Representation Models (HARMs), to assess the effectiveness of the MTD techniques. In addition, we use importance measures (IMs) for scalable security analysis and for deploying the MTD techniques effectively. The performance comparison between the HARM and the AG is given. We also compare the performance of the IMs against an exhaustive search in simulations.
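As a small sketch of the idea's upper layer, one can rank hosts in an attacker-reachability graph by an importance measure and deploy an MTD technique on the highest-ranked host first. The topology below is hypothetical, betweenness centrality stands in for the paper's IMs, and a full HARM would attach per-host vulnerability trees as a lower layer.

```python
# Rank hosts by an importance measure to prioritise MTD deployment.
import networkx as nx

upper = nx.DiGraph()   # attacker reachability between hosts (assumed)
upper.add_edges_from([("internet", "web"), ("web", "app"),
                      ("web", "mail"), ("app", "db"), ("mail", "db")])

im = nx.betweenness_centrality(upper)
ranked = sorted(im, key=im.get, reverse=True)
print("deploy MTD (e.g., address shuffling) on:", ranked[0])
```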
Port hopping is a typical moving target defense that constantly changes a service's port number to thwart reconnaissance attacks. It is effective in hiding service identities and confusing potential attackers, but it remains unclear how effective port hopping is and under what circumstances it is a viable proactive defense, because existing works are limited, usually discussing only a few parameters and giving empirical studies. This paper introduces an urn model and quantifies the likelihood of attacker success in terms of the port pool size, number of probes, number of vulnerable services, and hopping frequency. Theoretical analysis shows that port hopping is an effective and promising proactive defense technology for thwarting network attacks.
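The urn-model flavour can be conveyed with the two standard sampling cases: without hopping, the attacker probes distinct ports of a fixed layout (sampling without replacement); with frequent hopping, each probe is effectively an independent draw. These textbook formulas illustrate the direction of the effect and may differ in detail from the paper's model.

```python
# Probability that an attacker probing k of N ports finds one of v
# vulnerable services, with and without hopping (standard formulas).
from math import comb

def p_static(N, v, k):
    """No hopping: sampling without replacement from a fixed layout."""
    return 1 - comb(N - v, k) / comb(N, k)

def p_hopping(N, v, k):
    """Ports re-randomized between probes: independent draws."""
    return 1 - (1 - v / N) ** k

N, v, k = 1024, 4, 100
print(f"static : {p_static(N, v, k):.3f}")
print(f"hopping: {p_hopping(N, v, k):.3f}")  # lower: re-probing wastes effort
```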
Dependence on web applications has increased rapidly in recent times for social communication, health, financial transactions, and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and cause their failure. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) are the most dangerous security vulnerabilities exploited in popular web applications, e.g., eBay, Google, Facebook, and Twitter. Research on defensive programming, vulnerability detection, and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines for developing secure applications, but developers often do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect applications from attack during their execution in the production environment. Difficulties remain in accurately detecting SQLI and XSS vulnerabilities during the coding phase of the software development life cycle. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static-analysis-based approaches for detecting SQL Injection and cross-site scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify weaknesses in source code before they can be exploited in the production environment. This paper should help researchers identify future directions for securing legacy web applications in the early phases of the software development life cycle.
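For readers unfamiliar with the weakness class, the textbook contrast below shows the SQLI pattern that static analyses flag (tainted strings concatenated into a query) and the parameterized-query fix; the table and payload are illustrative only.

```python
# Classic SQLI weakness vs. the parameterized-query fix (sqlite3 demo).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
name, pw = "alice", "' OR '1'='1"      # a classic injection payload

# VULNERABLE: attacker-controlled strings concatenated into the query
rows = conn.execute(
    f"SELECT * FROM users WHERE name='{name}' AND pw='{pw}'").fetchall()
print("vulnerable query returned:", rows)          # login bypassed

# SAFE: placeholders keep the payload as data, never as SQL
rows = conn.execute(
    "SELECT * FROM users WHERE name=? AND pw=?", (name, pw)).fetchall()
print("parameterized query returned:", rows)       # empty list
```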
Manual monitoring is no longer able to match the rapid growth of cybercrime; there is a great need to develop a new spatial analysis technology that allows emergency events to be rapidly and accurately located in the real environment and, furthermore, to establish a correlative analysis model for cybercrime prevention strategy. On the other hand, geographic information systems have changed substantially in data structure, coordinate system, and analysis model due to the “uncertainty and hyper-dimension” characteristics of network objects and behavior. In this paper, the spatial rules of typical cybercrime are explored on the basis of GIS with Internet searching and IP tracking technology: (1) set up a spatial database through IP searching based on criminal evidence; (2) extend GIS data structures and spatial models, adding a network dimension and virtual attribution to realize a dynamic connection between cyber and real space; (3) design a cybercrime monitoring and prevention system to discover cyberspace logics based on spatial analysis.