Bibliography
Multi- and many-core systems are increasingly prevalent in embedded systems. Additionally, isolation requirements between different partitions and criticalities are gaining in importance. This difficult combination is not well addressed by current software systems. Parallel systems require consistency guarantees on shared data structures, often provided by locks that use predictable resource-sharing protocols. However, as the number of cores increases, even a single shared cache-line (e.g., for the lock) can cause significant interference. In this paper, we present a clean-slate design of the SPeCK kernel, the next generation of our COMPOSITE OS, that attempts to provide a strong version of scalable predictability: predictability bounds established on a single core remain constant as the number of cores increases. Results show that, despite using a non-preemptive kernel, SPeCK has strong scalable predictability, low average-case overheads, and better response times than a state-of-the-art preemptive system.
Governments need reliable data on crime in order to both devise adequate policies and allocate the correct revenues so that the measures are cost-effective, i.e., the money spent on prevention, detection, and handling of security incidents is balanced by a decrease in losses from offences. An analysis of current government action in cyber security shows that the availability of multiple contrasting figures on the impact of cyber-attacks is holding back the adoption of policies for cyber space, as their cost-effectiveness cannot be clearly assessed. The most relevant literature on the topic is reviewed to highlight the research gaps and to determine the future research issues that need addressing to provide a solid ground for future legislative and regulatory actions at national and international levels.
Flooding attacks are well-known security threats that can lead to a denial of service (DoS) in computer networks. These attacks consist of excessive traffic generation by which an attacker aims to disrupt or interrupt services in the network. Their impact is not limited to a few nodes; it can extend to the whole network. Many routing protocols are vulnerable to these attacks, especially those using reactive route discovery, such as AODV. In this paper, we propose a statistical approach to defend against RREQ flooding attacks in MANETs. Our detection mechanism can be applied to AODV-based ad hoc networks. Simulation results show that these attacks can be detected with a low rate of false alerts.
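To make the statistical detection idea concrete, here is a minimal sketch of a per-node RREQ-rate monitor. The mean-plus-k-sigma rule, window size, and sensitivity parameter are illustrative assumptions, not the paper's exact statistic.

```python
from collections import defaultdict, deque
import statistics

class RreqFloodDetector:
    """Toy RREQ flood detector: flag a node whose RREQ count in the
    current interval exceeds its recent mean by k standard deviations."""

    def __init__(self, window=10, k=3.0):
        self.k = k                    # sensitivity of the threshold (assumed)
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, node_id, rreq_count):
        """Record one interval's RREQ count and report suspected flooding."""
        hist = self.history[node_id]
        suspicious = False
        if len(hist) >= 2:
            mu = statistics.mean(hist)
            sigma = statistics.pstdev(hist)
            suspicious = sigma > 0 and rreq_count > mu + self.k * sigma
        hist.append(rreq_count)
        return suspicious

detector = RreqFloodDetector()
for count in [3, 4, 2, 3, 5, 40]:        # final interval: a burst of RREQs
    alert = detector.observe("node-7", count)
print("flood alert:", alert)             # True for the burst
```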
Privacy analysis is essential in modern society. Data privacy preservation for access control and guaranteed service in wireless sensor networks are important parts of it. In program verification, we consider not only such safety and liveness properties but also security policies like noninterference and observational determinism, which have been proposed as hyperproperties. Fairness is widely applied in verification for concurrent systems, wireless sensor networks, and embedded systems. This paper studies verification and analysis for proving security-relevant properties and hyperproperties by proposing deductive proof rules under fairness requirements (constraints).
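For reference, one standard textbook formulation of observational determinism (the paper's own proof rules are not reproduced here) is:

```latex
% Observational determinism as a hyperproperty: any two executions whose
% initial states agree on low (public) variables must produce
% low-equivalent traces, so public behavior does not depend on secrets.
\[
  \forall\, t, t' \in \mathrm{Traces}(S):\quad
  t_0 =_L t'_0 \;\Longrightarrow\; t \approx_L t'
\]
```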
With the growth of the Internet, web applications are becoming very popular in user communities. However, the presence of security vulnerabilities in the source code of these applications is rapidly raising the cybercrime rate. These vulnerabilities must be detected and mitigated before they are exploited in the execution environment. Recently, the Open Web Application Security Project (OWASP) and the Common Weakness Enumeration (CWE) reported Cross-Site Scripting (XSS) as one of the most serious vulnerabilities in web applications. Though many vulnerability detection approaches have been proposed in the past, they suffer from false positive and false negative results. This paper proposes a context-sensitive approach based on static taint analysis and pattern matching techniques to detect and mitigate XSS vulnerabilities in the source code of web applications. The proposed approach has been implemented in a prototype tool and evaluated on a public data set of 9408 samples. Experimental results show that the tool outperforms popular existing open-source tools in the detection of XSS vulnerabilities.
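As a rough illustration of taint-style source/sink matching for XSS, here is a toy line-based scanner. Real context-sensitive static analysis tracks data flow through the program's AST; the PHP-flavored patterns and the scanned snippet below are invented for demonstration.

```python
import re

SOURCES = re.compile(r"\$_(GET|POST|REQUEST|COOKIE)\[")   # user-controlled input
SINKS = re.compile(r"\b(echo|print)\b")                   # output statements
SANITIZERS = re.compile(r"\b(htmlspecialchars|htmlentities)\s*\(")

def scan(lines):
    """Flag lines where a variable tainted by user input reaches a sink."""
    tainted = set()                       # names assigned from unsanitized input
    findings = []
    for no, line in enumerate(lines, 1):
        assign = re.match(r"\s*(\$\w+)\s*=", line)
        if assign and SOURCES.search(line) and not SANITIZERS.search(line):
            tainted.add(assign.group(1))
        if SINKS.search(line):
            for var in tainted:
                if var in line and not SANITIZERS.search(line):
                    findings.append((no, var))
    return findings

code = [
    '$name = $_GET["name"];',
    'echo "Hello " . $name;',             # reflected XSS: tainted sink
]
print(scan(code))                          # [(2, '$name')]
```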
Authorities like the Federal Financial Institutions Examination Council in the US and the European Central Bank in Europe have stepped up their expected minimum security requirements for financial institutions, including the requirements for risk analysis. In a previous article, we introduced a visual tool and a systematic way to estimate the probability of a successful incident response process, which we called an incident response tree (IRT). In this article, we present several scenarios using the IRT which could be used in a risk analysis of online financial services concerning fraud prevention. By minimizing the problem of underreporting, we are able to calculate the conditional probabilities of prevention, detection, and response in the incident response process of a financial institution. We also introduce a quantitative model for estimating expected loss from fraud, and conditional fraud value at risk, which enables a direct comparison of risk among online banking channels in a multi-channel environment.
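A minimal numerical sketch of the expected-loss idea behind an incident response tree follows. All probabilities, loss figures, and the 95% quantile are invented for demonstration, not the article's data or exact model.

```python
import numpy as np

p_prevent = 0.60                  # fraud attempt blocked outright (assumed)
p_detect = 0.75                   # detected, given not prevented (assumed)
p_respond = 0.80                  # loss recovered, given detected (assumed)

# Probability a single attempt ends in an unrecovered loss
p_loss = (1 - p_prevent) * (1 - p_detect * p_respond)
print(f"P(attempt ends in a loss) = {p_loss:.3f}")

rng = np.random.default_rng(0)
attempts = rng.poisson(lam=200, size=100_000)   # simulated attempts per year
loss_per_fraud = 1_500.0                        # average loss per fraud (EUR)
annual_loss = attempts * p_loss * loss_per_fraud

print(f"expected annual loss = {annual_loss.mean():,.0f}")
print(f"95% fraud value-at-risk = {np.quantile(annual_loss, 0.95):,.0f}")
```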
Vulnerabilities usually represent the risk level of software, and forecasting them is of high value for evaluating software security. Current research mainly focuses on predicting the number of vulnerabilities or their time of occurrence; however, to the best of our knowledge, no prior work has focused on predicting vulnerabilities' severity, which we consider an important aspect of vulnerabilities and software security. To fill this gap, we borrow the grey model GM(1,1) from grey system theory to forecast the severity of vulnerabilities. The experiment is carried out on real data collected from CVE and demonstrates the feasibility of our prediction method.
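The GM(1,1) model itself is standard; below is a minimal sketch of the textbook formulation (accumulated generating operation, least-squares fit of the grey coefficients, inverse accumulation). The input series is made up, not the paper's CVE severity data.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Forecast `steps` future values of a positive series with GM(1,1)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # developing/grey coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)         # inverse AGO
    return x0_hat[-steps:]

severity = [6.8, 7.1, 6.5, 7.4, 7.0]              # e.g. mean CVSS per period
print(gm11_forecast(severity, steps=2))
```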
Biomimetic flapping wing vehicles have attracted recent interest because of their numerous potential military and civilian applications. In this paper we describe the design of a multi-agent adaptive controller for such a vehicle. This controller is responsible for estimating the vehicle pose (position and orientation) and then generating the four parameters needed for split-cycle control of wing movements to correct pose errors. These parameters are produced via a subsumption architecture rule base. The control strategy is fault tolerant: using an online learning process, an agent continuously monitors the vehicle's behavior and initiates diagnostics if the behavior has degraded. This agent can then autonomously adapt the rule base if necessary. Each rule base is constructed using a combination of extrinsic and intrinsic evolution. Details on the vehicle, the multi-agent system architecture, agent task scheduling, rule base design, and vehicle control are provided.
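To illustrate the subsumption pattern in general terms, here is a minimal sketch in which higher-priority behavior layers suppress lower ones. The behaviors, pose fields, and gains are invented; the paper evolves its rule base rather than hand-coding it.

```python
def correct_attitude(pose):
    """High-priority layer: act only when roll error is large."""
    if abs(pose["roll_err"]) > 0.2:
        return {"wing_bias": -0.5 * pose["roll_err"]}
    return None                                   # defer to lower layers

def hold_altitude(pose):
    if abs(pose["alt_err"]) > 0.1:
        return {"cycle_freq": 1.0 + 0.3 * pose["alt_err"]}
    return None

def cruise(pose):
    return {"cycle_freq": 1.0, "wing_bias": 0.0}  # lowest-priority default

LAYERS = [correct_attitude, hold_altitude, cruise]  # priority order

def control(pose):
    for behavior in LAYERS:                       # first active layer wins,
        command = behavior(pose)                  # subsuming those below it
        if command is not None:
            return command

print(control({"roll_err": 0.4, "alt_err": 0.0}))  # attitude layer subsumes
```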
In this paper, we focus on energy management of distributed generators (DGs) and energy storage systems (ESSs) in microgrids (MGs), considering uncertainties in renewable energy and load demand. The MG energy management problem is formulated as a two-stage stochastic programming model. The model is then transformed into a mixed-integer quadratic programming problem by using discrete stochastic scenarios to approximate the continuous random variables. A scenario-generation approach based on a time-homogeneous Markov chain model is proposed to generate simulated time series of renewable energy generation and load demand. Finally, the proposed stochastic programming model is tested on a typical LV network and solved with the MATLAB optimization toolbox. The simulation results show that the proposed model obtains more robust scheduling solutions and lower operating costs than deterministic optimization methods.
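A minimal sketch of time-homogeneous Markov-chain scenario generation is shown below. The three discretized output states and the transition matrix are illustrative stand-ins for values that would be estimated from historical data.

```python
import numpy as np

rng = np.random.default_rng(1)

states = np.array([0.1, 0.5, 0.9])        # discretized output levels (p.u.)
P = np.array([[0.70, 0.25, 0.05],         # row-stochastic transition matrix,
              [0.20, 0.60, 0.20],         # assumed; normally fit from data
              [0.05, 0.25, 0.70]])

def scenario(horizon=24, start=1):
    """Sample one simulated daily time series by walking the chain."""
    s, path = start, []
    for _ in range(horizon):
        s = rng.choice(len(states), p=P[s])
        path.append(states[s])
    return np.array(path)

scenarios = np.stack([scenario() for _ in range(5)])   # 5 daily profiles
print(scenarios.round(2))
```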
Language vector space models (VSMs) have recently proven to be effective across a variety of tasks. In VSMs, each word in a corpus is represented as a real-valued vector. These vectors can be used as features in many applications in machine learning and natural language processing. In this paper, we study the effect of vector space representations in cyber security. In particular, we consider a passive traffic analysis attack (website fingerprinting) that threatens users' navigation privacy on the web. Internet users (such as online activists) may use anonymous communication to hide the destinations of the web pages they access, for reasons such as evading oppressive governments. Traditional website fingerprinting studies collect packets from the users' network and extract features that are used by machine learning techniques to reveal the destination of certain web pages. In this work, we propose the packet-to-vector (P2V) approach, in which we model the website fingerprinting attack using word vector representations. We show how the suggested model outperforms previous website fingerprinting approaches.
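A rough sketch of the packet-to-vector idea: treat each (direction, size) packet as a token, learn embeddings over traces, and pool them into a per-trace feature vector. The tokenization scheme and hyperparameters below are assumptions, not the paper's exact P2V configuration.

```python
from gensim.models import Word2Vec
import numpy as np

# Each page load is one "sentence" of packet tokens (sign = direction,
# number = packet size); the traces here are fabricated examples.
traces = [
    ["+1500", "-64", "+1500", "-1500", "+64"],
    ["-64", "+512", "+512", "-1500", "+64"],
]

model = Word2Vec(sentences=traces, vector_size=32, window=3,
                 min_count=1, sg=1, epochs=50, seed=0)

def trace_vector(trace):
    """Average packet embeddings into one fingerprinting feature vector."""
    return np.mean([model.wv[tok] for tok in trace], axis=0)

features = np.stack([trace_vector(t) for t in traces])
print(features.shape)              # (2, 32) -> input to a classifier
```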
We consider a class of robust optimization problems that we call “robust-to-dynamics optimization” (RDO). The input to an RDO problem is twofold: (i) a mathematical program (e.g., an LP, SDP, IP, etc.), and (ii) a dynamical system (e.g., a linear, nonlinear, discrete, or continuous dynamics). The objective is to maximize over the set of initial conditions that forever remain feasible under the dynamics. The focus of this paper is on the case where the optimization problem is a linear program and the dynamics are linear. We establish some structural properties of the feasible set and prove that if the linear system is asymptotically stable, then the RDO problem can be solved in polynomial time. We also outline a semidefinite programming based algorithm for providing upper bounds on robust-to-dynamics linear programs.
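The paper proves polynomial-time solvability for stable linear dynamics; the finite-horizon truncation below is only a simplified illustration of the constraint structure, not the paper's algorithm. The polytope, dynamics, objective, and horizon N=20 are invented examples.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize c^T x over points x whose trajectory x, Dx, D^2 x, ... stays
# inside the polytope {x : Gx <= h}; stability of D justifies truncation.
G = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
h = np.ones(4)                        # feasible set: the unit box
D = np.array([[0.6, 0.2],             # stable linear dynamics (|eigs| < 1)
              [-0.1, 0.5]])
c = np.array([1.0, 1.0])

N = 20                                # truncation horizon (assumed)
Dk = np.eye(2)
A_ub, b_ub = [], []
for _ in range(N + 1):                # constraints G D^k x <= h, k = 0..N
    A_ub.append(G @ Dk)
    b_ub.append(h)
    Dk = D @ Dk

res = linprog(-c, A_ub=np.vstack(A_ub), b_ub=np.hstack(b_ub),
              bounds=[(None, None)] * 2)   # linprog minimizes, so negate c
print(res.x, -res.fun)
```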
Modern approaches to boulevard (road) networks center on efficiency in safe routing. Safe routing must follow low-risk cities. Routing troubles are a perennial problem confronting people day in and day out. The common goal of everyone using a boulevard is to reach the desired point in the fastest manner, which involves balancing multiple expected and unexpected factors such as time, distance, security, and cost. Travelling is an almost inherent aspect of everyone's daily routine, and with the gigantic and complex road network of a modern city or country, finding a low-risk community for traversing the distance is not easy to achieve. This paper uses code-based community detection on the boulevard network and a fuzzy technique for identifying low-risk communities.
The limited battery lifetime and rapidly increasing functionality of portable multimedia devices demand energy-efficient designs. Many of the filters employed in these devices are based on Gaussian smoothing, which is slow and severely affects performance. In this paper, we propose a novel energy-efficient approximate 2D Gaussian smoothing filter (2D-GSF) architecture that exploits "nearest pixel approximation" and rounds off Gaussian kernel coefficients. The proposed architecture significantly improves Speed-Power-Area-Accuracy (SPAA) metrics in designing energy-efficient filters. The efficacy of the proposed approximate 2D-GSF is demonstrated on a real application, edge detection. The simulation results show 72%, 79% and 76% reductions in area, power and delay, respectively, with an acceptable 0.4 dB loss in PSNR compared to the well-known approximate 2D-GSF.
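The coefficient-rounding idea can be sketched in software: snap Gaussian kernel weights to the nearest power of two so that hardware multiplications reduce to shifts. The kernel size and sigma are illustrative, and the paper's "nearest pixel approximation" stage is not modeled here.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def round_pow2(k):
    """Round each coefficient to the nearest power of two, then renormalize."""
    exps = np.round(np.log2(k))
    approx = 2.0 ** exps                  # shift-friendly coefficients
    return approx / approx.sum()          # restore unity gain

exact = gaussian_kernel()
approx = round_pow2(exact)

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
err = convolve2d(img, exact, "same") - convolve2d(img, approx, "same")
psnr = 10 * np.log10(255**2 / np.mean(err**2))
print(f"PSNR of approximation vs. exact filter: {psnr:.1f} dB")
```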
The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance, including medicine and public policy. The results of these analyses can be considered a public good which benefits data contributors as well as individuals who are not making their data available. At the same time, the release of personal information carries perceived and actual privacy risks to the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether they want to contribute at all. From the analyst's perspective, we investigate to what degree the analyst has flexibility to set requirements for data precision so that individuals are still willing to contribute to the project and the quality of the estimation improves. We study this tradeoff for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.
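A toy numerical illustration of the precision-floor tradeoff follows: raising the minimum allowed precision makes each report more informative but drives privacy-sensitive individuals to opt out. All functional forms and parameters are invented, not the paper's model or equilibrium analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
privacy_cost = rng.uniform(0.0, 1.0, n)       # heterogeneous sensitivity (assumed)

def estimator_variance(floor):
    """Variance of the analyst's sample mean under a precision floor."""
    opt_in = privacy_cost <= 1.0 - floor      # assumed opt-in rule
    m = opt_in.sum()
    if m == 0:
        return np.inf
    report_var = 1.0 / max(floor, 1e-6)       # higher floor -> less noise
    return report_var / m                     # averaging over m contributors

for floor in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"floor={floor:.1f}  Var(mean) = {estimator_variance(floor):.4f}")
```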
This paper presents a model to evaluate and select security countermeasures from a pool of candidates. The model performs industrial evaluation and simulations of the financial and technical impact associated with security countermeasures. The financial impact approach uses the Return On Response Investment (RORI) index to compare the expected impact of the attack when no response is enacted against the impact after applying security countermeasures. The technical impact approach evaluates the protection level against a threat in terms of confidentiality, integrity, and availability. We provide a use case on malware attacks that shows the applicability of our model in selecting the best countermeasure against an Advanced Persistent Threat.
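A sketch of countermeasure ranking with the RORI index in one of its published forms, RORI = ((ALE x RM) - ARC) / (ARC + AIV) x 100, where ALE is the annual loss expectancy, RM the risk mitigation level, ARC the annual response cost, and AIV the annual infrastructure value; consult the paper for the exact definition. The candidate figures are invented for illustration.

```python
def rori(ale, rm, arc, aiv):
    """RORI index: mitigated loss net of response cost, relative to outlay."""
    return 100.0 * (ale * rm - arc) / (arc + aiv)

AIV = 400_000.0                        # infrastructure value (EUR, assumed)
ALE = 250_000.0                        # expected yearly loss with no response

candidates = {
    "no response":       dict(rm=0.00, arc=0.0),
    "patch + monitor":   dict(rm=0.65, arc=30_000.0),
    "network isolation": dict(rm=0.85, arc=90_000.0),
}

ranked = sorted(candidates.items(),
                key=lambda kv: rori(ALE, kv[1]["rm"], kv[1]["arc"], AIV),
                reverse=True)
for name, p in ranked:
    print(f"{name:18s} RORI = {rori(ALE, p['rm'], p['arc'], AIV):6.2f}")
```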
The rate at which cyber-attacks are increasing globally paints an alarming picture. The main dynamics of such attacks can be studied in terms of the actions of attackers and defenders in a cyber-security game, yet little research has investigated such interactions. In this paper we use behavioral game theory to investigate the role of certain actions taken by attackers and defenders in a simulated cyber-attack scenario of defacing a website. We choose a Reinforcement Learning (RL) model to represent a simulated attacker and defender in a 2×4 cyber-security game where each of the 2 players could take up to 4 actions. Pairs of model participants were computationally simulated across 1000 simulations, where each pair played at most 30 rounds in the game. The goal of the attacker was to deface the website, and the goal of the defender was to prevent the attacker from doing so. Our results show that the actions taken by both attackers and defenders are a function of the attention these players pay to their recently obtained outcomes. In particular, if the attacker pays more attention to recent outcomes, they are more likely to perform attack actions. We discuss the implications of our results for the evolution of dynamics between attackers and defenders in cyber-security games.
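A minimal sketch of such simulated players: each role keeps action values updated with a recency (learning-rate) parameter and chooses actions via softmax. The payoff rule, attention parameters, and temperature are invented; the paper's 2×4 game matrix is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, tau=0.25):
    e = np.exp((q - q.max()) / tau)
    return e / e.sum()

def play(alpha_att=0.8, alpha_def=0.2, rounds=30):
    """Simulate one attacker/defender pair; alpha = attention to recency."""
    q_att, q_def = np.zeros(4), np.zeros(4)
    for _ in range(rounds):
        a = rng.choice(4, p=softmax(q_att))
        d = rng.choice(4, p=softmax(q_def))
        r_att = 1.0 if a != d else -1.0       # attack lands if not covered
        # High alpha weights the most recent outcome more heavily (recency)
        q_att[a] += alpha_att * (r_att - q_att[a])
        q_def[d] += alpha_def * (-r_att - q_def[d])
    return q_att, q_def

q_att, q_def = play()
print("attacker action values:", q_att.round(2))
print("defender action values:", q_def.round(2))
```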
A novel approach is developed for analyzing power system vulnerability related to extraordinary events. Vulnerability analyses are necessary to identify barriers that prevent such events and as a basis for emergency preparedness. Identifying cause-and-effect relationships to reveal vulnerabilities related to extraordinary events is a complex and difficult task. In the proposed approach, the analysis starts by identifying the critical consequences. The critical contingencies and operating states are then identified, together with the external threats and causes that may result in such severe consequences. This is the opposite of traditional risk and vulnerability analysis, which starts by analyzing threats and what can happen as a chain of events. The vulnerability analysis methodology is tested and demonstrated on real systems.
Today, ICT networks are the economy's vital backbone. While their complexity continuously evolves, sophisticated and targeted cyber attacks such as Advanced Persistent Threats (APTs) become increasingly damaging for organizations. Numerous highly developed Intrusion Detection Systems (IDSs) promise to detect certain characteristics of APTs, but no mechanism currently exists to rate, compare, and evaluate them with respect to specific customer infrastructures. In this paper, we present BAESE, a system which enables vendor-independent and objective rating and comparison of IDSs based on small sets of customer network data.