Biblio
Malicious domain names are constantly changing, and the time lag between their creation and detection makes it challenging to keep blacklists up to date. Even a website that is itself clean may still be used as a pivot point to redirect users to malicious destinations. To address this issue, this paper demonstrates how to use linkage analysis and open-source threat intelligence to visualize the relationships among malicious domain names while verifying their categories, e.g., drive-by download, unwanted software, etc. Built around a graph-based model that presents the inter-connectivity of malicious domain names in a dynamic fashion, the proposed approach proved helpful for revealing the group patterns of different kinds of malicious domain names. When applied to a blacklisted set of URLs in a real enterprise network, it was more effective than traditional methods and yielded a clearer view of the common patterns in the data.
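As a rough illustration of the linkage-analysis idea (not the paper's actual pipeline), related domains can be modeled as a graph in which blacklisted domains are nodes and shared infrastructure observed in threat-intelligence feeds adds edges; connected components then surface candidate groups. The record format, field names, and linkage criterion below are assumptions made for this sketch.

    # Minimal sketch: group blacklisted domains by shared infrastructure.
    # The intelligence records and their fields are hypothetical.
    import networkx as nx

    intel_records = [
        {"domain": "evil-a.example", "resolved_ip": "203.0.113.7", "category": "drive-by download"},
        {"domain": "evil-b.example", "resolved_ip": "203.0.113.7", "category": "unwanted software"},
        {"domain": "evil-c.example", "resolved_ip": "198.51.100.9", "category": "drive-by download"},
    ]

    g = nx.Graph()
    by_ip = {}
    for rec in intel_records:
        g.add_node(rec["domain"], category=rec["category"])
        by_ip.setdefault(rec["resolved_ip"], []).append(rec["domain"])

    # Link domains that share a resolving IP (one possible linkage criterion).
    for domains in by_ip.values():
        for a, b in zip(domains, domains[1:]):
            g.add_edge(a, b, reason="shared_ip")

    # Each connected component is a candidate group of related malicious domains.
    for component in nx.connected_components(g):
        print(sorted(component))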
In multi-tenant datacenters, the hardware may be homogeneous but the traffic often is not. For instance, customers who pay an equal amount of money can get an unequal share of the bottleneck capacity when they do not open the same number of TCP connections. To address this problem, several recent proposals try to manipulate the traffic that TCP sends from the VMs. VCC and AC/DC are two new mechanisms that let the hypervisor control traffic by influencing the TCP receiver window (rwnd). This avoids changing the guest OS, but it has limitations: it is not possible to make TCP increase its rate faster than it normally would. Seawall, on the other hand, completely rewrites TCP's congestion control, achieving fairness but requiring significant changes to both the hypervisor and the guest OS. There seems to be a need for a middle ground: a method to control TCP's sending rate without requiring a complete redesign of its congestion control. We introduce a minimally invasive solution that is flexible enough to cater for needs ranging from weighted fairness in multi-tenant datacenters to potentially offering Internet-wide benefits from reduced inter-flow competition.
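A hypervisor that steers a guest's TCP rate through the receiver window essentially clamps rwnd to the product of a target rate and the RTT. The sketch below shows only that arithmetic; the target rate, RTT estimate, and MSS are assumed inputs, and this is not the actual VCC or AC/DC logic.

    # Sketch: cap a connection's advertised receive window so that its
    # throughput stays at or below target_rate_bps (bits per second).
    def rwnd_cap(target_rate_bps, rtt_s, mss_bytes, current_rwnd):
        # A window of (rate * RTT) bytes limits throughput to roughly the target rate.
        cap = int(target_rate_bps / 8 * rtt_s)
        # Only shrink the advertised window, and keep at least one segment open.
        return max(mss_bytes, min(current_rwnd, cap))

    # Example: limit a tenant flow to 100 Mbit/s over a 0.5 ms datacenter RTT.
    print(rwnd_cap(target_rate_bps=100e6, rtt_s=0.0005, mss_bytes=1448, current_rwnd=65535))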
Congestion diffusion resulting from coupling through resource competition is a typical form of failure propagation in network systems. Existing models of failure propagation mainly focus on coupling through direct physical connections between nodes, the most efficient paths, or dependence groups, while coupling through resource competition is ignored. In this paper, a model of network congestion diffusion with resource competition is proposed. Drawing on the similarity to resource competition in biomolecular networks, a model that describes the dynamic change of biomolecule concentration based on a titration mechanism serves as the reference for our model. The titration mechanism is then adapted to describe the dynamic change of link load in networks, yielding a novel congestion model with which global congestion can be evaluated. Simulations show that the model reproduces network congestion arising from resource competition.
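The abstract does not give the model's equations, so the toy iteration below is only a generic illustration of load coupling through shared links (flows competing for link capacity), not the paper's titration-based formulation; the topology, demands, and back-off factor are invented.

    # Toy sketch of congestion spreading through resource competition (illustrative only).
    capacity = {"l1": 10.0, "l2": 8.0}
    flows = {"f1": ["l1"], "f2": ["l1", "l2"], "f3": ["l2"]}
    demand = {"f1": 6.0, "f2": 7.0, "f3": 4.0}

    for step in range(3):
        # Link load is the sum of the demands of all flows crossing the link.
        load = {link: sum(d for f, d in demand.items() if link in flows[f])
                for link in capacity}
        congested = {link for link, l in load.items() if l > capacity[link]}
        # Flows crossing a congested link back off, which in turn changes the
        # load on every other link they use (the diffusion effect).
        for f, path in flows.items():
            if any(link in congested for link in path):
                demand[f] *= 0.8
        print(step, load, sorted(congested))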
We present ctrlTCP, a method to combine the congestion controls of multiple TCP connections. In contrast to previous methods such as the Congestion Manager, ctrlTCP can couple all TCP flows that leave one sender, traverse a common bottleneck (e.g., a home user's thin uplink), and arrive at different destinations. Using ns-2 simulations and an implementation in the FreeBSD kernel, we show that our mechanism reduces queuing delay, packet loss, and short-flow completion times while enabling precise allocation of the available bandwidth among the connections according to the needs of the applications.
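One way to picture coupled congestion control is a single aggregate congestion window that is split among the coupled connections in proportion to application-supplied priorities. The weights and window values below are invented for illustration and do not reproduce ctrlTCP's actual algorithm.

    # Sketch: split one aggregate congestion window among coupled flows
    # in proportion to per-flow priorities (all values are illustrative).
    def split_cwnd(aggregate_cwnd, priorities, mss=1448):
        total = sum(priorities.values())
        # Give each flow its weighted share, but never less than one segment.
        return {flow: max(mss, int(aggregate_cwnd * p / total))
                for flow, p in priorities.items()}

    shares = split_cwnd(aggregate_cwnd=60_000, priorities={"video": 2, "upload": 1, "web": 1})
    print(shares)  # {'video': 30000, 'upload': 15000, 'web': 15000}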
Communication between two Internet hosts using parallel connections may result in unwanted interference between the connections. In this dissertation, we propose a sender-side solution to address this problem by letting the congestion controllers of the different connections collaborate, correctly taking congestion control logic into account. Real-life experiments and simulations show that our solution works for a wide variety of congestion control mechanisms, provides great flexibility when allocating application traffic to the connections, and results in lower queuing delay and less packet loss.
Confidentiality, Integrity, and Availability are the principal pillars of any secure software. Considering these security principles during the different software development phases would reduce software vulnerabilities. This paper measures the impact of different software quality metrics on Confidentiality, Integrity, and Availability for any given object-oriented PHP application that has a list of reported vulnerabilities. The National Vulnerability Database was used to provide the impact scores on confidentiality, integrity, and availability for the reported vulnerabilities of the selected applications. The paper studies these scores and their correlation with 25 code metrics for the given vulnerable source code. The results explain 23.7% of the variability in 'Integrity' using four metrics: Vocabulary Used in Code, Card and Agresti, Intelligent Content, and Efferent Coupling. The Length (Halstead) metric alone could predict about 24.2% of the observed variability in 'Availability'. The results indicate no significant correlation of 'Confidentiality' with the tested code metrics.
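To make the setup concrete, a regression of CVSS impact sub-scores on code metrics might look like the sketch below; the column names and data are placeholders, and the abstract does not specify the paper's exact statistical procedure.

    # Sketch: relate code metrics to NVD impact scores via linear regression.
    # Column names and values are placeholders, not the paper's data set.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    df = pd.DataFrame({
        "vocabulary":        [120, 310, 95, 210],
        "efferent_coupling": [8, 21, 5, 14],
        "integrity_impact":  [2.5, 5.9, 1.4, 4.2],  # CVSS integrity sub-score
    })

    X = df[["vocabulary", "efferent_coupling"]]
    y = df["integrity_impact"]
    model = LinearRegression().fit(X, y)
    print("R^2:", model.score(X, y))  # share of the variability explained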
In this paper, we consider the problem of decentralized verification for large-scale cascade interconnections of linear subsystems such that dissipativity properties of the overall system are guaranteed with minimum knowledge of the dynamics. To achieve compositionality, we distribute the verification process among the individual subsystems, which utilize limited information received locally from their immediate neighbors. Furthermore, to obviate the need for full knowledge of the subsystem parameters, each decentralized verification rule employs a model-free learning structure: a reinforcement learning algorithm that allows online evaluation of the appropriate storage function used to verify dissipativity of the cascade up to that point. Finally, we show how the interconnection can be extended by adding learning-enabled subsystems while ensuring dissipativity.
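For reference, the dissipativity property being verified is usually stated as a storage-function inequality of the standard textbook form below (not a formula quoted from the paper):

    V(x(t_1)) - V(x(t_0)) \le \int_{t_0}^{t_1} s\bigl(u(t), y(t)\bigr)\, dt

where V \ge 0 is the storage function and s(u, y) the supply rate (e.g., s(u, y) = u^\top y for passivity); the reinforcement learner's role here is to estimate a suitable V online for each subsystem.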
This paper describes an approach to detecting malicious code introduced by insiders, which can compromise the data integrity of a program. The approach identifies security spots in a program, i.e., code locations that may be either malicious or benign; malicious code is then detected by reviewing each security spot to determine whether it is malicious or benign. Integrity breach conditions (IBCs) for object-oriented programs are specified to identify security spots; they are expressed in terms of coupling within an object or between objects. A prototype tool was developed to validate the approach with a case study.
Software integration in modern vehicles is continuously expanding, as vehicle manufacturers keep adding innovative and competitive features that rely on complex software functionality. However, these features come at a cost: they amplify the security vulnerabilities in vehicles and lead to more security issues in today's automobiles. As a result, identifying vulnerable components in a vehicle software system has become crucial. Security experts need to know which components of the vehicle software system can be exploited for attacks and should focus their testing and inspection efforts on them. Nevertheless, identifying these weak components in a vehicle's system is a challenging and costly task. In this paper, we propose security vulnerability metrics for connected vehicles that aim to assist software testers during the development life-cycle in identifying the frail links that put the vehicle at high security risk. Vulnerable-function assessment can give software testers a good idea of which components in a connected vehicle should be prioritized in order to mitigate risk and hence secure the vehicle. The proposed metrics were applied to OpenPilot, a software system that provides an Autopilot feature and has been integrated with 48 different vehicles. The application shows how the defined metrics can be effectively used to quantitatively measure the vulnerabilities of a vehicle software system.
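The abstract does not define the metrics themselves, so the snippet below is only a hypothetical composite score per function (weighting coupling, external-input reachability, and complexity) to show how such a ranking could guide testing effort; every field and weight is invented.

    # Hypothetical vulnerable-function ranking (fields and weights are invented).
    functions = [
        {"name": "can_parse",  "coupling": 9, "external_input": 1, "complexity": 14},
        {"name": "ui_render",  "coupling": 3, "external_input": 0, "complexity": 6},
        {"name": "radar_fuse", "coupling": 7, "external_input": 1, "complexity": 11},
    ]

    def vulnerability_score(f, w_coupling=0.4, w_input=0.4, w_complexity=0.2):
        # Functions reachable from external input get a fixed bonus of 10 * weight.
        return (w_coupling * f["coupling"]
                + w_input * 10 * f["external_input"]
                + w_complexity * f["complexity"])

    # Test and inspect the highest-scoring functions first.
    for f in sorted(functions, key=vulnerability_score, reverse=True):
        print(f["name"], round(vulnerability_score(f), 1))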
The key factors for deploying successful services center on the service design practices adopted by an enterprise. Design-level information should be validated, and measures are required to quantify its structural attributes. Metrics at this stage support early discovery of design flaws and help designers predict the capabilities of service-oriented architecture (SOA) adoption. In this work, we take a deeper look at how to forecast two key SOA capabilities, infrastructure efficiency and service reuse, from service designs modeled with the SOA modeling language. The proposed approach defines metrics based on the structural and domain-level similarity of service operations. The proposed metrics are analytically validated with respect to software engineering metric properties. Moreover, a tool has been developed to automate the approach, and the results indicate that the metrics predict the SOA capabilities at the service design stage. This work can be further extended to predict business-oriented capabilities of SOA adoption such as flexibility and agility.
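As a purely illustrative reading of "structural and domain-level similarity of service operations" (the paper's actual definitions are not given in the abstract), operations could be compared by the Jaccard similarity of their message elements, with highly similar operations flagged as reuse candidates.

    # Illustrative operation similarity (the real metric definitions are assumed).
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    operations = {
        "createOrder": {"customerId", "items", "total"},
        "updateOrder": {"orderId", "items", "total"},
        "getInvoice":  {"invoiceId", "customerId"},
    }

    # Pairs with high similarity hint at reuse (or consolidation) opportunities.
    for x, y in [("createOrder", "updateOrder"), ("createOrder", "getInvoice")]:
        print(x, y, round(jaccard(operations[x], operations[y]), 2))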
In vehicular networks, each message is signed by the generating node to ensure accountability for its contents. For privacy reasons, each vehicle uses a collection of certificates, which for accountability reasons are linked at a central authority. One such design is the Security Credential Management System (SCMS) [1], the leading credential management system in the US. The SCMS is composed of multiple logically separated components, each with a different key-management task. The SCMS is designed to ensure privacy against a single insider compromise, or against outside adversaries. In this paper, we demonstrate that the current SCMS design fails to achieve this goal, showing that a compromised authority can gain substantial information about certificate linkages. We propose a solution that accommodates threshold-based detection but uses relabeling and noise to limit the information that can be learned from a single insider adversary. We also analyze our solution using techniques from differential privacy and validate it using traffic-simulator-based experiments. Our results show that the proposed solution prevents leakage of private information to a compromised authority, even when it colludes with outside attackers.
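The "noise" part can be pictured as a Laplace mechanism applied to linkage-related counts before they are exposed, in the spirit of differential privacy; the epsilon, sensitivity, and count below are arbitrary, and this is not the paper's exact mechanism.

    # Sketch: Laplace noise on a certificate-linkage count (illustrative values only).
    import numpy as np

    def noisy_count(true_count, epsilon=0.5, sensitivity=1.0):
        # Laplace mechanism: noise drawn with scale = sensitivity / epsilon.
        return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Report a noisy count instead of the exact linkage statistic.
    print(noisy_count(true_count=12))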
The correct prediction of faulty modules or classes has a number of advantages, such as improving the quality of software and assigning capable development resources to fix such faults. Different kinds of fault/defect prediction models have been proposed in the literature, but a great majority of them make use of static code metrics as independent variables for making predictions. Recently, process metrics have gained considerable attention as alternative metrics for making trustworthy predictions. The objective of this paper is to investigate different combinations of static code and process metrics for evaluating fault prediction performance. We used publicly available data sets, along with a frequently used classifier, Naive Bayes, to run our experiments, and analyzed the results both statistically and visually. The statistical analysis showed evidence against any significant difference in fault prediction performance across a variety of metric combinations, reinforcing earlier research results that process metrics are as good predictors of fault proneness as static code metrics. Furthermore, visual inspection of box plots revealed that the best set of metrics for fault prediction is a mix of both static code and process metrics. We also presented evidence that some process metrics are more discriminating than others, and thus good predictors to use.
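A minimal version of the experimental setup described here, Naive Bayes trained on a mix of static code and process metrics, could look like the sketch below; the feature names and data are placeholders rather than the paper's public data sets.

    # Sketch: Naive Bayes fault prediction on combined static-code and process metrics.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    # Columns: lines_of_code, cyclomatic_complexity (static); commits, authors (process).
    X = np.array([
        [120,  8, 15, 3],
        [ 40,  2,  2, 1],
        [300, 20, 30, 5],
        [ 75,  5,  4, 2],
        [220, 14, 22, 4],
        [ 55,  3,  3, 1],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = faulty module, 0 = clean

    scores = cross_val_score(GaussianNB(), X, y, cv=3)
    print("mean accuracy:", scores.mean())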
Interconnect opens are known to be one of the predominant defects in nanoscale technologies. Automatic test pattern generation for open faults is challenging because of their rather unstable behavior and the numerous electrical parameters that need to be considered. Thus, most approaches avoid accurately modeling all constraints, such as the influence of the aggressors on the open net, and use simplified fault models in order to detect as many faults as possible, or they make assumptions that decrease both complexity and accuracy. Yet this leads to the problem that not only may generated tests be invalidated, but the localization of a specific fault may also fail in case such a model is used as the basis for diagnosis. Furthermore, most of the models do not consider oscillating behavior, caused by feedback introduced by coupling capacitances, which occurs in almost all designs. In [1], the Robust Enhanced Aggressor Victim Model (REAV) was introduced, and in [2] an extension addressing the problem of oscillating behavior. The resulting model not only considers the influence of all aggressors accurately but also guarantees robustness against oscillating behavior as well as against process variations affecting the thresholds of gates driven by an open interconnect. In this work we present the first diagnostic classification algorithm for this model. The algorithm accurately considers all constraints enforced by the REAV model - and hence handles unknown values as well as oscillating behavior. In addition, it distinguishes faults at the same interconnect, thus reducing the area that has to be considered for physical failure analysis. Experimental results show the high efficiency of the new method, which handles circuits with up to 500,000 non-equivalent faults and considerably increases the diagnostic resolution.