Biblio
For modern Automatic Test Equipment (ATE), one of the most daunting tasks is now Information Assurance (IA). What was once at most a secondary item, consisting mainly of installing an Anti-Virus suite, is becoming one of the most important aspects of ATE. Given the current IA climate, it is essential to keep ATE safe from security breaches and loss of information. Even though most ATE are not on the Internet (or, in many cases, on any network at all), they are still vulnerable to some of the same attack vectors plaguing common computers and other electronic devices. This paper discusses some of the processes and procedures that must be used to ensure that modern ATE can continue to be used to test and detect faults in the systems they are designed to test. The common items that must be considered for ATE are as follows:
- The ATE system must have some form of Anti-Virus protection (as should all computers).
- The ATE system should have a minimal software footprint, providing only the software needed to perform its task.
- The ATE system should be verified to have all Operating System (OS) settings configured for the task it is intended to perform.
- The ATE OS settings should include password and password-expiration settings to prevent access by anyone not expected to be on the system.
- The ATE system software should be written and constructed so that it is not itself readily open to attack.
- The ATE system should be designed so that none of the instruments in the system can easily be attacked.
- The ATE system should ensure that any paths to the outside world (such as Ethernet or USB devices) are limited to only those required to perform the task it was designed for.
These and many other common configuration concerns are discussed in the paper.
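As a rough illustration of how a few of the configuration items above could be audited automatically, the sketch below checks two settings on a Windows-based ATE station: the local maximum password age and whether USB mass storage is disabled. The choice of Windows, the 60-day threshold, and the specific checks are assumptions made for illustration only; they are not taken from the paper.

```python
"""Minimal, illustrative audit sketch for a Windows-based ATE station.

Assumptions (not from the paper): the station runs Windows, the password
policy is readable via `net accounts`, and USB mass storage is governed by
the USBSTOR service "Start" registry value (3 = enabled, 4 = disabled).
"""
import subprocess
import winreg

MAX_PASSWORD_AGE_DAYS = 60  # placeholder policy threshold


def password_expiration_ok() -> bool:
    """Parse `net accounts` output for the maximum password age."""
    out = subprocess.run(["net", "accounts"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Maximum password age" in line:
            value = line.split(":")[-1].strip()
            return value.isdigit() and int(value) <= MAX_PASSWORD_AGE_DAYS
    return False  # policy line not found -> flag for manual review


def usb_storage_disabled() -> bool:
    """Check the USBSTOR driver start type (4 means disabled)."""
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Services\USBSTOR")
    start, _ = winreg.QueryValueEx(key, "Start")
    return start == 4


if __name__ == "__main__":
    print("Password expiration policy OK:", password_expiration_ok())
    print("USB mass storage disabled:   ", usb_storage_disabled())
```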
Given a history of detected malware attacks, can we predict the number of malware infections in a country? Can we do this for different malware and countries? This is an important question with numerous implications for cyber security, from designing better anti-virus software, to designing and deploying targeted patches, to more accurately measuring the economic impact of breaches. The problem is compounded by the fact that, as external observers, we can only detect a fraction of actual malware infections. In this paper we address this problem using data from Symantec covering more than 1.4 million hosts and 50 malware, spanning two years and multiple countries. We first carefully design domain-based features from both the malware and machine-host perspectives. Second, inspired by epidemiological and information-diffusion models, we design a novel temporal non-linear model for malware spread and detection. Finally, we present ESM, an ensemble-based approach that combines both methods to construct a more accurate algorithm. Using extensive experiments spanning multiple malware and countries, we show that ESM can effectively predict malware infection ratios over time (both the actual number and the trend) up to 4 times better than several baselines on various metrics. Furthermore, ESM's performance is stable and robust even when the number of detected infections is low.
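To make the flavor of a temporal non-linear infection model concrete, the sketch below fits a simple logistic (SI-style) growth curve to a time series of detected infection counts and extrapolates it forward. This is not the authors' ESM model or feature set; the data are synthetic and the parameters are placeholders, included only to illustrate the general idea of fitting an epidemic-style curve to detections.

```python
"""Fit a simple SI-style logistic curve to detected infection counts.
NOT the ESM model from the paper; synthetic placeholder data only."""
import numpy as np
from scipy.optimize import curve_fit


def logistic(t, k, r, t0):
    """Cumulative infections under a simple logistic (SI-style) model."""
    return k / (1.0 + np.exp(-r * (t - t0)))


# Synthetic "detected infections per week" (placeholder data).
weeks = np.arange(20)
detected = logistic(weeks, k=5000, r=0.6, t0=10) \
    + np.random.default_rng(0).normal(0, 100, 20)

# Fit the non-linear model to the observed counts.
params, _ = curve_fit(logistic, weeks, detected,
                      p0=[detected.max(), 0.5, weeks.mean()])

# Extrapolate a few weeks ahead to "predict" future infection counts.
future = np.arange(20, 26)
print("fitted parameters (k, r, t0):", params)
print("forecast:", logistic(future, *params))
```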
Traditional anti-virus technology is primarily based on static analysis and dynamic monitoring. However, both techniques depend heavily on application files, which increases the risk of being attacked and wastes time and network bandwidth. In this study, we propose a new graph-based method with which malicious URLs can be detected preliminarily, without the application file. First, relationships between URLs are found through the relationships between people and URLs, and association rules are mined along with the confidence of each frequent URL. Second, a network of URLs is built from the association rules. Once the network of URLs is built, we cluster the data using modularity to detect communities, where each community represents a different type of URL. We assume that if a URL is associated with one of these communities, the URL is probably malicious. In our experiments, we successfully captured 82% of malicious samples, a higher capture rate than traditional methods.
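The sketch below illustrates the general shape of such a graph-based pipeline: link URLs visited by the same users, detect communities by modularity, and flag URLs that land in a community containing known-malicious seeds. The toy logs, the co-occurrence threshold, and the seed list are assumptions for illustration; this is not the paper's actual association-rule mining or dataset.

```python
"""Toy graph-based URL screening: co-visitation graph + modularity
communities. Illustrative only; data and thresholds are placeholders."""
from itertools import combinations
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Placeholder (user -> visited URLs) logs.
visits = {
    "user1": ["a.com", "b.com", "evil1.biz"],
    "user2": ["a.com", "b.com"],
    "user3": ["evil1.biz", "evil2.biz", "b.com"],
    "user4": ["evil1.biz", "evil2.biz"],
}
known_malicious = {"evil1.biz"}  # assumed seed labels

# Count how often two URLs are visited by the same user (simple association).
pair_counts = Counter()
for urls in visits.values():
    for u, v in combinations(sorted(set(urls)), 2):
        pair_counts[(u, v)] += 1

# Build the URL network, keeping pairs seen by at least MIN_SUPPORT users.
MIN_SUPPORT = 2
G = nx.Graph((u, v) for (u, v), c in pair_counts.items() if c >= MIN_SUPPORT)

# Modularity-based community detection, then flag suspicious communities.
for community in greedy_modularity_communities(G):
    if community & known_malicious:
        suspects = sorted(community - known_malicious)
        print("possibly malicious:", suspects)
```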