Bibliography
Vehicular Ad Hoc Networks (VANETs) enable vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications that bring many benefits and conveniences, improving road safety and driving comfort in future transportation systems. The Sybil attack is considered one of the riskiest threats in VANETs, since a Sybil attacker can generate multiple fake identities with false messages to severely impair the normal functions of safety-related applications. In this paper, we propose Voiceprint, a novel Sybil attack detection method based on the Received Signal Strength Indicator (RSSI) that provides widely applicable, lightweight, and fully distributed detection for VANETs. To avoid the inaccurate position estimation produced by the predefined radio propagation models used in previous RSSI-based detection methods, Voiceprint treats each RSSI time series as vehicular 'speech' and compares the similarity among all received time series. Voiceprint does not rely on any predefined radio propagation model and conducts independent detection without the support of centralized infrastructure, achieving a more accurate detection rate across different dynamic environments. Extensive simulations and real-world experiments demonstrate that Voiceprint is an effective method considering cost, complexity, and performance.
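To make the Voiceprint idea concrete, here is a minimal sketch (not the authors' exact algorithm) of flagging identity pairs whose RSSI time series are suspiciously similar; the mean-absolute-difference similarity measure and the 2 dB threshold are illustrative assumptions.

```python
# Sketch: two identities whose RSSI traces are nearly identical are
# flagged as a suspected Sybil pair (one physical transmitter).
import numpy as np

def similarity(rssi_a, rssi_b):
    """Mean absolute difference between two aligned RSSI traces (dBm)."""
    a, b = np.asarray(rssi_a, float), np.asarray(rssi_b, float)
    n = min(len(a), len(b))
    return np.mean(np.abs(a[:n] - b[:n]))

def detect_sybil(traces, threshold_db=2.0):
    """traces: dict mapping identity -> list of RSSI samples.
    Returns identity pairs whose traces are suspiciously similar."""
    ids = list(traces)
    suspects = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if similarity(traces[ids[i]], traces[ids[j]]) < threshold_db:
                suspects.append((ids[i], ids[j]))
    return suspects

# Two fake identities broadcast from the same radio, so their RSSI
# traces track each other; the honest vehicle's trace differs.
traces = {"veh1": [-60, -62, -61, -65], "fake1": [-70, -72, -71, -74],
          "fake2": [-70, -71, -71, -75]}
print(detect_sybil(traces))   # [('fake1', 'fake2')]
```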
As the use of low-power, low-resource embedded devices continues to increase dramatically with the introduction of new Internet of Things (IoT) devices, security techniques compatible with these devices are necessary. This research advances knowledge in the area of cyber security for the IoT by exploring a moving target defense that limits the time attackers can spend on reconnaissance of embedded systems, while considering the challenges IoT devices present, such as resource and performance constraints. We introduce the design of and optimizations for a Micro-Moving Target IPv6 Defense, including a description of the modes of operation, the needed protocols, and the use of lightweight hash algorithms. We also detail testing and validation possibilities, including a Cooja simulation configuration, and describe the direction for further enhancing and validating the security technique through large-scale simulations and hardware testing, followed by information on other future considerations.
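As a rough illustration of how a hash-driven moving-target IPv6 defense can work, the sketch below derives a short-lived interface identifier from a lightweight hash of a pre-shared secret and the current time slot, so cooperating endpoints agree on the address while scanners cannot predict it; the prefix, secret, slot length, and use of SHA-256 are assumptions for the example, not the protocol specified in the paper.

```python
import hashlib, ipaddress, time

NET = ipaddress.IPv6Network("2001:db8:0:1::/64")  # example prefix (RFC 3849)
SECRET = b"shared-secret"                          # distributed out of band
SLOT_SECONDS = 10                                  # address lifetime

def current_address(t=None):
    """Derive this slot's address; both endpoints compute the same value."""
    slot = int((time.time() if t is None else t) // SLOT_SECONDS)
    digest = hashlib.sha256(SECRET + slot.to_bytes(8, "big")).digest()
    iid = int.from_bytes(digest[:8], "big")        # low 64 bits: interface ID
    return ipaddress.IPv6Address(int(NET.network_address) | iid)

print(current_address(t=0))    # address for time slot 0
print(current_address(t=15))   # rotates in the next slot
```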
Robust Trojans inserted in outsourced products result in security vulnerabilities, so post-silicon testing is mandatory to detect such malicious inclusions. Logic testing becomes impractical for larger circuits with sequential Trojans; for such cases, side-channel analysis is an effective approach. The major challenge with side-channel analysis is the reduction in hardware-Trojan detection sensitivity due to process variation, which can lead to false positives and false negatives and is unavoidable during manufacturing. In this paper, a self-referencing method is proposed that measures the leakage power of the circuit in four different time windows; this hammers the Trojan into triggering and also helps identify and eliminate the false positives and false negatives caused by process variation.
The best practice to prevent Cross Site Scripting (XSS) attacks is to apply encoders to sanitize untrusted data. To balance security and functionality, encoders should be applied to match the web page context, such as HTML body, JavaScript, and style sheets. A common programming error is the use of the wrong encoder to sanitize untrusted data, leaving the application vulnerable. We present a security unit testing approach to detect XSS vulnerabilities caused by improper encoding of untrusted data. Unit tests for XSS vulnerabilities are automatically constructed from each web page and then evaluated by a unit test execution framework, and a grammar-based attack generator is used to automatically generate test inputs. We evaluate our approach on a large open source medical records application, demonstrating that we can detect many 0-day XSS vulnerabilities with very few false positives, and that the grammar-based attack generator achieves better test coverage than industry best practices.
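The core of the approach, generating attack strings from a grammar and asserting that a context-appropriate encoder neutralizes them, can be sketched as follows; the tiny grammar and the use of Python's html.escape as the HTML-body encoder are illustrative stand-ins for the paper's far richer grammar and the application's own encoders.

```python
# Toy grammar-based XSS attack generator plus a unit-test-style check
# that an HTML-body encoder neutralizes every generated string.
import html, itertools

GRAMMAR = {
    "attack":  [["<script>", "payload", "</script>"],
                ["<img src=x onerror=", "payload", ">"]],
    "payload": [["alert(1)"], ["alert(document.cookie)"]],
}

def expand(symbol):
    """Yield every terminal string derivable from `symbol`."""
    if symbol not in GRAMMAR:          # terminal token
        yield symbol
        return
    for rule in GRAMMAR[symbol]:
        for parts in itertools.product(*(expand(s) for s in rule)):
            yield "".join(parts)

for attack in expand("attack"):
    encoded = html.escape(attack)      # HTML-body context encoder
    assert "<" not in encoded, attack  # no raw tag can survive encoding
    print("neutralized:", encoded)
```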
Customers need to know how reliable a new release is, and whether or not the new release has substantially different reliability, either better or worse, than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy: we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines a 'maturity assessment' component (estimating reliability prior to release to the field) and a 'reliability assessment' component (gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results for recent releases of a product.
This paper presents a new model to support empirical failure probability estimation for a software-intensive system. The new element of the approach is that it combines the results of testing on a simulated hardware platform with results from testing on the real platform. This approach addresses a serious practical limitation of a technique known as statistical testing. This limitation will be called the test time expansion problem (or simply the 'time problem'): the amount of testing required to demonstrate useful levels of reliability over a time period T is many orders of magnitude greater than T. The time problem arises whether the aim is to demonstrate ultra-high reliability levels for a protection system or to demonstrate any (desirable) reliability level for continuous-operation ('high demand') systems. Specifically, the theoretical feasibility of a platform simulation approach is considered since, if this is not proven, questions of practical implementation are moot. Subject to the assumptions made in the paper, theoretical feasibility is demonstrated.
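The scale of the time problem follows from the standard reliability-demonstration bound (a textbook result, not a formula from this paper): if failures follow a Poisson process with rate \(\lambda_0\), then \(t\) failure-free hours support the claim \(\lambda \le \lambda_0\) with confidence \(c\) only when

```latex
c \;\le\; 1 - e^{-\lambda_0 t}
\qquad\Longrightarrow\qquad
t \;\ge\; \frac{-\ln(1-c)}{\lambda_0}
```

For \(c = 0.99\) this gives \(t \ge 4.6/\lambda_0\), so demonstrating \(\lambda_0 = 10^{-4}\) per hour already requires about \(4.6 \times 10^{4}\) failure-free hours, which is exactly the expansion the platform-simulation approach is meant to make affordable.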
This paper focuses on one type of Covert Storage Channel (CSC) that uses the 6-bit TCP flags header field in TCP/IP network packets to transmit secret messages between accomplices. We use relative entropy to characterize the irregularity of network flows in comparison to normal traffic. A normal profile is created from the frequency distribution of TCP flags in regular traffic packets; in detection, the TCP flag frequency distribution of network traffic is computed for each unique IP pair. To evaluate the accuracy and efficiency of the proposed method, this study uses real regular traffic data sets as well as CSC messages produced by coding schemes under assumptions of both clear text (composed of a list of keywords common in Unix systems) and encrypted text. Moreover, smart accomplices may use only those TCP flags that appear in normal traffic; even then, the relative entropy can reveal the dissimilarity of their frequency distribution from the normal profile. We have also used two different data processing methods in detection: one summarizes all the packets for a pair of IP addresses into one flow, and the other slides a moving window over such a flow to generate multiple frames of packets. The experimental results, displayed as Receiver Operating Characteristic (ROC) curves, show that the method is promising for differentiating normal and CSC traffic packet streams. Furthermore, the delay in raising an alert is analyzed for CSC messages to show the method's efficiency.
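A minimal version of the relative-entropy check is sketched below: build a normal TCP-flag profile, estimate the observed distribution for an IP pair, and alert when the Kullback-Leibler divergence is large. The flag alphabet, counts, smoothing constant, and alert threshold are illustrative assumptions.

```python
import math

FLAGS = ["SYN", "SYN-ACK", "ACK", "PSH-ACK", "FIN-ACK", "RST"]

def distribution(counts, alphabet=FLAGS, smoothing=1e-6):
    """Smoothed frequency distribution over the flag alphabet."""
    total = sum(counts.get(f, 0) + smoothing for f in alphabet)
    return {f: (counts.get(f, 0) + smoothing) / total for f in alphabet}

def relative_entropy(p, q):
    """KL divergence D(p || q) in bits."""
    return sum(p[f] * math.log2(p[f] / q[f]) for f in p)

# Normal profile from regular traffic (illustrative counts).
normal = distribution({"SYN": 60, "SYN-ACK": 55, "ACK": 700,
                       "PSH-ACK": 150, "FIN-ACK": 55, "RST": 10})
# A covert channel encoding message bits in flag values skews the mix.
observed = distribution({"SYN": 300, "ACK": 250, "RST": 200, "PSH-ACK": 250})
print(relative_entropy(observed, normal))  # large value -> raise an alert
```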
The rapid growth of science and technology in the telecommunications world also creates new opportunities for people bent on abusing it to threaten information security, such as hackers, crackers, carders, and phreakers. If information falls into the wrong hands, losses result, so confidential information must be secured. Steganography is a method for hiding a message inside digital media; digital steganography uses digital media such as images, sound, text, and video as the container vessel, and the hidden secret data can likewise be images, audio, text, or video. In this final project, audio steganography is implemented using the Least Significant Bit (LSB) method, accompanied by cryptography in the form of encryption and decryption. The method works as follows: the message is first encrypted and then hidden evenly across the regions into which the MP3 or WAV file has been divided, by modifying the LSBs of the container media with the bits of the information to be hidden. The steganography application is written in Java using Eclipse, because development is straightforward and the program runs on the Android smartphone operating system.
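The LSB embedding step itself is simple; the following sketch shows it on a list of 16-bit PCM sample values (the described application additionally encrypts the message first and spreads the bits evenly across regions of the MP3/WAV file, which is omitted here for brevity).

```python
def embed(samples, message_bits):
    """Replace the least significant bit of each audio sample with one
    message bit. samples: list of ints; message_bits: list of 0/1."""
    out = list(samples)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit   # clear LSB, then set it to the bit
    return out

def extract(samples, n_bits):
    """Read the hidden bits back from the sample LSBs."""
    return [s & 1 for s in samples[:n_bits]]

samples = [1000, -2003, 517, 88, -41, 12345, -2, 7]
bits = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed(samples, bits)
assert extract(stego, len(bits)) == bits
print(stego)  # each sample differs from the original by at most 1
```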
Phishing is a major concern on the Internet today, and many users are falling victim to criminals' deceitful tactics. Blacklisting is still the most common defence users have against phishing websites, but it is failing to cope with their increasing number. In recent years, researchers have devised modern ways of detecting such websites using machine learning. One such method is to build machine-learnt models of URL features to classify whether URLs are phishing. However, there are varying opinions on the best approach to features and algorithms. In this paper, the objective is to evaluate the performance of the Random Forest algorithm using a lexical-only dataset. The performance is benchmarked against other machine learning algorithms, and additionally against results reported in the literature. Initial results from experiments indicate that the Random Forest algorithm performs best, yielding 86.9% accuracy.
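A minimal sketch of the lexical-features-plus-Random-Forest pipeline, assuming scikit-learn; the feature set and the four-URL toy dataset are illustrative placeholders, not the paper's benchmark data.

```python
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def lexical_features(url):
    """Features derived from the URL string alone (no page content)."""
    p = urlparse(url)
    return [len(url), len(p.netloc), url.count("."), url.count("-"),
            url.count("@"), url.count("/"),
            int(any(c.isdigit() for c in p.netloc))]

urls = ["https://www.example.com/login",
        "http://paypa1-secure-update.example-verify.biz/acct@login",
        "https://docs.python.org/3/library/",
        "http://192.0.2.7/~bank/secure//signin"]
labels = [0, 1, 0, 1]   # 1 = phishing

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([lexical_features(u) for u in urls], labels)
print(clf.predict([lexical_features("http://secure-login.example-bank.xyz/@verify")]))
```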
Personalized medicine performs diagnoses and treatments according to the DNA information of patients. This new paradigm will change the health-care model of the future: a doctor will perform DNA sequence matching instead of regular clinical laboratory tests to diagnose and medicate diseases. Additionally, with the help of affordable personal genomics services such as 23andMe, personalized medicine will be applied to a large population. Cloud computing will be the perfect computing model, as the volume of DNA data and the computation over it are often immense. However, due to its sensitivity, the DNA data should be encrypted before being outsourced to the cloud. In this paper, we start from a practical system model of personalized medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Compared with existing solutions, our scheme protects the DNA data privacy as well as the search pattern, providing a better privacy guarantee. We have proved that our scheme is secure under a well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real-world problems is practical and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem; it applies to general privacy-preserving pattern matching problems, which are widely used in the real world.
Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses the situation where this assumption of a fixed probability distribution shared by the two domains is violated, yet the difference between the two domains (training/source and testing/target) may not be known a priori. There has been a recent thrust in addressing the problem of learning in the presence of an adversary, which we formulate as a problem of domain adaptation in order to build a more robust classifier. This is because the overall security of classifiers and their preprocessing stages has been called into question by the recent findings of adversaries in learning settings. Adversarial training (and testing) data pose a serious threat in scenarios where an attacker has the opportunity to 'poison' the training data or 'evade' on the testing data set(s) in order to achieve something that is not in the best interest of the classifier. Recent work has begun to show the impact of adversarial data on several classifiers; however, the impact of the adversary on aspects related to the preprocessing of data (i.e., dimensionality reduction or feature selection) has widely been ignored in the revival of adversarial learning research. Furthermore, variable selection, which is a vital component of any data analysis, has been shown to be particularly susceptible to an attacker that has knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial setting by considering the effects of adversarial data and how to mitigate them through optimization. Our model forms a single convex optimization problem that uses the labeled training data from the source domain and known weaknesses of the model for an adversarial component. We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
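As a generic illustration of folding a known adversarial weakness into a single convex objective (not the authors' exact formulation), the sketch below trains logistic regression against a worst-case L-infinity perturbation of radius eps; the worst case tightens every margin by eps times the L1 norm of the weights while keeping the problem convex.

```python
import numpy as np

def train_robust(X, y, eps=0.1, lr=0.5, steps=1000):
    """Logistic regression whose every margin is reduced by the
    worst-case perturbation eps * ||w||_1 (labels y in {-1, +1})."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        m = y * (X @ w) - eps * np.abs(w).sum()   # worst-case margins
        g = -1.0 / (1.0 + np.exp(m))              # dloss/dmargin per sample
        grad = (X.T @ (g * y)) / n - eps * np.sign(w) * g.mean()
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=400)
X = y[:, None] * np.array([1.5, 1.0]) + rng.normal(size=(400, 2))
w = train_robust(X, y, eps=0.3)
print("train accuracy:", np.mean(np.sign(X @ w) == y))
```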
Deep learning has been proven more effective than conventional machine learning algorithms in solving classification problems with high dimensionality and complex features, especially when trained with big data. In this paper, a deep learning binomial classifier for a Network Intrusion Detection System is proposed and experimentally evaluated using the UNSW-NB15 dataset. Three experiments were executed: first to determine the optimal activation function, then to select the most important features, and finally to test the proposed model on unseen data. The evaluation results demonstrate that the proposed classifier outperforms other models in the literature, with 98.99% accuracy and a 0.56% false alarm rate on unseen data.
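For concreteness, here is a minimal deep binary classifier of this general shape using tf.keras; the layer sizes, ReLU/sigmoid activations, feature count, and the stand-in random data are assumptions for illustration, since the paper selects its activation function and features experimentally on UNSW-NB15.

```python
import numpy as np
import tensorflow as tf

n_features = 42          # size of a preprocessed feature vector (illustrative)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # normal vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stand-in data; replace with the real train/test splits.
X = np.random.rand(1000, n_features).astype("float32")
y = np.random.randint(0, 2, size=(1000,))
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [loss, accuracy]
```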
New and unseen network attacks pose a great threat to signature-based detection systems. Consequently, machine learning-based approaches are designed to detect attacks by relying on features extracted from network data. One problem is that features may be distributed differently in the training and testing datasets, which degrades the performance of the learned models. Moreover, generating labeled datasets is very time-consuming and expensive, which undercuts the effectiveness of supervised learning approaches. In this paper, we propose using transfer learning to detect previously unseen attacks. The main idea is to learn, from labeled training sets and unlabeled testing sets containing different types of attacks, an optimized representation that is invariant to changes in attack behavior, and to feed that representation to a supervised classifier. To the best of our knowledge, this is the first effort to use a feature-based transfer learning technique to detect unseen variants of network attacks. Furthermore, the technique can be used with any common base classifier. We evaluated the technique on publicly available datasets, and the results demonstrate the effectiveness of transfer learning for detecting new network attacks.
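The pipeline shape, learning a shared representation from labeled source data plus unlabeled target data and feeding it to an ordinary supervised classifier, can be sketched as below; PCA over the pooled domains is a deliberately crude stand-in for the paper's optimized invariant representation, and the synthetic domains are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Source domain: labeled. Target domain: shifted distribution, unlabeled
# at training time (yt is used only to score the result).
Xs = rng.normal(0.0, 1.0, (300, 20)); ys = (Xs[:, 0] > 0).astype(int)
Xt = rng.normal(0.5, 1.2, (300, 20)); yt = (Xt[:, 0] > 0.5).astype(int)

# Shared representation learned from both domains, then a base classifier.
pca = PCA(n_components=5).fit(np.vstack([Xs, Xt]))
clf = LogisticRegression().fit(pca.transform(Xs), ys)
print("target accuracy:", clf.score(pca.transform(Xt), yt))
```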
Metaheuristic search is an advance over traditional heuristic search. Selecting one option among different alternatives is easy; what is hard is guaranteeing that the selection is cost-effective. Metaheuristic search solves this hard problem with the help of a fitness function: a crucial metric, or measure, that helps decide which solution is optimal to choose from the available set of test sets. This paper discusses hill climbing, simulated annealing, tabu search, genetic algorithms, and particle swarm optimization in detail, explaining each with the help of its algorithm. If metaheuristic search techniques were combined with security testing methods, the result would be a searching technique that is both better and more secure. This paper focuses primarily on the metaheuristic search techniques themselves.
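Of the techniques surveyed, hill climbing is the simplest to show with an explicit fitness function; in the illustrative sketch below, the task is selecting a subset of test cases that maximizes requirement coverage minus execution cost (the test data and weights are made up for the example).

```python
import random

TESTS = {   # test id -> (covered requirements, execution cost)
    "t1": ({"r1", "r2"}, 3), "t2": ({"r2", "r3"}, 2),
    "t3": ({"r4"}, 1),       "t4": ({"r1", "r4", "r5"}, 4),
}

def fitness(subset):
    """Reward covered requirements, penalize execution cost."""
    covered = set().union(*(TESTS[t][0] for t in subset)) if subset else set()
    cost = sum(TESTS[t][1] for t in subset)
    return 10 * len(covered) - cost

def neighbors(subset):
    for t in TESTS:                 # flip one test in or out of the subset
        yield subset ^ {t}

def hill_climb(seed=0):
    rng = random.Random(seed)
    current = {t for t in TESTS if rng.random() < 0.5}
    while True:
        best = max(neighbors(current), key=fitness)
        if fitness(best) <= fitness(current):
            return current          # local optimum reached
        current = best

best = hill_climb()
print(sorted(best), fitness(best))
```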
Hypervisors are the main components for managing virtual machines on cloud computing systems, so their security is crucial: the whole system can be compromised when just one vulnerability is exploited. In this paper, we assess the vulnerabilities of widely used hypervisors, including VMware ESXi, Citrix XenServer, and KVM, using the NIST 800-115 security testing framework. We perform real experiments to assess the vulnerabilities of these hypervisors using security testing tools. The results are evaluated using weakness information from CWE and vulnerability information from CVE, and we compute severity scores using CVSS information. All vulnerabilities found in the three hypervisors are compared in terms of weaknesses, severity scores, and impact. The experimental results show that ESXi and XenServer have common weaknesses and vulnerabilities, whereas KVM has fewer vulnerabilities. In addition, we discover a new HTTP response splitting vulnerability in the ESXi Web interface.
Integrating security testing into the workflow of software developers not only saves the resources needed for separate security testing but also reduces the cost of fixing security vulnerabilities by detecting them early in the development cycle. We present an automatic testing approach to detect a common type of Cross Site Scripting (XSS) vulnerability caused by improper encoding of untrusted data. We automatically extract the encoding functions a web application uses to sanitize untrusted inputs and then evaluate their effectiveness by automatically generating XSS attack strings. Our evaluations show that this technique can detect 0-day XSS vulnerabilities that cannot be found by static analysis tools. We also show that our approach can efficiently cover a common type of XSS vulnerability, and that it can be generalized to test input validation against other types of injection, such as command-line injection.
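A concrete miniature of the wrong-encoder defect this approach hunts for (illustrative, not the paper's tooling): Python's html.escape with quote=False is adequate for an HTML-body context but is the wrong encoder for a quoted attribute, where an unescaped double quote lets an attacker break out of the attribute.

```python
import html

attack = '" onmouseover="alert(1)'

def render_attribute(value, encoder):
    """Template that interpolates untrusted data into a quoted attribute."""
    return '<input value="%s">' % encoder(value)

body_encoder = lambda s: html.escape(s, quote=False)   # wrong for attributes
attr_encoder = lambda s: html.escape(s, quote=True)    # correct here

print(render_attribute(attack, body_encoder))
# <input value="" onmouseover="alert(1)">   -> injected event handler!
print(render_attribute(attack, attr_encoder))
# <input value="&quot; onmouseover=&quot;alert(1)">   -> inert
```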
The premise of this year's SafeConfig Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimens that will continuously and dynamically probe, attack, and 'test' the various resilient and active technologies. This adaptation and change in focus necessitates, at the very least, modification, and potentially wholesale new developments, to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation, and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable, and meaningful to both researchers and practitioners.
Software quality assessment models are either too broad, such as CMMI-DEV and SPICE, which cover the full software development life cycle (SDLC), or too narrow, such as TMMI and TPI, which focus on testing. Quality management, a main concern within the software industry, is broader than the concept of testing: the V-Model sets a broader view with the concepts of verification and validation, Quality Assurance (QA) is a still broader term that includes the quality of processes, and configuration audits add further scope. In parallel, there are less visible dimensions of quality not often addressed in traditional models, such as the business alignment of QA efforts. This paper compares the commonly accepted models related to software quality management and proposes a model that fills an empty space in this area. The paper provides some analysis of the concepts of maturity and capability levels and proposes some adaptations for quality management assessment.
In the last twenty years, with the growing use of Internet applications, web hacking activities have increased rapidly. Organizations face very significant challenges in securing their web applications against rising cyber threats, since compromising on protection is not reasonable. Vulnerability Assessment and Penetration Testing (VAPT) techniques help them seek out security loopholes that could otherwise be exploited by attackers to launch attacks on technical assets; it is therefore necessary to ascertain these vulnerabilities and install security patches. VAPT also helps an organization determine whether its security arrangements are working properly. This paper aims to give an overview of the various techniques used in VAPT. It also focuses on raising cyber security awareness, and its importance at various levels of an organization, so that the organization adopts the required up-to-date security measures and stays protected from cyber-attacks.
Massive Open Online Courses (MOOCs) provide a unique opportunity to reach students who would not normally be reached, by alleviating the need to be physically present in the classroom. However, teaching software security coursework outside of a classroom setting can be challenging. What are the challenges when converting security material from an on-campus course to the MOOC format? The goal of this research is to assist educators in constructing software security coursework by providing a comparison of classroom courses and MOOCs. In this work, we compare demographic information, student motivations, and student results from an on-campus software security course and a MOOC version of the same course. We found that the two populations of students differed, with the MOOC reaching a more diverse set of students than the on-campus course. We also found that students in the on-campus course had higher quiz scores, on average, than students in the MOOC. Finally, we document our experience running the courses and what we would do differently to assist future educators constructing similar MOOCs.
Context: While successful conventional software development regularly employs separate testing staff, there are successful agile teams with as well as without separate testers. Question: How does successful agile development work without separate testers? What are advantages and disadvantages? Method: A case study, based on Grounded Theory evaluation of interviews and direct observation of three agile teams; one having separate testers, two without. All teams perform long-term development of parts of e-business web portals. Results: Teams without testers use a quality experience work mode centered around a tight field-use feedback loop, driven by a feeling of responsibility, supported by test automation, resulting in frequent deployments. Conclusion: In the given domain, hand-overs to separate testers appear to hamper the feedback loop more than they contribute to quality, so working without testers is preferred. However, Quality Experience is achievable only with modular architectures and in suitable domains.
Customer Edge Switching (CES) is an experimental Internet architecture that provides reliable and resilient multi-domain communications. It provides resilience against security threats because domains negotiate inbound and outbound policies before admitting new traffic. As CES and its signalling protocols are being prototyped, there is a need for independent testing of the CES architecture. Hence, our research goal is to develop an automated test framework that CES protocol designers and early adopters can use to improve the architecture. The test framework includes security, functional, and performance tests. In this paper, we present this automated security test framework, built using the Robot Framework and STRIDE analysis. By evaluating sample test scenarios, we show that the Robot Framework and our CES test suite have prompted productive discussions about this new architecture, in addition to serving as clear, easy-to-read documentation. Our research also confirms that test automation can be useful for improving new protocol architectures and validating their implementation.
A3 is an execution management environment that aims to make network-facing applications and services resilient against zero-day attacks. A3 recently underwent two adversarial evaluations of its defensive capabilities. In one, A3 defended an App Store used in a Capture the Flag (CTF) tournament, and in the other, a tactically relevant network service in a red team exercise. This paper describes the A3 defensive technologies evaluated, the evaluation results, and the broader lessons learned about evaluations for technologies that seek to protect critical systems from zero-day attacks.
This paper describes multiple system security engineering techniques for assessing system security vulnerabilities and discusses the application of these techniques at different system maturity points. The proposed vulnerability assessment approach allows a systems engineer to identify and assess vulnerabilities early in the life cycle and to continually increase the fidelity of the vulnerability identification and assessment as the system matures.