Bibliography
The majority of applications prompt for a username and password. Passwords are recommended to be unique, long, complex, alphanumeric, and non-repetitive. The very properties that make passwords secure, however, can become a point of weakness: a complex password is a challenge to remember, so a user may choose to record it, compromising its security and negating its advantage. An alternative security method is keystroke biometrics, which uses a user's natural typing pattern for authentication. This paper proposes a new method for reducing error rates and creating a robust technique. The method uses multiple sensors to obtain information about a user, and an artificial neural network both to model the user's behavior and to retrain the system. An alternate user verification mechanism is provided in case a user is unable to match their typing pattern.
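A minimal sketch of how such a pipeline might look, assuming dwell/flight timing features and a small scikit-learn neural network; the feature set, network size, and training data below are illustrative placeholders, not the paper's actual design:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def timing_features(key_events):
    """key_events: list of (key, press_time, release_time) tuples for one sample."""
    dwell = [r - p for _, p, r in key_events]              # hold time per key
    flight = [key_events[i + 1][1] - key_events[i][2]      # release-to-next-press gap
              for i in range(len(key_events) - 1)]
    return np.array(dwell + flight)

# Placeholder vectors standing in for timing_features output;
# label 1 = genuine user, 0 = impostor.
rng = np.random.default_rng(0)
X = rng.random((200, 19))
y = rng.integers(0, 2, 200)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print("genuine-user probability:", model.predict_proba(X[:1])[0, 1])
```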
In this paper we use car games as a simulator for real automobiles and generate driving logs that contain vehicle data, including values for parameters such as gear used, speed, left turns taken, right turns taken, acceleration, and braking. From these parameters we derive additional parameters and analyze them. Because the input from the driver is only routine driving, no explicit feedback is required, which improves the chances of accurately profiling the driver. Experimentation and analysis of this logged data show that driver profiling from vehicle data is feasible. Since the profiles are unique, they can be used for a wide range of applications and successfully exhibit the typical driving characteristics of each user.
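To make the derivation of additional parameters concrete, here is a hedged sketch using pandas; the column names, the hard-brake threshold, and the three derived features are assumptions chosen for illustration:

```python
import pandas as pd

# Tiny stand-in driving log; real logs would span whole sessions.
log = pd.DataFrame({
    "speed":      [42, 55, 61, 58, 30, 12, 0],
    "brake":      [0.0, 0.1, 0.0, 0.6, 0.9, 0.4, 0.0],
    "left_turn":  [0, 1, 0, 0, 0, 1, 0],
    "right_turn": [0, 0, 0, 1, 0, 0, 0],
})

profile = {
    "mean_speed":        log["speed"].mean(),
    "harsh_brake_ratio": (log["brake"] > 0.5).mean(),  # fraction of hard-brake samples
    "turns_per_sample":  (log["left_turn"] + log["right_turn"]).mean(),
}
print(profile)  # compact fingerprint that can be compared across drivers
```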
This paper presents a model calibration algorithm for the modulated wideband converter (MWC) with a non-ideal analog lowpass filter (LPF). The presented technique uses a test signal to estimate the finite impulse response (FIR) of the practical non-ideal LPF, and then designs a digital compensation filter to calibrate the estimated FIR filter in the digital domain. At the cost of a moderate oversampling rate, the calibrated filter performs as an ideal LPF. The calibrated model uses the MWC system with the non-ideal LPF to capture samples of the underlying signal, which are then filtered by the digital compensation filter. Experimental results indicate that, without any change to the MWC architecture, the proposed algorithm obtains samples equivalent to those of a standard MWC with an ideal LPF, and the signal can be reconstructed with overwhelming probability.
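The calibration idea can be sketched in a few lines: estimate the LPF's FIR response from a known test signal by least-squares deconvolution, then build a regularized inverse as the digital compensation filter. The tap count, FFT size, regularization constant, and the stand-in filter below are assumptions, not the paper's parameters:

```python
import numpy as np

def estimate_fir(test_in, test_out, taps=32):
    """Least-squares deconvolution: solve test_out ~ conv(test_in, h) for h."""
    A = np.array([[test_in[n - k] if 0 <= n - k < len(test_in) else 0.0
                   for k in range(taps)] for n in range(len(test_out))])
    h, *_ = np.linalg.lstsq(A, test_out, rcond=None)
    return h

def compensation_filter(h, n_fft=256, eps=1e-3):
    """Regularized frequency-domain inverse of the estimated LPF response."""
    H = np.fft.rfft(h, n_fft)
    return np.fft.irfft(np.conj(H) / (np.abs(H) ** 2 + eps), n_fft)

rng = np.random.default_rng(0)
h_true = np.hanning(8); h_true /= h_true.sum()        # stand-in non-ideal LPF
test_in = rng.normal(size=128)
h_est = estimate_fir(test_in, np.convolve(test_in, h_true))
g = compensation_filter(h_est)

# The combined response LPF*compensator should be ~1 inside the passband:
combined = np.fft.rfft(h_est, 256) * np.fft.rfft(g)
print(np.abs(combined[:16]).round(3))
```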
Taiwan has become the frontline in an emerging cyberspace battle; cyberattacks originating from different countries have been reported constantly over the past decades. Advanced Persistent Threat (APT) incidents are analyzed through the golden-triangle components (people, process, and technology) to ensure the applicability of digital forensics. This study presents a novel People-Process-Technology-Strategy (PPTS) model that implements a triage investigative step to identify evidence dynamics in digital data and essential information in audit logs. The result of this study is expected to improve APT investigation. The investigation scenario of the proposed methodology is illustrated by applying it to several APT incidents in Taiwan.
The increased use of Android devices and their open-source development framework have attracted many digital crime groups to use Android devices as a key attack surface. Due to their extensive connectivity and multiple sources of network connections, Android devices are highly susceptible to botnet-based malware attacks. This research focuses on developing a cloud-based Android botnet malware detection system. A prototype of the proposed system is deployed to provide runtime Android malware analysis. The paper explains the architectural implementation of the developed system, using a botnet detection learning dataset and a multi-layered algorithm to predict the botnet family of a particular application.
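The paper's multi-layered algorithm is not specified here, so the following is only a speculative sketch of a generic two-stage pipeline (flag botnet vs. benign, then predict family) using scikit-learn random forests on placeholder features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 40))             # placeholder app features (permissions, traffic stats, ...)
is_botnet = rng.integers(0, 2, 300)   # layer-1 labels: botnet vs. benign
family = rng.integers(0, 5, 300)      # layer-2 labels: 5 hypothetical botnet families

layer1 = RandomForestClassifier(random_state=0).fit(X, is_botnet)
layer2 = RandomForestClassifier(random_state=0).fit(X[is_botnet == 1], family[is_botnet == 1])

def classify(app_features):
    """Layer 1 screens the app; layer 2 names the family only if it is flagged."""
    if layer1.predict([app_features])[0] == 1:
        return f"botnet, family {layer2.predict([app_features])[0]}"
    return "benign"

print(classify(X[0]))
```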
The rate at which cyber-attacks are increasing globally paints a terrifying picture. The main dynamics of such attacks can be studied in terms of the actions of attackers and defenders in a cyber-security game; however, little research has so far investigated these interactions. In this paper we use behavioral game theory to investigate the role of certain actions taken by attackers and defenders in a simulated cyber-attack scenario of defacing a website. We choose a Reinforcement Learning (RL) model to represent a simulated attacker and defender in a 2×4 cyber-security game where each of the 2 players could take up to 4 actions. Pairs of model participants were computationally simulated across 1000 simulations, with each pair playing at most 30 rounds. The attacker's goal was to deface the website; the defender's goal was to prevent this. Our results show that the actions taken by both attackers and defenders are a function of the attention these roles pay to their recently obtained outcomes: if the attacker pays more attention to recent outcomes, they are more likely to perform attack actions. We discuss the implications of our results for the evolution of dynamics between attackers and defenders in cyber-security games.
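A toy version of the recency mechanism, assuming value updates with a learning rate (the "attention to recent outcomes") and softmax action selection; the payoff matrix, rates, and temperature are placeholders rather than the study's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(0)
payoff_attacker = rng.uniform(-1, 1, size=(4, 4))   # hypothetical 4x4 payoffs
ALPHA_ATT, ALPHA_DEF = 0.8, 0.3                     # high vs. low recency weighting

def softmax(q, tau=0.25):
    e = np.exp((q - q.max()) / tau)
    return e / e.sum()

q_att, q_def = np.zeros(4), np.zeros(4)
for _ in range(30):                                 # one 30-round game
    a = rng.choice(4, p=softmax(q_att))
    d = rng.choice(4, p=softmax(q_def))
    r = payoff_attacker[a, d]
    q_att[a] += ALPHA_ATT * (r - q_att[a])          # attacker tracks recent outcomes
    q_def[d] += ALPHA_DEF * (-r - q_def[d])         # zero-sum defender update
print("attacker action preferences:", softmax(q_att).round(2))
```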
In the Internet-of-Things (IoT), users might share part of their data with different IoT prosumers, which offer applications or services. Within this open environment, the existence of an adversary introduces security risks. These can be related, for instance, to the theft of user data, and they vary depending on the security controls that each IoT prosumer has put in place. To minimize such risks, users might seek an "optimal" set of prosumers. However, assuming the adversary has the same information as the users about the existing security measures, he can infer which prosumers will be preferred (e.g., those with the highest security levels) and attack them more intensively. This paper proposes a decision-support approach that minimizes security risks in the above scenario. We propose a non-cooperative, two-player game entitled the Prosumers Selection Game (PSG). The Nash Equilibria of PSG determine subsets of prosumers that optimize users' payoffs. We refer to any game solution as the Nash Prosumers Selection (NPS), a vector of probabilities over subsets of prosumers. We show that when using NPS, a user faces the least expected damage. Additionally, we show that under NPS every prosumer, even the least secure one, is selected with some non-zero probability. We have also performed simulations comparing NPS against two heuristic selection algorithms; NPS proves to be approximately 38% more effective in terms of security-risk mitigation.
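For intuition, a small sketch of computing such a mixed-strategy solution, modeling user vs. adversary as a zero-sum game over four hypothetical prosumer subsets and solving the standard linear program with SciPy; the damage values are invented, but the resulting mix gives every subset non-zero probability, echoing the NPS property:

```python
import numpy as np
from scipy.optimize import linprog

# Rows: user's choice of prosumer subset; columns: adversary's attack target.
# Entries: user payoff (negated damage); attacking the chosen subset hurts most.
A = -(0.2 * np.ones((4, 4)) + 0.6 * np.eye(4))
m, n = A.shape

# Variables: x (probabilities over subsets) and v (game value); maximize v
# subject to v <= x . A[:, j] for every adversary action j, sum(x) = 1.
c = np.zeros(m + 1); c[-1] = -1.0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
res = linprog(c, A_ub, b_ub, A_eq, [1.0],
              bounds=[(0, 1)] * m + [(None, None)])
print("selection probabilities:", res.x[:m].round(3))  # all strictly positive here
```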
Security decision-making is a critical task in addressing security threats that affect a system or process. It often involves selecting a suitable resolution action to tackle an identified security risk. To support this selection process, decision-makers should be able to evaluate and compare the available decision options. This article introduces a modelling language that can be used to represent the effects of resolution actions on the stakeholders' goals, the crime process, and the attacker. To this end, we develop a multidisciplinary framework that combines existing knowledge from the fields of software engineering, crime science, risk assessment, and quantitative decision analysis. The framework is illustrated through an application to a case of identity theft.
We provide a generic framework that, with the help of a preprocessing phase independent of the users' inputs, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification and employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that, when preprocessing efforts are excluded, is both more secure and significantly more efficient than previously known implementations of such systems. We suggest different alternatives for preprocessing and discuss their merits and demerits.
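A toy sketch of the two-server pattern under additive secret sharing, where the input-independent preprocessing produces a Beaver multiplication triple; the modulus and single-multiplication scope are simplifying assumptions, not the paper's full protocol:

```python
import random

P = 2**61 - 1                                    # prime modulus for shares

def share(x):                                    # split x into two random shares
    r = random.randrange(P)
    return r, (x - r) % P

def beaver_triple():                             # preprocessing: shares of a, b, a*b
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)

def secure_mul(x_sh, y_sh):
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    # Servers publish masked differences; these reveal nothing about x or y.
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P        # d = x - a
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P        # e = y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % P      # server 0's result share
    z1 = (c1 + d * b1 + e * a1) % P              # server 1's result share
    return (z0 + z1) % P

x, y = 1234, 5678
print(secure_mul(share(x), share(y)) == x * y % P)   # True
```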
The term "Advanced Persistent Threat" refers to a well-organized, malicious group of people who launch stealthy attacks against the computer systems of specific targets, such as governments, companies, or militaries. The attacks themselves are long-lasting, difficult to expose, and often use very advanced hacking techniques. Since they are advanced, prolonged, and persistent, the organizations behind them must possess a high level of knowledge, advanced tools, and competent personnel to execute them. The attacks are usually performed in several phases: reconnaissance, preparation, execution, gaining access, information gathering, and connection maintenance. In each phase, attacks can be detected with different probabilities. There are several ways to increase an organization's level of security against these incidents. First and foremost, users and system administrators must be educated about different attack vectors and given the knowledge and protection needed to make attacks unsuccessful. Second, strict security policies should be implemented, including access control and restrictions (to information or the network), encryption of information, and installation of the latest security upgrades. Finally, software IDS tools (e.g., Snort, OSSEC, Sguil) can be used to detect such anomalies.
In response to the critical challenges of the current Internet architecture and its protocols, a set of so-called clean-slate designs has been proposed. Common among them is an addressing scheme that separates location and identity with self-certifying, flat, and non-aggregatable address components. Each component is long, reaching a few kilobits, and would consume an amount of fast memory in data-plane devices (e.g., routers) far beyond existing capacities. To address this challenge, we present Caesar, a high-speed and length-agnostic forwarding engine for future border routers that performs most lookups within three fast memory accesses. To compress forwarding states, Caesar constructs scalable and reliable Bloom filters in Ternary Content Addressable Memory (TCAM). To guarantee correctness, Caesar detects false positives at high speed and uses a blacklisting approach to handle them. In addition, we optimize our design with a hashing scheme, based on hash coding theory, that reduces the number of hash computations from k to log(k) per lookup. We handle routing updates while keeping the filters highly utilized under address removals. We perform extensive analysis and simulations using real traffic and routing traces to demonstrate the benefits of our design. Our evaluation shows that Caesar is up to 67% more energy-efficient and 43% less expensive (in terms of total cost) than optimized IPv6 TCAM-based solutions. In addition, the total cost of our design is approximately the same for various address lengths.
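Caesar's log(k) hash-coding scheme is not reproduced here, but the flavor of cutting per-lookup hash computations can be shown with the related Kirsch-Mitzenmacher trick, which derives all k Bloom-filter probe positions from just two base hashes; the sizes and the SHA-256 choice are illustrative:

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k=8):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _indices(self, item):
        # Two base hashes from one digest; probe i uses h1 + i*h2 (mod m).
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):               # may return false positives
        return all(self.bits[idx // 8] >> (idx % 8) & 1
                   for idx in self._indices(item))

bf = BloomFilter()
bf.add("self-certifying-address-0xabc")
print("self-certifying-address-0xabc" in bf)    # True
print("unknown-address" in bf)                  # almost surely False
```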
Detecting attacks that exploit unknown security vulnerabilities is a challenging problem. Timely detection of attacks based on hitherto unknown vulnerabilities is crucial for protecting other users and systems from being affected as well. Knowing the attributes of a novel attack's target system can support automated reconfiguration of firewalls and the sending of alerts to administrators of other vulnerable targets. We suggest a novel approach to post-incident intrusion detection that utilizes information gathered from real-time social media streams. To accomplish this, we take advantage of social media users posting about incidents that affect their accounts on attacked target systems or their observations about misbehaving online services. Combining knowledge of the attacked systems and the reported incidents, we should be able to recognize patterns that define the attributes of vulnerable systems. By matching detected attribute sets with the attributes of well-known attacks, we should furthermore be able to link attacks to existing entries in the Common Vulnerabilities and Exposures database. If no link to an existing entry is found, we can assume that we have detected the exploitation of an unknown vulnerability, i.e., a zero-day exploit or the result of an advanced persistent threat. This finding could also be used to direct efforts toward examining the vulnerabilities of attacked systems and therefore lead to faster patch deployment.
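The matching step might look like the following hedged sketch, comparing the mined attribute set against per-CVE attribute sets by Jaccard similarity; the attribute names, placeholder CVE identifiers, and the 0.5 threshold are all assumptions:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

detected = {"wordpress", "php5", "login-bypass", "apache"}
known_cves = {                       # hypothetical entries, not real CVE IDs
    "CVE-XXXX-0001": {"wordpress", "php5", "login-bypass"},
    "CVE-XXXX-0002": {"iis", "asp.net", "rce"},
}

best = max(known_cves, key=lambda c: jaccard(detected, known_cves[c]))
if jaccard(detected, known_cves[best]) < 0.5:   # no plausible match:
    print("possible zero-day / APT artifact")   # unknown vulnerability
else:
    print("matches", best)
```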
Ensuring system survivability in the wake of advanced persistent threats is a major challenge the security community faces in protecting critical infrastructure. In this paper, we define metrics and models for assessing coordinated massive malware campaigns targeting critical infrastructure sectors. First, we develop an analytical model that captures the effect of neighborhood on different metrics (infection probability and contagion probability). Then, we assess the impact of putting operational but possibly infected nodes into quarantine. Finally, we study the implications of scanning nodes for early detection of malware (e.g., worms), accounting for false positives and false negatives. Evaluating our methodology on a small four-node topology, we find that malware infections can be effectively contained by using quarantine and appropriate rates of scanning, at the cost of only soft impacts.
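A Monte-Carlo sketch of the qualitative model on a four-node topology, with infection spreading along edges, scanning subject to false negatives, and detected nodes quarantined; all rates below are invented for illustration, not the paper's calibrated values:

```python
import random

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]     # small four-node ring topology
P_SPREAD, P_SCAN, P_FALSE_NEG = 0.4, 0.5, 0.2

def simulate(steps=20):
    infected, quarantined = {0}, set()
    for _ in range(steps):
        for u, v in EDGES:                   # contagion between neighbors
            if u in infected and v not in quarantined and random.random() < P_SPREAD:
                infected.add(v)
            if v in infected and u not in quarantined and random.random() < P_SPREAD:
                infected.add(u)
        for node in list(infected):          # scanning with false negatives
            if random.random() < P_SCAN * (1 - P_FALSE_NEG):
                quarantined.add(node)        # detected -> put into quarantine
    return len(infected - quarantined)

runs = [simulate() for _ in range(1000)]
print("mean residual infections:", sum(runs) / len(runs))
```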
Smartwatches with motion sensors are becoming a common utility for users. Despite the increasing popularity of practical wearable computers, and in particular smartwatches, the security risks linked to the sensors on board these devices have yet to be fully explored. Recent research has demonstrated the capability of using a smartphone's own accelerometer and gyroscope to infer tap locations; this paper expands on that work to demonstrate a method for inferring smartphone PINs through the analysis of smartwatch motion sensors. The study determines the feasibility and accuracy of inferring user keystrokes on a smartphone through a smartwatch worn by the user. Specifically, we show that with malware accessing only the smartwatch's motion sensors, it is possible to recognize user activity and specific numeric keypad entries. In a controlled scenario, we achieve PIN-prediction accuracies from 41% up to 92% within 5 guesses.
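A hedged sketch of such an inference pipeline, assuming per-tap accelerometer windows, simple statistical features, and a random-forest classifier whose ranked outputs give a top-5 guess list; none of these choices are claimed to be the paper's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc):                    # acc: (n_samples, 3) x/y/z window
    return np.concatenate([acc.mean(0), acc.std(0), acc.min(0), acc.max(0)])

rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(size=(50, 3))) for _ in range(500)])
y = rng.integers(0, 10, 500)                 # placeholder digit labels 0-9

clf = RandomForestClassifier(random_state=0).fit(X, y)
probs = clf.predict_proba(X[:1])[0]
top5 = np.argsort(probs)[::-1][:5]           # "within 5 guesses" as in the study
print("top-5 digit guesses:", top5)
```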
The aim of this research is to advance active user authentication using keystroke dynamics. We assess the performance and influence of various keystroke features on keystroke-dynamics authentication systems; in particular, we investigate the performance of keystroke features on a subset of the most frequently used English words. Four features are analyzed: i) key duration, ii) flight-time latency, iii) digraph time latency, and iv) word total time duration. Two machine learning techniques, the support vector machine (SVM) and the k-nearest neighbor classifier (k-NN), are employed for assessing keystroke authentication. Experimental data were logged for 28 users. The results show that key duration offers the best performance among the four keystroke features, followed by word total time.
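The four timing features can be made concrete as follows; the event format and training data are placeholders, while the two classifiers match those named in the study:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def word_features(events):
    """events: list of (press_time, release_time) per key of one typed word."""
    key_duration = np.mean([r - p for p, r in events])
    flight = np.mean([events[i+1][0] - events[i][1] for i in range(len(events)-1)])
    digraph = np.mean([events[i+1][0] - events[i][0] for i in range(len(events)-1)])
    total = events[-1][1] - events[0][0]
    return [key_duration, flight, digraph, total]

rng = np.random.default_rng(1)
X = rng.random((280, 4)); y = rng.integers(0, 28, 280)   # placeholder: 28 users
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for name, clf in [("SVM", SVC()), ("k-NN", KNeighborsClassifier())]:
    print(name, "accuracy:", clf.fit(Xtr, ytr).score(Xte, yte))
```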
The future of ambient assisted living (AAL), and eHealthcare especially, depends largely on the smart objects that form part of the Internet of Things (IoT). In our AAL scenario, these objects collect and transfer real-time information about patients to the hospital server with the help of a Wireless Mesh Network (WMN). Due to the multi-hop nature of mesh networks, an adversary can reroute network traffic via various denial-of-service (DoS) attacks and thereby disrupt the correct functioning of the mesh routing protocol. In this paper, based on a comparative study, we choose the most suitable secure mesh routing protocol for IoT-based AAL applications and analyze its resilience against DoS attacks. Focusing on the hello flooding attack, the protocol is simulated and analyzed in terms of data packet delivery ratio, delay, and throughput. Simulation results show that the chosen protocol is fully resilient against this DoS attack and is one of the best candidates for secure routing in IoT-based AAL applications.
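The three evaluation metrics are straightforward to compute from a simulation trace; the trace format below (sent/received records with timestamps and sizes) is assumed for illustration:

```python
sent     = [(1, 0.00), (2, 0.05), (3, 0.10), (4, 0.15)]        # (pkt_id, t_sent)
received = [(1, 0.03, 512), (2, 0.09, 512), (4, 0.21, 512)]    # (pkt_id, t_recv, bytes)

sent_times = dict(sent)
pdr = len(received) / len(sent)                                 # packet delivery ratio
delay = sum(t - sent_times[i] for i, t, _ in received) / len(received)
duration = max(t for _, t, _ in received) - min(t for _, t in sent)
throughput = sum(b for _, _, b in received) * 8 / duration      # bits per second

print(f"PDR={pdr:.2f}, mean delay={delay*1000:.1f} ms, throughput={throughput:.0f} bit/s")
```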
Advanced persistent threat (APT) is becoming a major threat to cyber security. As APT attacks are often launched by well-funded entities that are persistent and stealthy in achieving their goals, they are highly challenging to combat in a cost-effective way. The situation becomes even worse when a sophisticated attacker is further assisted by an insider with privileged access to inside information. Although stealthy attacks and insider threats have been considered separately in previous work, the coupling of the two is not well understood. As both types of threats are incentive-driven, game theory provides a proper tool for understanding the fundamental tradeoffs involved. In this paper, we propose the first three-player attacker-defender-insider game to model the strategic interactions among the three parties. Our game extends the two-player FlipIt game model for stealthy takeover by introducing an insider that can trade information to the attacker for a profit. We characterize the subgame perfect equilibria of the game, with the defender as the leader and the attacker and the insider as the followers, under two different information-trading processes. We make various observations and discuss approaches for achieving more efficient defense in the face of both APT and insider threats.
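A toy FlipIt-style simulation for intuition only: both players flip control of a resource at fixed rates, and an insider tip is modeled simply as a lower attacker flip cost; all rates, costs, and the trading rule are assumptions, far simpler than the paper's equilibrium analysis:

```python
import random

def play(def_rate, att_rate, insider_active, steps=10_000):
    owner, def_util, att_util = "D", 0.0, 0.0
    att_cost = 0.5 if insider_active else 1.0      # insider info cheapens attacks
    for _ in range(steps):
        if random.random() < def_rate:
            owner, def_util = "D", def_util - 0.2  # defender pays its flip cost
        if random.random() < att_rate:
            owner, att_util = "A", att_util - att_cost
        if owner == "D": def_util += 0.01          # utility accrues to the controller
        else:            att_util += 0.01
    return def_util, att_util

print("no insider: ", play(0.01, 0.01, False))
print("with insider:", play(0.01, 0.01, True))
```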
Compressive sampling and sparse reconstruction theory is applied to a linearly frequency-modulated continuous-wave hybrid lidar/radar system. The goal is to show that high-resolution time-of-flight measurements to underwater targets can be obtained using far fewer samples than dictated by the Nyquist sampling theorem. Traditional mixing/down-conversion and matched-filter signal processing methods are reviewed and compared to the compressive sampling and sparse reconstruction methods. Simulations demonstrate the possible sampling-rate reductions, and experiments are used to observe the effects that turbid underwater environments have on recovery. Results show that, by using compressive sensing theory and sparse reconstruction, it is possible to achieve significant sample-rate reduction while maintaining centimeter range resolution.
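The core compressive-sensing step can be sketched as recovering a sparse range profile from far fewer random measurements than the signal length, here via orthogonal matching pursuit from scikit-learn; the dimensions, sparsity level, and sensing matrix are illustrative, not the system's actual front end:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 3                      # signal length, measurements, targets

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [1.0, 0.6, 0.3]   # sparse target returns
Phi = rng.normal(size=(m, n)) / np.sqrt(m)             # random sensing matrix
y = Phi @ x                                            # m << n compressed samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
print("recovered target bins:", np.flatnonzero(omp.coef_))
print("true target bins:     ", np.flatnonzero(x))
```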
This paper proposes a technique for improving the transmission capacity of an optical wireless orthogonal frequency division multiplexing (OFDM) link based on a visible light-emitting diode (LED). The original OFDM signal, encoded with multilevel digital modulations such as quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM), is converted into a sparse signal and compressed via adaptive sampling with an inverse discrete cosine transform, while its error-free reconstruction is implemented using L1 minimization based on Bayesian compressive sensing (CS). For QPSK symbols, the transmission capacity of the optical wireless OFDM link increased from 31.12 Mb/s to 51.87 Mb/s at a compression ratio of 40%, while for 16-QAM symbols it improved from 62.5 Mb/s to 78.13 Mb/s at a compression ratio of 20%, with error-free wireless transmission (forward error correction limit: bit error rate of 10⁻³).
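A quick arithmetic check of the reported gains: compressing the payload by ratio r lets the same physical link carry 1/(1-r) times the effective data rate, which reproduces the paper's numbers up to rounding:

```python
# Effective capacity after compressing the OFDM payload by ratio r:
# the same physical rate carries 1/(1-r) times the data.
for base_rate, r in [(31.12, 0.40), (62.5, 0.20)]:   # Mb/s and compression ratio
    print(f"{base_rate} Mb/s -> {base_rate / (1 - r):.3f} Mb/s at {r:.0%} compression")
# 31.12 -> 51.867 (~51.87, QPSK) and 62.5 -> 78.125 (~78.13, 16-QAM)
```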