Biblio
How to construct good linear codes is an important problem in coding theory. This paper considers the construction of linear codes from functions with two variables, presents a class of two-weight and three-weight ternary linear codes, and employs Gauss sums and exponential sums to determine the parameters and weight distributions of these codes. Linear codes with few weights have applications in consumer electronics, communication, and data storage systems. Linear codes with two weights have applications in strongly regular graphs, and linear codes with three weights can be applied in association schemes.
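For orientation (an illustrative aside, not drawn from the paper): a two-weight ternary [n, k] code has nonzero codewords of exactly two Hamming weights w_1 < w_2, so its weight enumerator takes the form

```latex
% Weight enumerator of a two-weight ternary [n, k] code;
% A_w denotes the number of codewords of Hamming weight w.
W(x, y) = x^n + A_{w_1} x^{n - w_1} y^{w_1} + A_{w_2} x^{n - w_2} y^{w_2},
\qquad 1 + A_{w_1} + A_{w_2} = 3^k .
```

Determining the weight distribution amounts to computing the pairs (w_i, A_{w_i}), which is where the Gauss and exponential sum machinery enters.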
The applications of 3D Virtual Environments are taking giant leaps with more sophisticated 3D user interfaces and immersive technologies. Interactive 3D and Virtual Reality platforms present a great opportunity for data analytics, as they can represent large amounts of data to support human decision making and insight. For any of these to be effective, it is essential to understand the characteristics of these interfaces in displaying different types of content. Text is an essential and widespread type of content, and legibility is an important criterion for determining the style, size, and quantity of the text to be displayed. This study evaluates the maximum amount of text per visual angle, that is, the maximum density of text that remains legible in a virtual environment displayed on different platforms. We used Extensible 3D (X3D) to provide portable (cross-platform) stimuli. The results presented here are based on a user study conducted on DeepSix (a tiled LCD display with 5750×2400 resolution) and in the Hypercube (an immersive CAVE-style active stereo projection system with three walls and a floor, at 2560×2560 pixels of active stereo per wall). We found that more legible text can be displayed on an immersive projection due to its larger Field of Regard; in the immersive case, stereo versus monoscopic rendering did not have a significant effect on legibility.
In our era, most communication between people takes the form of electronic messages, especially through smart mobile devices. As such, the written text exchanged suffers from poor punctuation, misspelled words, long runs of words without spaces, tables, internet addresses, etc., which make traditional text analytics methods difficult or impossible to apply without serious effort to clean the dataset. The method proposed in this paper can work on massive, noisy, and scrambled texts with minimal preprocessing: special characters and spaces are removed to create a continuous string, and all repeated patterns are detected very efficiently using the Longest Expected Repeated Pattern Reduced Suffix Array (LERP-RSA) data structure and a variant of the All Repeated Patterns Detection (ARPaD) algorithm. Meta-analyses of the results can further assist a digital forensics investigator in detecting important information in the chunk of text analyzed.
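As a hedged illustration of the underlying idea (suffix-array-based detection of repeated patterns after aggressive normalization), and explicitly not the authors' LERP-RSA/ARPaD implementation, consider this minimal Python sketch:

```python
# Minimal sketch: find repeated patterns in a normalized string via a
# (naive) suffix array. Illustrates the principle only; the real LERP-RSA
# and ARPaD algorithms are designed to scale to massive texts.

def repeated_patterns(text: str, min_len: int = 2) -> set[str]:
    # Naive suffix array: indices of all suffixes, lexicographically sorted.
    suffixes = sorted(range(len(text)), key=lambda i: text[i:])
    patterns = set()
    # Adjacent suffixes in sorted order share their longest common prefix;
    # every such prefix occurs at least twice in the text.
    for a, b in zip(suffixes, suffixes[1:]):
        lcp = 0
        while (a + lcp < len(text) and b + lcp < len(text)
               and text[a + lcp] == text[b + lcp]):
            lcp += 1
        for length in range(min_len, lcp + 1):
            patterns.add(text[a:a + length])
    return patterns

# Preprocessing as described in the abstract: strip specials and spaces
# into one continuous string before pattern detection.
noisy = "the cat   sat, the cat ran!"
clean = "".join(ch for ch in noisy.lower() if ch.isalnum())
print(sorted(repeated_patterns(clean, min_len=4)))
```

The naive construction above is quadratic; it serves only to show why a suffix-array view makes every repeated pattern discoverable in one sorted pass.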
Rapidly spreading mobile malware has become a serious issue in increasingly popular mobile networks. Most mobile malware relies on the network interface to coordinate operations, steal users' private information, and launch attack activities. In this paper, we propose TextDroid, an effective and automated malware detection method combining natural language processing and machine learning. TextDroid extracts distinguishable features (n-gram sequences) to characterize malware samples. A malware detection model is then developed to detect mobile malware using a Support Vector Machine (SVM) classifier. The trained SVM model shows superior performance on two different data sets, with a malware detection rate of 96.36% on the test set and 76.99% on an app set captured in the wild. In addition, we design a flow header visualization method to visualize the highlighted texts generated during the apps' network interactions, which assists security researchers in understanding the apps' complex network activities.
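A minimal sketch of the pipeline shape described here (n-gram features fed to an SVM), with fabricated flow strings and labels rather than the paper's data sets:

```python
# Hedged sketch of a TextDroid-style pipeline: n-gram features extracted
# from network flow text, classified with a linear SVM. The flows and
# labels below are placeholders, not the paper's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

flows = [
    "GET /news/index HTTP/1.1 Host: news.example.com",        # hypothetical benign
    "POST /upload/contacts HTTP/1.1 Host: evil.example.net",  # hypothetical malicious
]
labels = [0, 1]  # 0 = benign, 1 = malware (illustrative)

# Word 1-3 grams as distinguishable features; the paper's exact n-gram
# extraction may differ.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 3), analyzer="word"),
    LinearSVC(),
)
model.fit(flows, labels)
print(model.predict(["POST /upload/contacts HTTP/1.1"]))  # classify a new flow header
```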
Modern vehicles are opening up, with wireless interfaces such as Bluetooth integrated to enable comfort and safety features. Furthermore, a plethora of aftermarket devices introduce additional connectivity that contributes to the driving experience. This connectivity opens the vehicle to potentially malicious attacks, which could have negative consequences for safety. In this paper, we survey vehicles with Bluetooth connectivity from a threat intelligence perspective to gain insight into conditions during real-world driving. We do this in two ways: firstly, by examining Bluetooth implementations in vehicles and gathering information from inside the cabin, and secondly, by war-nibbling (general monitoring and scanning for nearby devices). We find that as vehicle age decreases, the security (relatively speaking) of the Bluetooth implementation increases, but that there is still some technological lag in Bluetooth implementation in vehicles. We also find that a large proportion of vehicles and aftermarket devices still use legacy pairing (and are therefore less secure), and that these vehicles remain visible for sufficient time to mount an attack (assuming some premeditation and preparation). We demonstrate a real-world threat scenario as an example of the latter. Finally, we provide some recommendations on how the security risks we discover could be mitigated.
We define a number of threat models to describe the goals, the available information, and the actions characterising the behaviour of a possible attacker in multimedia forensic scenarios. We distinguish between an investigative scenario, wherein the forensic analysis is used to guide the investigative action, and a use-in-court scenario, wherein forensic evidence must be defended during a lawsuit. We argue that the goals and actions of the attacker in these two cases are very different, thus exposing the forensic analyst to different challenges. A distinction is also made between model-based techniques and techniques based on machine learning, showing how, in the latter case, the need to define a proper training set enriches the set of actions available to the attacker. Building on this analysis, we then introduce some game-theoretic models to describe the interaction between the forensic analyst and the attacker in the investigative and use-in-court scenarios.
The concept of Extension Headers, newly introduced with IPv6, is elusive and enables new types of threats on the Internet. Simply dropping all traffic containing any Extension Header - a current practice among operators - seemingly is an effective solution, but at the cost of possibly dropping legitimate traffic as well. To determine whether threats indeed occur, and to evaluate the actual nature of the traffic, measurement solutions need to be adapted. By implementing these specific parsing capabilities in flow exporters and performing measurements on two different production networks, we show that it is feasible to quantify the metrics directly related to these threats, and thus to allow for monitoring and detection. Analysing the traffic that is hidden behind Extension Headers, we find mostly benign traffic that directly affects end-user QoE: simply dropping all traffic containing Extension Headers is thus a bad practice with more consequences than operators might be aware of.
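The parsing capability in question is essentially walking the IPv6 extension-header chain. A minimal sketch (header layout per RFC 8200; the example packet bytes are fabricated):

```python
# Minimal sketch of walking an IPv6 extension-header chain, the kind of
# parsing a flow exporter needs. Layout per RFC 8200; packet bytes are
# fabricated for illustration.
import struct

# Next Header values that indicate an extension header to skip over.
EXT_HEADERS = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment",
               60: "Destination Options"}

def walk_ext_headers(ipv6_payload: bytes, first_next_header: int):
    chain, nh, off = [], first_next_header, 0
    while nh in EXT_HEADERS:
        chain.append(EXT_HEADERS[nh])
        nh, hdr_ext_len = struct.unpack_from("BB", ipv6_payload, off)
        if chain[-1] == "Fragment":
            off += 8                       # Fragment header is a fixed 8 bytes
        else:
            off += (hdr_ext_len + 1) * 8   # length field counts 8-octet units
    return chain, nh  # nh is now the upper-layer protocol (e.g. 6 = TCP)

# Example: a minimal 8-byte Hop-by-Hop header (nh=0) followed by TCP (6).
payload = bytes([6, 0]) + bytes(6)
print(walk_ext_headers(payload, 0))  # (['Hop-by-Hop'], 6)
```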
A novel optical fiber sensing network is proposed to eliminate the effect of multiple fiber failures. Simulation results show that if the number of breakpoints in each subnet is less than four, the optical routing paths can be reset to avoid those breakpoints by changing the status of optical switches in the remote nodes.
Small, local groups who share protected resources (e.g., families, work teams, student organizations) have unmet authentication needs. For these groups, existing authentication strategies either create unnecessary social divisions (e.g., biometrics), do not identify individuals (e.g., shared passwords), do not equitably distribute security responsibility (e.g., individual passwords), or make it difficult to share or revoke access (e.g., physical keys). To explore an alternative, we designed Thumprint: inclusive group authentication with a shared secret knock. All group members share one secret knock, but individual expressions of the secret are discernible. We evaluated the usability and security of our concept through two user studies with 30 participants. Our results suggest that (1) individuals who enter the same shared thumprint are distinguishable from one another, (2) people can enter thumprints consistently over time, and (3) thumprints are resilient to casual adversaries.
We consider Delay Tolerant Mobile Social Networks (DTMSNs), made up of wireless nodes with intermittent connections and clustered into social communities. The lack of infrastructure and the reliance on node mobility make routing a challenge. Network Coding (NC) is a generalization of routing and has been shown to bring a number of advantages over routing. We consider the problem of pollution attacks in these networks, which are a very important issue both for NC and for DTMSNs. Our first contribution is a protocol that allows controlling the adversary's capacity by combining cryptographic hash dissemination and error correction to ensure message recovery at the receiver. Our second contribution is a model of the performance of such a protection scheme. To build it, we adapt an inter-session NC model based on a fluid approximation of the dissemination process, and we provide a numerical validation of the model. We are eventually able to provide a workflow to set the correct parameters and counteract the attacks. We conclude by highlighting how these contributions can help secure a real-world DTMSN application (e.g., a smartphone app).
Network-connected embedded systems are growing on a large scale as a critical part of the Internet of Things, and these systems face an increasing risk of malware. Anomaly-based detection methods can detect malware in embedded systems effectively and, relative to signature-based detection methods, offer the advantage of detecting zero-day exploits, but existing approaches incur significant performance overheads and are susceptible to mimicry attacks. In this article, we present a formal runtime security model that defines the normal system behavior, including execution sequence and execution timing. Our anomaly detection method utilizes on-chip hardware to non-intrusively monitor system execution through the processor's trace port and detect malicious activity at runtime. We further analyze the properties of the timing distribution for control-flow events, and select a subset of monitoring targets using three selection metrics to meet hardware constraints. The detection method is evaluated on a network-connected pacemaker benchmark prototyped in FPGA and simulated in SystemC, with several mimicry attacks implemented at different levels. The resulting detection rate and false positive rate, under constraints on the number of monitored events supported in the on-chip hardware, demonstrate the good performance of our approach.
Circular statistics offer a new technique for analysing the time patterns of events in the field of cyber security. We apply this technique to analyse incidents of malware infections detected by network monitoring, focusing in particular on the daily and weekly variations of these events. Based on "live" data provided by Spamhaus, we examine the hypothesis that attacks on four countries are distributed uniformly over 24 hours, using the Rayleigh and Watson tests. While our results are mainly exploratory, we are able to demonstrate that the attacks are not uniformly distributed, nor do they follow a Poisson distribution as reported in other research. Our objective is to identify a distribution that can be used to establish risk metrics. Moreover, our approach provides a visual overview of the variation in time patterns, indicating when attacks are most likely; this will assist decision makers in cyber security in allocating resources or estimating the cost of system monitoring during high-risk periods. Our results also reveal that the time patterns are influenced by the total number of attacks: networks subject to a large volume of attacks exhibit bimodality, while one case, where attacks occurred at a relatively lower rate, showed a multi-modal daily variation.
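As a hedged sketch of the core technique: the Rayleigh test maps event times of day onto the unit circle and tests the mean resultant length against uniformity (the timestamps below are fabricated, not the Spamhaus data):

```python
# Sketch of the circular-statistics idea: hours of day -> angles on the
# unit circle, then the Rayleigh test for uniformity. Data are fabricated.
import numpy as np

hours = np.array([1.2, 2.5, 2.9, 3.1, 3.4, 22.8, 23.5])  # hypothetical event times
angles = 2 * np.pi * hours / 24.0       # hour of day -> radians

n = len(angles)
C, S = np.cos(angles).sum(), np.sin(angles).sum()
R = np.hypot(C, S) / n                  # mean resultant length
z = n * R**2                            # Rayleigh statistic
# Small-sample corrected p-value (standard approximation, e.g. Zar):
p = np.exp(-z) * (1 + (2*z - z**2) / (4*n)
                  - (24*z - 132*z**2 + 76*z**3 - 9*z**4) / (288*n**2))
print(f"R = {R:.3f}, z = {z:.3f}, p = {p:.4f}")  # small p => reject uniformity
```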
Cyber attacks occur on a near-daily basis and are becoming exponentially more common. While some research aims to detect the characteristics of an attack, little attention has been given to patterns of attacks in general. This paper exploits temporal correlations in the number of attacks per day in order to predict the future intensity of cyber incidents. Through analysis of attack data collected from Hackmageddon, correlation was found among reported attack volumes on consecutive days. This paper presents a forecasting system that predicts the number of cyber attacks on a given day based only on a set of historical attack count data. Our system conducts ARIMA time series forecasting on all previously collected incidents to predict the expected number of attacks on a future date, and it can also restrict itself to the subset of data relevant to a specific attack method. Prediction models are dynamically updated over time as new data are collected, improving accuracy. Our system outperforms naive forecasting methods by 14.1% when predicting attacks of any type, and by up to 21.2% when forecasting attacks of a specific type. It also produces a model that more accurately predicts future cyber attack intensity.
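A minimal sketch of the ARIMA forecasting step, assuming a series of daily attack counts (the numbers and model order below are fabricated choices, not Hackmageddon data or the paper's configuration):

```python
# Hedged sketch: fit an ARIMA model to daily attack counts and forecast
# the next day. Counts are fabricated; order (1, 0, 1) is illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

daily_counts = [12, 9, 15, 11, 14, 18, 13, 10, 16, 12, 17, 19, 14, 11]

# In a deployed system this fit would be redone as each new day's
# counts arrive, mirroring the dynamic model updates described above.
fit = ARIMA(np.asarray(daily_counts, dtype=float), order=(1, 0, 1)).fit()
print(fit.forecast(steps=1))  # expected number of attacks tomorrow
```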
In the age of IoT, as more and more devices connect to the internet through wireless networks, a better security infrastructure is required to protect these devices from massive attacks. SSIDs and passwords have long been used to authenticate and secure Wi-Fi networks, but the SSID and password combination is vulnerable to security exploits such as phishing and brute-forcing. In this paper, a completely automated Wi-Fi authentication system is proposed that generates Time-based One-Time Passwords (TOTP) to secure Wi-Fi networks. The approach aims to black-box the process of connecting to a Wi-Fi network for the user and to generate periodic secure passwords for the network without human intervention.
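For reference, TOTP generation itself follows RFC 6238; a minimal self-contained sketch (the shared secret below is illustrative, and the proposed system's rotation and distribution mechanics are not shown):

```python
# Sketch of RFC 6238 TOTP generation, the primitive the proposed Wi-Fi
# system rotates automatically. Secret, period, and digits are illustrative.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period            # time step (RFC 6238)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret
```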
Redundant capacity in filesystem timestamps has recently been proposed in the literature as an effective means of information hiding and data leakage. Here, we evaluate the steganographic capabilities of such channels and propose techniques to aid digital forensics investigations in identifying and detecting manipulated filesystem timestamps. Our findings indicate that different storage media and interfaces exhibit different timestamp creation patterns. Such differences can be used to characterize file source media and increase the analysis capabilities of the incident response process.
With the fast development of autonomous driving and vehicular communication technologies, intelligent transportation systems based on VANETs (Vehicular Ad-Hoc Networks) have shown great promise. For instance, through V2V (Vehicle-to-Vehicle) and V2I (Vehicle-to-Infrastructure) communication, intelligent intersections allow more fine-grained control of vehicle crossings and significantly enhance traffic efficiency. However, the performance and safety of these VANET-based systems can be seriously impaired by communication delays and packet losses, which may be caused by network congestion or by malicious attacks that target communication timing behavior. In this paper, we quantitatively model and analyze some of the timing and security issues in transportation networks with VANET-based intelligent intersections. In particular, we demonstrate how communication delays may affect the performance and safety of a single intersection and of multiple interconnected intersections, and we present our delay-tolerant intersection management protocols. We also discuss the issues such protocols face when vehicles are non-cooperative and how these issues may be addressed with game theory.
We present a binary static analysis approach to detecting intelligent electronic device (IED) malware based on the time requirements of electrical substations. We explore graph theory techniques to model the timing performance of an IED executable, which is subsequently used as a metric for IED malware detection. More specifically, we reduce part of the IED malware detection problem to a classical problem of graph theory, namely finding single-source shortest paths on a weighted directed acyclic graph (DAG). Shortest paths represent execution flows that take the longest time to compute; their clock cycles are examined to determine whether they violate the real-time nature of substation monitoring and control, in which case IED malware is detected. We carried out this work with particular reference to implementations of protection and control algorithms that use the IEC 61850 standard for substation data representation and network communication. We tested our approach against IED exploits and malware, network scanning code, and numerous malware samples involved in recent ICS malware campaigns.
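A hedged sketch of the graph-theoretic core: single-source shortest paths on a weighted DAG via topological relaxation. Negating clock-cycle weights (our reading of why shortest paths capture the longest flows, not a detail stated in the abstract) makes the most negative distance correspond to the worst-case execution flow:

```python
# Sketch: single-source shortest paths on a weighted DAG via Kahn's
# topological sort plus edge relaxation. The control-flow graph and
# negated clock-cycle weights below are fabricated for illustration.
from collections import defaultdict

def dag_shortest_paths(edges, source):
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm yields a topological order of the DAG.
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Relax every edge once, in topological order.
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for u in order:
        for v, w in graph[u]:
            dist[v] = min(dist[v], dist[u] + w)
    return dist

# With negated cycle counts, the most negative distance marks the
# longest (slowest) execution flow through the executable.
edges = [("entry", "a", -3), ("entry", "b", -5),
         ("a", "exit", -2), ("b", "exit", -1)]
print(dag_shortest_paths(edges, "entry"))  # exit: -6, i.e. 6 cycles via b
```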
Time-based one-time password (TOTP) systems in use today require storing secrets on both the client and the server. As a result, an attack on the server can expose all second factors for all users in the system. We present T/Key, a time-based one-time password system that requires no secrets on the server. Our work modernizes the classic S/Key system and addresses the challenges in making such a system secure and practical. At the heart of our construction is a new lower bound on the hardness of inverting hash chains composed of independent random functions, which formalizes the security of this widely used primitive. Additionally, we develop a near-optimal algorithm for quickly generating the required elements in a hash chain with little memory on the client. We report on our implementation of T/Key as an Android application. T/Key can be used as a replacement for current TOTP systems, and it remains secure in the event of a server-side compromise. The cost, as with S/Key, is that one-time passwords are longer than the standard six digits used in TOTP.
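A minimal sketch of the hash-chain principle behind S/Key and T/Key (the chain length, seed, and the omission of per-time-slot domain separation are simplifications on our part, not the T/Key design):

```python
# Sketch of the hash-chain idea: the server stores only the chain head
# (no secret); each one-time password is a preimage one step back.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

N = 1000                       # chain length ~ number of one-time passwords
seed = b"client-secret-seed"   # stored only on the client (illustrative)

# Client precomputes the chain; server is provisioned with the final value.
chain = [seed]
for _ in range(N):
    chain.append(h(chain[-1]))
server_state = chain[-1]       # public: the *end* of the chain

# To authenticate, the client reveals the previous chain element; the
# server verifies one hash application and rolls its state back.
otp = chain[N - 1]
assert h(otp) == server_state
server_state = otp             # accept and update
```

A server compromise exposes only `server_state`, and inverting the hash to recover earlier chain elements is exactly the hardness the paper's lower bound formalizes.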
In the cloud computing era, many organizations outsource their computations to third-party cloud servers in order to avoid computational burdens. To protect service quality, the integrity of the computation results needs to be guaranteed. In this paper, we develop a game-theoretic framework that helps the outsourcer maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the sensitivity of the optimal setting to the parameters of the model.
The notion of commitment is widely studied as a high-level abstraction for modeling multiagent interaction. An important challenge is supporting flexible decentralized enactments of commitment specifications. In this paper, we combine recent advances in specifying commitments and information protocols. Specifically, we contribute Tosca, a technique for automatically synthesizing information protocols from commitment specifications. Our main result is that the synthesized protocols support commitment alignment, which is the idea that agents must make compatible inferences about their commitments despite decentralization.