Biblio
In this paper, an innovative approach to keyboard user monitoring (authentication), using keystroke dynamics and founded on the concept of time series analysis, is presented. The work is motivated by the need for robust authentication mechanisms in the context of on-line assessments such as those featured in many online learning platforms. Four analysis mechanisms are considered: analysis of keystroke time series in their raw form (without any translation), analysis after translating the time series into a more compact form using either the Discrete Fourier Transform or the Discrete Wavelet Transform, and a "benchmark" feature vector representation of the form typically used in previous related work. All four mechanisms are fully described and evaluated. A best authentication accuracy of 99% was obtained using the wavelet transform.
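To make the transform-based representations concrete, the following is a minimal sketch (not the authors' pipeline) of compacting a keystroke timing series with the DFT and a single-level Haar wavelet transform; the hold-time values and the distance-based comparison are illustrative assumptions.

```python
import numpy as np

def dft_features(hold_times, k=16):
    """Compact a keystroke timing series by keeping the first k DFT magnitudes."""
    spectrum = np.fft.rfft(np.asarray(hold_times, dtype=float))
    return np.abs(spectrum)[:k]

def haar_dwt_features(hold_times):
    """Single-level Haar wavelet transform: approximation and detail coefficients."""
    x = np.asarray(hold_times, dtype=float)
    if len(x) % 2:                      # pad to an even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# Example: compare a probe sample against an enrolled template by distance.
template = haar_dwt_features([112, 98, 105, 130, 121, 99, 110, 125])[0]
probe    = haar_dwt_features([115, 101, 103, 128, 119, 102, 108, 127])[0]
print(np.linalg.norm(template - probe))   # small distance -> likely the same typist
```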
SEAndroid is a mandatory access control (MAC) framework that can confine faulty applications on Android. Nevertheless, the effectiveness of SEAndroid enforcement depends on the employed policy. The growing complexity of Android makes it difficult for policy engineers to have complete domain knowledge of every system functionality. As a result, policy engineers sometimes craft over-permissive and ineffective policy rules, which unfortunately increase the attack surface of the Android system and have enabled multiple real-world privilege escalation attacks. We propose SPOKE, an SEAndroid Policy Knowledge Engine, that systematically extracts domain knowledge from rich-semantic functional tests and further uses the knowledge to characterize the attack surface of SEAndroid policy rules. Our attack surface analysis proceeds in two steps: 1) it reveals policy rules that cannot be justified by the collected domain knowledge; 2) it identifies potentially over-permissive access patterns allowed by those unjustified rules as the attack surface. We evaluate SPOKE using 665 functional tests targeting 28 different categories of functionalities developed by the Samsung Android Team. SPOKE successfully collected 12,491 access patterns for the 28 categories as domain knowledge, and used this knowledge to reveal 320 unjustified policy rules and 210 over-permissive access patterns defined by those rules, including one related to the notorious libstagefright vulnerability. These findings have been confirmed by policy engineers.
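The two-step analysis can be pictured as set comparisons between test-derived access patterns and rule-allowed patterns. The sketch below is only an illustration of that idea with made-up tuples and rule strings; it is not SPOKE's actual representation or tooling.

```python
# Hypothetical access patterns: (source_domain, target_type, class, permission).
domain_knowledge = {            # observed while running the functional tests
    ("mediaserver", "media_data_file", "file", "read"),
    ("mediaserver", "media_data_file", "file", "write"),
}

policy_rules = {                # patterns each policy rule would allow
    "allow mediaserver media_data_file:file { read write }": {
        ("mediaserver", "media_data_file", "file", "read"),
        ("mediaserver", "media_data_file", "file", "write"),
    },
    "allow mediaserver system_data_file:file { read write }": {
        ("mediaserver", "system_data_file", "file", "read"),
        ("mediaserver", "system_data_file", "file", "write"),
    },
}

# Step 1: rules with no access pattern justified by the collected knowledge.
unjustified = {rule: allowed for rule, allowed in policy_rules.items()
               if not (allowed & domain_knowledge)}

# Step 2: the patterns those rules still allow form the attack surface.
attack_surface = set().union(*unjustified.values()) if unjustified else set()

for rule in unjustified:
    print("unjustified rule:", rule)
print("over-permissive patterns:", attack_surface)
```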
Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed in an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improve, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the "wear and tear" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks, to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.
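The decision-tree style of classification described above can be sketched as follows. This is a toy illustration with hypothetical wear-and-tear features (browser history size, installed apps, DNS cache entries, event-log records) and fabricated numbers, not the paper's dataset or feature set.

```python
# Minimal sketch using scikit-learn; feature names and values are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Rows: [history_entries, installed_apps, dns_cache_entries, event_log_records]
real_devices = [[4200, 87, 310, 52000], [9800, 120, 540, 91000], [2500, 64, 190, 33000]]
sandboxes    = [[12, 25, 8, 400], [0, 30, 3, 250], [40, 28, 15, 900]]

X = real_devices + sandboxes
y = [0] * len(real_devices) + [1] * len(sandboxes)   # 1 = artificial environment

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

# At run time, evasive malware would extract the same features and bail out if flagged.
suspect = [[30, 26, 10, 600]]
print("looks like a sandbox" if clf.predict(suspect)[0] else "looks like a real device")
```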
Web-based applications are becoming increasingly complex and technically sophisticated. The very nature of their feature-rich design and their capability to collate, process, and disseminate information over the Internet or from within an intranet makes them a popular target for attack. According to the Open Web Application Security Project (OWASP) Top Ten for 2017, SQL injection remains at the top of the list of online attacks. This can be attributed primarily to a lack of awareness of software security. Developing effective SQL injection detection approaches has been a challenge in spite of extensive research in this area. In this paper, we propose a signature-based SQL injection attack detection framework that integrates a fingerprinting method and pattern matching to distinguish genuine SQL queries from malicious queries. Our framework monitors SQL queries to the database and compares them against a dataset of signatures from known SQL injection attacks. If the fingerprinting method cannot determine the legitimacy of a query on its own, the Aho-Corasick algorithm is invoked to ascertain whether attack signatures appear in the query. Initial experimental results indicate that the approach can identify a wide variety of SQL injection attacks with negligible impact on performance.
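To illustrate the pattern-matching stage, here is a minimal from-scratch Aho-Corasick sketch that scans a query for multiple attack signatures in one pass. The signature strings are illustrative only, and the fingerprinting first pass described above is omitted; this is not the framework's actual implementation.

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick automaton: goto table, failure links, outputs."""
    goto, fail, out = [{}], [0], [[]]
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append([])
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].append(pat)
    queue = deque(goto[0].values())              # depth-1 nodes fail to the root
    while queue:
        node = queue.popleft()
        for ch, child in goto[node].items():
            queue.append(child)
            f = fail[node]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[child] = goto[f].get(ch, 0)
            out[child] += out[fail[child]]
    return goto, fail, out

def find_signatures(query, automaton):
    """Return every known attack signature that occurs in the lower-cased query."""
    goto, fail, out = automaton
    node, hits = 0, []
    for ch in query.lower():
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        hits.extend(out[node])
    return hits

# Illustrative signatures only; a real deployment would load a curated dataset.
SIGNATURES = ["' or '1'='1", "union select", "; drop table", "sleep("]
AUTOMATON = build_automaton(SIGNATURES)

print(find_signatures("SELECT * FROM users WHERE name='' OR '1'='1'", AUTOMATON))
```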
A mobile ad hoc network (MANET) is a network of autonomous nodes that connect directly, without a top-down network architecture or central controller. The absence of base stations in a MANET forces nodes to rely on their neighbors to relay messages. Because of node mobility, the dynamic nature of a MANET makes the relationships between nodes inherently untrusted. A malicious node may mount a network-layer denial-of-service attack by discarding packets instead of forwarding them to their destination, which is known as a black hole attack. In this paper, a secure and trust-based approach built on the Ad hoc On-demand Distance Vector protocol (STAODV) is proposed to improve the security of the AODV routing protocol. The approach isolates malicious nodes that try to attack the network, based on their previously observed behavior. A trust level is attached to each participating node, and each incoming packet is examined to prevent black hole attacks.
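As a rough illustration of per-neighbor trust bookkeeping, the sketch below uses a simple counter-style trust score and a fixed blacklist threshold; the update rule and threshold values are assumptions for illustration, not the STAODV formulation.

```python
# Minimal sketch: trust per neighbor, decreased when overheard drops are observed.
class NeighborTrust:
    def __init__(self, initial_trust=0.5, threshold=0.2):
        self.trust = {}                 # neighbor id -> trust level in [0, 1]
        self.initial = initial_trust
        self.threshold = threshold

    def observe(self, neighbor, forwarded):
        """Update trust after overhearing whether a neighbor forwarded a packet."""
        t = self.trust.get(neighbor, self.initial)
        t = min(1.0, t + 0.05) if forwarded else max(0.0, t - 0.15)
        self.trust[neighbor] = t

    def is_blacklisted(self, neighbor):
        """Isolate neighbors whose trust has fallen below the threshold."""
        return self.trust.get(neighbor, self.initial) < self.threshold

table = NeighborTrust()
for _ in range(4):                       # node B repeatedly drops packets
    table.observe("B", forwarded=False)
print(table.is_blacklisted("B"))         # True -> route replies from B are ignored
```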
Science of security necessitates conducting methodologically defensible research and reporting such research comprehensively, to enable replication and allow future research to build upon the reported study. The comprehensiveness of reporting is as important as the research itself in building a science of security. Key principles of science, namely replication, meta-analysis, and theory building, are affected by the ability to understand the context and findings of published studies. The goal of this paper is to aid the security research community in understanding the state of scientific communication through the analysis of research published at top security conferences. To analyze scientific communication, we use the literature on scientific evaluation to develop a set of rubrics as a guide for checking the comprehensiveness of papers published in the IEEE Security and Privacy and ACM Computer and Communications Security conferences. Our review found that papers often omit certain types of information from their reports, including research objectives and threats to validity. Our hope is that this effort sheds some light on one of the essential steps toward the advancement of the science of security.
Transport Layer Security (TLS) has become the de facto standard for secure Internet communication. When used correctly, it provides secure data transfer, but used incorrectly, it can leave users vulnerable to attacks while giving them a false sense of security. Numerous efforts have studied the adoption of TLS (and its predecessor, SSL) and its use in the desktop ecosystem, as well as attacks and vulnerabilities in both desktop clients and servers. However, there is a dearth of knowledge about how TLS is used on mobile platforms. In this paper we use data collected by Lumen, a mobile measurement platform, to analyze how 7,258 Android apps use TLS in the wild. We analyze and fingerprint handshake messages to characterize the TLS APIs and libraries that apps use, and we also evaluate their weaknesses. We find that about 84% of apps use the default OS APIs for TLS. Many apps use third-party TLS libraries; in some cases they are forced to do so because of restricted Android capabilities. Our analysis shows that both approaches have limitations, and that improving TLS security on mobile is not straightforward. Apps that use their own TLS configurations may have vulnerabilities due to developer inexperience, but apps that use OS defaults are vulnerable to certain attacks if the OS is out of date, even if the apps themselves are up to date. We also study certificate verification and see low prevalence of security measures such as certificate pinning, even among high-risk apps such as those providing financial services, though we did observe major third-party tracking and advertisement services deploying certificate pinning.
Situational awareness during sophisticated cyber attacks on the power grid is critical for the system operator to perform suitable attack response and recovery functions to ensure grid reliability. The overall theme of this paper is to identify existing practical issues and challenges that utilities face while monitoring substations, and to suggest potential approaches to enhance the situational awareness of grid operators. In this paper, we provide a broad discussion of the various gaps that exist in the utility industry today in monitoring substations, and how those gaps could be addressed by identifying the various data sources and monitoring tools that improve situational awareness. The paper also briefly describes the advantages of contextualizing and correlating substation monitoring alerts using expert systems at the control center to obtain a holistic, systems-level view of potentially malicious cyber activity at the substations before it impacts grid operation.
Smart water networks can provide great benefits to our society in terms of efficiency and sustainability. However, smart capabilities and connectivity also expose these systems to a wide range of cyber attacks, which enable cyber-terrorists and hostile nation states to mount cyber-physical attacks. Cyber-physical attacks against critical infrastructure, such as water treatment and distribution systems, pose a serious threat to public safety and health. Consequently, it is imperative that we improve the resilience of smart water networks. We consider three approaches for improving resilience: redundancy, diversity, and hardening. Even though each one of these "canonical" approaches has been thoroughly studied in prior work, a unified theory on how to combine them in the most efficient way has not yet been established. In this paper, we address this problem by studying the synergy of these approaches in the context of protecting smart water networks from cyber-physical contamination attacks.
The proliferation of mobile devices, equipped with numerous sensors and Internet connectivity, has laid the foundation for the emergence of a diverse set of crowdsourcing services. By leveraging the multitude, geographical dispersion, and technical abilities of smartphones, these services tackle challenging tasks by harnessing the power of the crowd. One such service, crowd GPS, has gained traction in the industry and research community alike, materializing as a class of systems that track lost objects or individuals (e.g., children or elders). While these systems can have significant impact, they suffer from major privacy threats. In this paper, we highlight the inherent risks to users from the centralized designs adopted by such services and demonstrate how adversaries can trivially misuse one of the most popular crowd GPS services to track their users. As an alternative, we present Techu, a privacy-preserving crowd GPS service for tracking Bluetooth tags. Our architecture follows a hybrid decentralized approach, where an untrusted server acts as a bulletin board that collects reports of tags observed by the crowd, while observers store the location information locally and only disclose it upon proof of ownership of the tag. Techu does not require user authentication and leverages cloud messaging queues for communication between users, allowing users to remain anonymous. Our security analysis highlights the privacy offered by Techu, and details how our design prevents adversaries from tracking or identifying users. Finally, our experimental evaluation demonstrates that Techu has negligible impact on power consumption, and achieves superior effectiveness to previously proposed systems while offering stronger privacy guarantees.
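The bulletin-board idea can be illustrated with a toy in-memory sketch. Ownership proof is modeled here as knowledge of a per-tag secret, and observers keep locations locally; the function names, data layout, and the direct owner-to-observer lookup are simplifying assumptions, not Techu's actual protocol or messaging layer.

```python
# In-memory sketch: the board learns only "this tag was seen by this observer";
# locations stay with observers and are released only to whoever holds the secret.
import hashlib, secrets

def tag_id(tag_secret):
    return hashlib.sha256(tag_secret).hexdigest()

bulletin_board = {}      # tag_id -> list of observer handles (no locations)
observer_store = {}      # observer handle -> {tag_id: last_seen_location}

def report_sighting(observer, tid, location):
    """Observer posts only a sighting record; the location stays on the device."""
    bulletin_board.setdefault(tid, []).append(observer)
    observer_store.setdefault(observer, {})[tid] = location

def claim_tag(tag_secret):
    """Owner presents the secret, then retrieves locations from the observers."""
    tid = tag_id(tag_secret)
    return [observer_store[obs][tid] for obs in bulletin_board.get(tid, [])]

secret = secrets.token_bytes(16)
report_sighting("observer-42", tag_id(secret), (40.7128, -74.0060))
print(claim_tag(secret))
```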
The applications of 3D virtual environments are taking giant leaps with more sophisticated 3D user interfaces and immersive technologies. Interactive 3D and virtual reality platforms present a great opportunity for data analytics and can represent large amounts of data to help humans in decision making and insight. For any of the above to be effective, it is essential to understand the characteristics of these interfaces in displaying different types of content. Text is an essential and widespread type of content, and legibility acts as an important criterion in determining the style, size, and quantity of text to be displayed. This study evaluates the maximum amount of text per visual angle, that is, the maximum density of text that will be legible in a virtual environment displayed on different platforms. We used Extensible 3D (X3D) to provide portable (cross-platform) stimuli. The results presented here are based on a user study conducted in DeepSix (a tiled LCD display with 5750×2400 resolution) and the Hypercube (an immersive CAVE-style active stereo projection system with three walls and a floor at 2560×2560 pixels per wall). We found that more legible text can be displayed on an immersive projection due to its larger field of regard; in the immersive case, stereo versus monoscopic rendering did not have a significant effect on legibility.
In our era, most communication between people takes the form of electronic messages, especially through smart mobile devices. As a result, the exchanged text suffers from poor punctuation, misspelled words, continuous chunks of words without spaces, tables, internet addresses, etc., which make traditional text analytics methods difficult or impossible to apply without serious effort to clean the dataset. The method proposed in this paper can work on massive, noisy, and scrambled texts with minimal preprocessing: special characters and spaces are removed to create a continuous string, and all repeated patterns are then detected very efficiently using the Longest Expected Repeated Pattern Reduced Suffix Array (LERP-RSA) data structure and a variant of the All Repeated Patterns Detection (ARPaD) algorithm. Meta-analyses of the results can further assist a digital forensics investigator in detecting important information in the analyzed text.
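As a simplified stand-in for the LERP-RSA/ARPaD machinery (which is far more scalable), the sketch below cleans the text into a continuous string, builds a plain suffix array, and reports substrings shared by adjacent suffixes as repeated patterns; the example message and the minimum pattern length are illustrative assumptions.

```python
# Toy repeated-pattern detection: clean text, naive suffix array, adjacent-suffix LCPs.
import re

def clean(text):
    """Strip everything but letters and digits, producing one continuous string."""
    return re.sub(r'[^a-z0-9]', '', text.lower())

def repeated_patterns(text, min_len=4):
    s = clean(text)
    suffixes = sorted(range(len(s)), key=lambda i: s[i:])   # O(n^2 log n) toy build
    patterns = set()
    for a, b in zip(suffixes, suffixes[1:]):
        lcp = 0
        while a + lcp < len(s) and b + lcp < len(s) and s[a + lcp] == s[b + lcp]:
            lcp += 1
        if lcp >= min_len:
            patterns.add(s[a:a + lcp])
    return patterns

msg = "meet me @ the docks 9pm... MEET me at the d0cks!!"
print(repeated_patterns(msg))   # e.g. contains 'meetme' and 'thed'
```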
In the age of IoT, as more and more devices connect to the internet through wireless networks, a better security infrastructure is required to protect these devices from massive attacks. SSIDs and passwords have long been used to authenticate and secure Wi-Fi networks, but the SSID and password combination is vulnerable to security exploits like phishing and brute-forcing. In this paper, a completely automated Wi-Fi authentication system is proposed that generates Time-based One-Time Passwords (TOTP) to secure Wi-Fi networks. The approach aims to black-box the process of connecting to a Wi-Fi network for the user and to generate periodic secure passwords for the network without human intervention.
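A minimal RFC 6238 TOTP generator, using only the Python standard library, illustrates the code-generation side; how the access point and clients share the secret and rotate the Wi-Fi passphrase is outside this snippet, and the secret shown is only an example value.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6, when=None):
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((when if when is not None else time.time()) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"        # example base32 secret, not a real credential
print(totp(SECRET))                # e.g. '492039' -> this interval's Wi-Fi password
```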
Submitted
The notion of commitment is widely studied as a high-level abstraction for modeling multiagent interaction. An important challenge is supporting flexible decentralized enactments of commitment specifications. In this paper, we combine recent advances on specifying commitments and information protocols. Specifically, we contribute Tosca, a technique for automatically synthesizing information protocols from commitment specifications. Our main result is that the synthesized protocols support commitment alignment, which is the idea that agents must make compatible inferences about their commitments despite decentralization.
Software-defined networking (SDN) overcomes many limitations of traditional networking architectures because of its programmable and flexible nature. Security applications, for instance, can dynamically reprogram a network to respond to ongoing threats in real time. However, the same flexibility also creates risk, since it can be used against the network. Current SDN architectures potentially allow adversaries to disrupt one or more SDN system components and to hide their actions in doing so. That makes assurance and reasoning about past network events more difficult, if not impossible. In this paper, we argue that an SDN architecture must incorporate various notions of accountability for achieving systemwide cyber resiliency goals. We analyze accountability based on a conceptual framework, and we identify how that analysis fits in with the SDN architecture's entities and processes. We further consider a case study in which accountability is necessary for SDN network applications, and we discuss the limits of current approaches.
The trend in computing is towards the use of FPGAs to improve performance at reduced cost. An indication of this is the adoption of FPGAs for data centre and server application acceleration by notable technology giants like Microsoft, Amazon, and Baidu. The continued protection of Intellectual Properties (IPs) on the FPGA has thus become both more important and more challenging. To facilitate IP security, FPGA vendors provide bitstream authentication and encryption. However, advancements in FPGA programming technology have enabled bitstream manipulation techniques such as partial bitstream relocation (PBR), which is promising for reducing bitstream storage costs and facilitating adaptability. Meanwhile, encrypted bitstreams are not amenable to PBR. In this paper, we present three methods for performing encrypted PBR with varying overheads in resources and time. These methods ensure that PBR can be applied to bitstreams without losing the protection of IPs.
To reshape energy systems towards renewable energy resources, decision makers need to decide today on how to make the transition. Energy scenarios are widely used to guide decision making in this context. While considerable effort has been put into developing energy scenarios, researchers have pointed out three requirements for energy scenarios that are not yet satisfactorily fulfilled: the development and evaluation of energy scenarios should (1) incorporate the concept of sustainability, (2) provide decision support in a transparent way, and (3) be replicable by other researchers. To meet these requirements, we combine different methodological approaches: story-and-simulation (SAS) scenarios, multi-criteria decision-making (MCDM), information modeling, and co-simulation. We show in this paper how the combination of these methods can lead to an integrated approach for sustainability evaluation of energy scenarios with automated information exchange. Our approach consists of a sustainability evaluation process (SEP) and an information model for modeling dependencies. The objectives are to guide decisions towards sustainable development of the energy sector and to make the scenario and decision support processes more transparent for both decision makers and researchers.
SDN networks rely mainly on a set of software-defined modules, running on generic hardware platforms and managed by a central SDN controller. The tight coupling and lack of isolation between the controller and the underlying host limit the controller's resilience against host-based attacks and failures. The controller is a single point of failure and a target for attackers. Linux containers are a successful thin-virtualization technique that enables encapsulated, host-isolated execution environments for running applications. In this paper we present PAFR, a controller sandboxing mechanism based on Linux containers. PAFR enables controller/host isolation, plug-and-play operation, failure- and attack-resilient execution, and fast recovery. PAFR employs and manages live remote checkpointing and migration between different hosts to evade failures and attacks. Experiments and simulations show that the frequent use of PAFR's live migration minimizes the chance of a successful attack or failure, with little to no impact on network performance.
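The checkpoint-and-migrate loop can be pictured with a small orchestration skeleton. Everything below is hypothetical scaffolding: checkpoint_container and restore_container stand in for container-runtime checkpoint/restore calls (e.g. CRIU-based), the host names and period are made up, and this is not PAFR's implementation.

```python
# Orchestration sketch only: the helpers are placeholders, not real runtime calls.
import itertools
import time

HOSTS = ["host-a", "host-b", "host-c"]
CHECKPOINT_PERIOD = 5          # seconds between live checkpoints (illustrative)

def checkpoint_container(host):            # placeholder: dump the controller's state
    print(f"checkpointing controller container on {host}")
    return {"host": host, "taken_at": time.time()}

def restore_container(snapshot, host):     # placeholder: restore from the last dump
    print(f"restoring controller on {host} from snapshot taken on {snapshot['host']}")

def run(cycles=3):
    targets = itertools.cycle(HOSTS)
    current = next(targets)
    for _ in range(cycles):
        snapshot = checkpoint_container(current)
        nxt = next(targets)                 # proactive move limits attacker dwell time;
        restore_container(snapshot, nxt)    # the same path handles suspected failures
        current = nxt
        time.sleep(CHECKPOINT_PERIOD)

run()
```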



