Bibliography
Advances in nanotechnology, large-scale computing and communications infrastructure, coupled with recent progress in big data analytics, have enabled linking several billion devices to the Internet. These devices provide unprecedented automation, cognitive capabilities, and situational awareness. This new ecosystem, termed the Internet-of-Things (IoT), also provides many entry points into the network through the gadgets that connect to the Internet, making security of IoT systems a complex problem. In this position paper, we argue that in order to build a safer IoT system, we need a radically new approach to security. We propose a new security framework that draws ideas from software-defined networking (SDN) and data analytics techniques; this framework provides dynamic policy enforcement at every layer of the protocol stack and can adapt quickly to the diverse set of industry use cases that IoT deployments cater to. Our proposal makes no assumptions about the capabilities of the devices: it can work with already deployed as well as new types of devices, while also conforming to a service-centric architecture. Even though our focus is on industrial IoT systems, the ideas presented here are applicable to IoT used in a wide array of applications. The goal of this position paper is to initiate a dialogue among standardization bodies and security experts to help raise awareness about network-centric approaches to IoT security.
Critical resource sharing among multiple entities in a processing system is inevitable, which in turn calls for appropriate authentication and access control mechanisms. Generally speaking, these mechanisms are implemented via trusted software "policy checkers" that apply certain high-level, application-specific "rules" to enforce a policy. Whether implemented as operating system modules or embedded ad hoc inside the application, these policy checkers expand the attack surface beyond the application logic. In order to protect application software from an adversary, modern secure processing platforms, such as Intel's Software Guard Extensions (SGX), employ principled hardware isolation to offer secure software containers, or enclaves, that execute trusted sensitive code with integrity and privacy guarantees against a privileged software adversary. We extend this model further and propose using these hardware isolation mechanisms to shield the authentication and access control logic essential to policy checker software. While relying on the fundamental features of modern secure processors, our framework introduces practical software design guidelines that enable a guarded environment to execute sensitive policy checking code, hence enforcing application control-flow integrity, and afford the application designer the flexibility to construct appropriate high-level policies to customize policy checker software.
We consider the urgent task of organizing confidential computations in critical information systems on the basis of domestic TPM (Trusted Platform Module) technologies. We propose recommendations and architectural concepts for a special-purpose hardware TPM module that is built into a computing platform and realizes a so-called "root of trust". As a result, confidential computations can be organized on top of a domestic electronic component base.
The cloud computing paradigm continues to revolutionize the way business processes are conducted through the provision of massive resources, reliability across networks, and the ability to offer parallel processing. Meanwhile, miniaturization, proliferation, and nanotechnology within devices have enabled the digitization of almost every object, which has given rise to a new technological marvel dubbed the Internet of Things (IoT). IoT enables self-configurable/smart devices to connect intelligently through Radio Frequency Identification (RFID), Wi-Fi, LAN, GPRS, and other methods, further enabling timely processing of information. Based on these developments, the integration of cloud and IoT infrastructures has led to an explosion in the amount of data being exchanged between devices, which has in turn enabled malicious actors to use this as a platform to launch various cybercrime activities. Consequently, digital forensics provides a significant approach that can be used to provide an effective post-event response mechanism to these malicious attacks in cloud-based IoT infrastructures. The problem being addressed is that, at the time of writing this paper, there still exist no accepted standards or frameworks for conducting digital forensic investigations on cloud-based IoT infrastructures. As a result, the authors have proposed a cloud-centric framework that is able to isolate big data as forensic evidence from IoT (CFIBD-IoT) infrastructures for proper analysis and examination. It is the authors' opinion that if the CFIBD-IoT framework is implemented fully it will support cloud-based IoT tool creation as well as future investigative techniques in the cloud with a degree of certainty.
Technological advancement has created the need for Internet connectivity everywhere, and the power industry is no exception to the advances that make everything smarter. The smart grid is the advanced version of the traditional grid, making the system more efficient and self-healing. A synchrophasor is a device used in smart grids to measure electric waveforms, voltages, and currents. The phasor measurement unit produces an immense volume of current and voltage data that is used to monitor and control the performance of the grid. These data are huge in size and vulnerable to attacks. Intrusion detection is a common technique for finding intrusions in a system. In this paper, a big data framework is designed using various machine learning techniques, and intrusions are detected based on classifications applied to the synchrophasor dataset. Classifiers such as deep neural networks, support vector machines, random forests, decision trees, and naive Bayes are applied to the synchrophasor dataset, and the results are compared using the metrics of accuracy, recall, false positive rate, specificity, and prediction time. Feature selection and dimensionality reduction algorithms are used to reduce the prediction time taken by the proposed approach. This paper uses Apache Spark as the platform, which is suitable for implementing an intrusion detection system in smart grids using big data analytics.
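As an illustration of the comparison workflow this abstract describes, the sketch below trains several of the named classifiers and reports per-model metrics. It uses scikit-learn in place of Apache Spark for brevity, and the dataset file name and label column are hypothetical placeholders.

```python
# Minimal sketch: compare classifiers on a synchrophasor-style dataset.
# "synchrophasor.csv" and the "label" column are hypothetical placeholders.
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("synchrophasor.csv")            # hypothetical dataset
X, y = df.drop(columns=["label"]), df["label"]   # label: normal vs. intrusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "svm": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "decision_tree": DecisionTreeClassifier(),
    "naive_bayes": GaussianNB(),
    "neural_net": MLPClassifier(max_iter=500),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    start = time.time()                          # measure prediction time
    pred = model.predict(X_te)
    print(name,
          "acc=%.3f" % accuracy_score(y_te, pred),
          "recall=%.3f" % recall_score(y_te, pred, average="macro"),
          "predict_time=%.3fs" % (time.time() - start))
```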
Edge computing is a scheme to improve the performance, latency, and security of IoT applications. However, edge deployment of an application also comes with additional management complexity, an increased attack surface for security vulnerabilities, and a potentially more expensive solution. As a result, the conditions under which an edge deployment of an IoT application delivers a better solution are not always obvious. Metrics that can predict whether or not an IoT application is suitable for edge deployment would provide useful insights to address this question. In this paper, we examine the key performance indicators for IoT applications, namely the responsiveness, scalability, and cost models for different types of IoT applications. Our analysis identifies that the network centrality of an IoT application is a key characteristic which determines whether or not the application is a good candidate for edge deployment. We discuss the different measures of network centrality that can be used to characterize applications, and the relative performance of edge deployment compared to centralized deployment for various IoT applications.
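As a rough sketch of how a centrality measure could be computed over an application's communication graph, the example below uses networkx; the graph, edge weights, and decision threshold are hypothetical illustrations, not the paper's actual model.

```python
# Minimal sketch: score nodes of an IoT communication graph by centrality.
# Nodes are devices/services; edge weights model traffic volume (hypothetical).
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("sensor_a", "gateway", 5.0),
    ("sensor_b", "gateway", 3.0),
    ("gateway", "cloud_app", 1.0),
])

# Betweenness centrality: a node that sits on many communication paths
# close to the data sources suggests edge deployment may pay off.
centrality = nx.betweenness_centrality(g, weight="weight")
EDGE_THRESHOLD = 0.5  # hypothetical cutoff
for node, c in sorted(centrality.items()):
    verdict = "edge candidate" if c >= EDGE_THRESHOLD else "centralized ok"
    print(f"{node}: centrality={c:.2f} -> {verdict}")
```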
Knowledge work, such as summarizing related research in preparation for writing, typically requires the extraction of useful information from scientific literature. Nowadays the primary source of information for researchers is electronic documents available on the Web, accessible through general and academic search engines such as Google Scholar or IEEE Xplore. Yet the vast number of resources makes retrieving only the most relevant results a difficult task. As a consequence, researchers are often confronted with loads of low-quality or irrelevant content. To address this issue we introduce a novel system that combines a rich, interactive Web-based user interface with different visualization approaches. The system enables researchers to identify key phrases matching current information needs and to spot potentially relevant literature within hierarchical document collections. The chosen context was the collection and summarization of related work in preparation for scientific writing, thus the system supports features such as bibliography and citation management, document metadata extraction, and a text editor. This paper introduces the design rationale and components of PaperViz. Moreover, we report the insights gathered in a formative design study addressing usability.
Metaheuristic search is a more advanced approach than traditional heuristic search. Selecting one option among different alternatives is not hard; what is hard is guaranteeing that the choice is cost-effective. Metaheuristic search techniques solve this hard problem with the help of a fitness function. The fitness function is a crucial metric, or measure, which helps in deciding which solution is optimal to choose from the available set of test sets. This paper discusses hill climbing, simulated annealing, tabu search, genetic algorithms, and particle swarm optimization in detail, explaining each with the help of its algorithm. If metaheuristic search techniques were combined with security testing methods, the result would be a search technique that is both more effective and more secure. This paper primarily focuses on metaheuristic search techniques.
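As a minimal illustration of how a fitness function drives one of the listed techniques, the hill-climbing sketch below selects a test set under a hypothetical coverage-versus-cost fitness model; the fitness weights and test pool are invented for the example.

```python
# Minimal hill-climbing sketch: a fitness function guides test-set selection.
import random

def fitness(test_set):
    """Hypothetical measure: reward distinct tests, penalize total cost."""
    coverage = len(set(test_set))   # distinct tests stand in for coverage
    cost = sum(test_set)            # test ids double as execution costs
    return coverage * 10 - cost

def neighbor(test_set, pool):
    """Produce a nearby candidate by swapping one test at random."""
    candidate = list(test_set)
    candidate[random.randrange(len(candidate))] = random.choice(pool)
    return candidate

pool = list(range(1, 50))           # available tests (hypothetical)
current = random.sample(pool, 5)
for _ in range(1000):
    cand = neighbor(current, pool)
    if fitness(cand) > fitness(current):   # greedy uphill move
        current = cand
print("selected tests:", sorted(current), "fitness:", fitness(current))
```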
With the advances in the areas of mobile computing and wireless communications, V2X systems have become a promising technology enabling deployment of applications providing road safety, traffic efficiency and infotainment. Due to their increasing popularity, V2X networks have become a major target for attackers, making them vulnerable to security threats and network conditions, and thus affecting the safety of passengers, vehicles and roads. Existing research in V2X does not effectively address the safety, security and performance limitation threats to connected vehicles, as a result of considering these aspects separately instead of jointly. In this work, we focus on the analysis of the tradeoffs between safety, security and performance of V2X systems and propose a dynamic adaptability approach considering all three aspects jointly based on application needs and context to achieve maximum safety on the roads using an Internet of vehicles. Experiments with a simple V2V highway scenario demonstrate that an adaptive safety/security approach is essential and V2X systems have great potential for providing low reaction times.
Verifying that hardware design implementations adhere to specifications is a time-intensive and sometimes intractable problem due to the massive size of the system's state space. Formal methods can be used to prove certain tractable specification properties; however, they are expensive and often require subject matter experts to develop and solve. Nonetheless, hardware verification is a critical process to ensure that security and safety properties are met, and it encapsulates problems associated with trust and reliability. For complex designs where coverage of the entire state space is unattainable, prioritizing the regions most vulnerable to security or reliability threats allows efficient allocation of valuable verification resources. Stackelberg security games model interactions between a defender, whose goal is to assign resources to protect a set of targets, and an attacker, who aims to inflict maximum damage on the targets after first observing the defender's strategy. In equilibrium, the defender has an optimal security deployment strategy, given the attacker's best response. We apply this Stackelberg security framework to synthesized hardware implementations, using the design's network structure and logic to inform defender valuations and verification costs. The defender's strategy in equilibrium is thus interpreted as a prioritization of the allocation of verification resources in the presence of an adversary. We demonstrate this technique on several open-source synthesized hardware designs.
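A toy sketch of the prioritization idea: the defender picks which design regions to verify under a budget, anticipating an attacker who then strikes the most valuable unverified region. The region names, values, and costs below are hypothetical stand-ins for the design-derived valuations the paper describes.

```python
# Minimal Stackelberg-style sketch: choose a verification set that minimizes
# the damage an attacker can inflict after observing the defender's choice.
from itertools import combinations

regions = {"alu": (9, 3), "fsm": (7, 2), "bus": (5, 1), "io": (2, 1)}  # (value, cost)
BUDGET = 4  # hypothetical verification budget

def attacker_damage(covered):
    # Attacker best response: hit the highest-value unverified region.
    uncovered = [v for r, (v, _) in regions.items() if r not in covered]
    return max(uncovered, default=0)

feasible = (set(c)
            for k in range(len(regions) + 1)
            for c in combinations(regions, k)
            if sum(regions[r][1] for r in c) <= BUDGET)
best = min(feasible, key=attacker_damage)   # defender's equilibrium choice
print("verify first:", sorted(best), "residual damage:", attacker_damage(best))
```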
Cyber-Physical Systems (CPS) such as Unmanned Aerial Systems (UAS) sense and actuate their environment in pursuit of a mission. The attack surface of these remotely located, sensing and communicating devices is both large and exposed to adversarial actors, making mission assurance a challenging problem. While best-practice security policies should be followed, they are rarely enough to guarantee mission success, as not all components in the system may be trusted and the properties of the environment (e.g., the RF environment) may be under the control of the attacker. CPS must thus be built with a high degree of resilience to mitigate threats that security cannot alleviate. In this paper, we describe the Agile and Resilient Embedded Systems (ARES) methodology and metric set. The ARES methodology pursues cyber security and resilience (CSR) as high-level system properties to be developed in the context of the mission. An analytic process guides system developers in defining mission objectives, examining principal issues, applying CSR technologies, and understanding their interactions.
Efficient management and control of modern and next-gen networks is of paramount importance, as networks have to maintain highly reliable service quality whilst supporting rapid growth in traffic demand and new application services. Rapid mitigation of network service degradations is a key factor in delivering high service quality. Automation is vital to achieving rapid mitigation of issues, particularly at the network edge where the scale and diversity are greatest. This automation involves the rapid detection, localization and (where possible) repair of service-impacting faults and performance impairments. However, the most significant challenge here is knowing what events to detect, how to correlate events to localize an issue, and what mitigation actions should be performed in response to the identified issues. These are defined as policies in systems such as ECOMP. In this paper, we present AESOP, a data-driven intelligent system to facilitate automatic learning of policies and rules for triggering remedial actions in networks. AESOP combines best operational practices (domain knowledge) with a variety of measurement data to learn and validate operational policies to mitigate service issues in networks. AESOP's design addresses the following key challenges: (i) learning from high-dimensional noisy data, (ii) capturing multiple fault models, (iii) modeling the high service-cost of false positives, and (iv) accounting for the evolving network infrastructure. We present the design of our system and show results from our ongoing experiments demonstrating the effectiveness of our policy learning framework.
In 2007, Shacham published a seminal paper on Return-Oriented Programming (ROP), the first systematic formulation of code reuse. The paper has been highly influential, profoundly shaping the way we still think about code reuse today: an attacker analyzes the "geometry" of victim binary code to locate gadgets and chains these to craft an exploit. This model has spurred much research, with a rapid progression of increasingly sophisticated code reuse attacks and defenses over time. After ten years, the common perception is that state-of-the-art code reuse defenses are effective in significantly raising the bar and making attacks exceedingly hard. In this paper, we challenge this perception and show that an attacker going beyond "geometry" (static analysis) and considering the "dynamics" (dynamic analysis) of a victim program can easily find function call gadgets even in the presence of state-of-the-art code-reuse defenses. To support our claims, we present Newton, a run-time gadget-discovery framework based on constraint-driven dynamic taint analysis. Newton can model a broad range of defenses by mapping their properties into simple, stackable, reusable constraints, and automatically generate gadgets that comply with these constraints. Using Newton, we systematically map and compare state-of-the-art defenses, demonstrating that even simple interactions with popular server programs are adequate for finding gadgets for all state-of-the-art code-reuse defenses. We conclude with an nginx case study, which shows that a Newton-enabled attacker can craft attacks which comply with the restrictions of advanced defenses, such as CPI and context-sensitive CFI.
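A minimal sketch of the constraint-stacking idea attributed to Newton: candidate call sites discovered by taint analysis are filtered through simple, composable predicates that model defense restrictions. The call-site records and the two constraints shown are hypothetical illustrations, not Newton's actual implementation.

```python
# Minimal sketch: model code-reuse defenses as stackable constraints over
# taint-analysis results. All records and predicates are hypothetical.
candidates = [
    {"site": "0x401a2c", "target_tainted": True, "args_tainted": True,
     "target_in_cfi_set": True},
    {"site": "0x4033f0", "target_tainted": True, "args_tainted": False,
     "target_in_cfi_set": False},
]

# Each defense becomes one reusable predicate over call-site properties.
cfi_constraint = lambda c: c["target_in_cfi_set"]          # CFI-style defense
control_constraint = lambda c: c["target_tainted"] and c["args_tainted"]

def usable_gadgets(candidates, constraints):
    """Keep only call sites that satisfy every stacked constraint."""
    return [c for c in candidates if all(check(c) for check in constraints)]

print(usable_gadgets(candidates, [cfi_constraint, control_constraint]))
```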
Today's mobile applications increasingly rely on communication with a remote backend service to perform many critical functions, including handling user-specific information. This implies that some form of authentication should be used to associate a user with their actions and data. Since schemes involving tedious account creation procedures can represent "friction" for users, many applications are moving toward alternative solutions, some of which, while increasing usability, sacrifice security. This paper focuses on a new trend of authentication schemes based on what we call "device-public" information, which consists of properties and data that any application running on a device can obtain. While these schemes are convenient to users, since they require little to no interaction, they are vulnerable by design, since all the needed information to authenticate a user is available to any app installed on the device. An attacker with a malicious app on a user's device could easily hijack the user's account, steal private information, send (and receive) messages on behalf of the user, or steal valuable virtual goods. To demonstrate how easily these vulnerabilities can be weaponized, we developed a generic exploitation technique that first mines all relevant data from a victim's phone, and then transfers and injects them into an attacker's phone to fool apps into granting access to the victim's account. Moreover, we developed a dynamic analysis detection system to automatically highlight problematic apps. Using our tool, we analyzed 1,000 popular applications and found that 41 of them, including the popular messaging apps WhatsApp and Viber, were vulnerable. Finally, our work proposes solutions to this issue, based on modifications to the Android API.
Adversaries with physical access to a target platform can perform cold boot or DMA attacks to extract sensitive data from RAM. To prevent such attacks, hardware vendors have announced corresponding processor extensions. AMD's Secure Memory Encryption (SME) extension will provide means to encrypt the RAM to protect security-relevant assets that reside there. The encryption will protect the user's content against passive eavesdropping. However, the level of protection it provides in scenarios involving an adversary who can not only read from RAM but also modify its content is less clear. This paper addresses the open research question of whether encryption alone is a dependable protection mechanism in practice when considering an active adversary. To this end, we first build a software-based memory encryption solution on a desktop system which mimics AMD's SME. Subsequently, we demonstrate a proof-of-concept fault attack on this system, by which we are able to extract the private RSA key of a GnuPG user. Our work suggests that transparent memory encryption is not enough to prevent active attacks.
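The paper's exact exploit is not reproduced here; the sketch below shows the classic Bellcore RSA-CRT fault observation, a textbook illustration of why a single induced fault in a signature computation can leak a private key factor. The parameters are toy values, and this is offered only as background, not as the paper's method (requires Python 3.8+ for three-argument pow with a negative exponent).

```python
# Minimal sketch of the Bellcore RSA-CRT fault observation (toy parameters).
from math import gcd

p, q, e = 1000003, 1000033, 65537    # toy primes and public exponent
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

m = 123456789
s = pow(m, d, n)                      # correct signature

def crt2(a_q, a_p):
    # Combine x = a_q (mod q) and x = a_p (mod p) into x (mod n).
    return (a_q + q * ((a_p - a_q) * pow(q, -1, p) % p)) % n

# Model a fault that corrupts only the "mod p" half of the CRT computation.
s_faulty = crt2(s % q, (s + 1) % p)

# s_faulty^e = m still holds (mod q) but not (mod p), so the gcd leaks q.
factor = gcd((pow(s_faulty, e, n) - m) % n, n)
print("recovered factor:", factor, "correct:", factor in (p, q))
```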
Highly privileged software, such as firmware, is an attractive target for attackers. Thus, BIOS vendors use cryptographic signatures to ensure firmware integrity at boot time. Nevertheless, such protection does not prevent an attacker from exploiting vulnerabilities at runtime. To detect such attacks, we propose an event-based behavior monitoring approach that relies on an isolated co-processor. We instrument the code executed on the main CPU to send information about its behavior to the monitor. This information helps to resolve the semantic gap issue. Our approach does not depend on a specific model of the behavior nor on a specific target. We apply this approach to detect attacks targeting the System Management Mode (SMM), a highly privileged x86 execution mode that executes firmware code at runtime. We model the behavior of SMM using invariants of its control flow and of relevant CPU registers (CR3 and SMBASE). We instrument two open-source firmware implementations: EDKII and coreboot. We evaluate the ability of our approach to detect state-of-the-art attacks and its runtime execution overhead by simulating an x86 system coupled with an ARM Cortex A5 co-processor. The results show that our solution detects state-of-the-art intrusions without any false positives, while remaining acceptable in terms of performance overhead in the context of the SMM (i.e., less than the 150 µs threshold defined by Intel).
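A minimal sketch of the invariant-checking idea on the monitor side: the co-processor validates messages from the instrumented firmware against a control-flow whitelist and expected register values. The edges, sites, and register values below are hypothetical placeholders, not the paper's actual SMM model.

```python
# Minimal sketch: monitor-side invariant checks over behavior messages.
# All whitelisted edges and register values are hypothetical.
ALLOWED_EDGES = {("smm_entry", "dispatch"), ("dispatch", "var_handler"),
                 ("var_handler", "smm_exit")}
EXPECTED = {"CR3": 0x1AB000, "SMBASE": 0x30000}

def check(msg, prev_site):
    """Return 'ok' or an alert for one instrumentation message."""
    if (prev_site, msg["site"]) not in ALLOWED_EDGES:
        return "ALERT: control-flow invariant violated"
    for reg, value in EXPECTED.items():
        if msg["regs"].get(reg) != value:
            return f"ALERT: {reg} deviates from expected value"
    return "ok"

trace = [  # hypothetical message stream from the instrumented firmware
    {"site": "dispatch", "regs": {"CR3": 0x1AB000, "SMBASE": 0x30000}},
    {"site": "var_handler", "regs": {"CR3": 0xDEAD00, "SMBASE": 0x30000}},
]
prev = "smm_entry"
for msg in trace:
    print(check(msg, prev))
    prev = msg["site"]
```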
This paper presents a wireless intrusion prevention tool for distributed denial of service (DDoS) attacks. This tool, called Wireless Distributed IPS (WIDIP), uses a distinct collection of data to identify attackers from inside a private network. WIDIP blocks attackers and also propagates its information to the other wireless routers that run the IPS. This communication behavior provides higher fault tolerance and stops attacks from different network endpoints. WIDIP also blocks network attackers at their first hop and thus reduces malicious traffic near its source. Comparative tests of WIDIP against two other tools demonstrated that our tool reduces the delay of target response after attacks on application servers by 11%. In addition to reducing response time, WIDIP comparatively reduces the number of control messages on the network when compared to IREMAC.
In this paper, we propose and implement CommunityGuard, a system which comprises intelligent Guardian Nodes that learn about and prevent malicious traffic coming into and going out of a user's personal area network. In the CommunityGuard model, each Guardian Node tells the others about emerging threats, blocking these threats for all users as soon as they begin. Furthermore, Guardian Nodes regularly update themselves with the latest threat models to provide effective security against new and emerging threats. Our evaluation shows that CommunityGuard provides immunity against a range of incoming and outgoing attacks at all points of time with an acceptable impact on network performance. Oftentimes, the sources of DDoS attack traffic are personal devices that have been compromised without the owner's knowledge. We have designed CommunityGuard to prevent such outgoing DDoS traffic on a wide scale, which can hamstring the otherwise very frightening prospect of crippling DDoS attacks.
In this paper, we propose a hardware-based defense system in a Software-Defined Networking architecture to protect against HTTP GET flooding attacks, one of the most dangerous Distributed Denial of Service (DDoS) attacks in recent years. Our defense system utilizes a per-URL counting mechanism and has been implemented on an FPGA as an extension of a NetFPGA-based OpenFlow switch.
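A minimal software sketch of what a per-URL counting mechanism can look like; the paper's version runs in FPGA hardware, and the window size and threshold here are hypothetical.

```python
# Minimal sketch: per-URL GET counting with a fixed time window.
# WINDOW_SECONDS and THRESHOLD are hypothetical tuning parameters.
import time
from collections import Counter

WINDOW_SECONDS = 1.0
THRESHOLD = 100          # max GETs per URL per window before flagging

counts, window_start = Counter(), time.time()

def on_http_get(src_ip, url):
    """Decide per request whether to forward or drop (suspected flood)."""
    global counts, window_start
    now = time.time()
    if now - window_start > WINDOW_SECONDS:   # rotate the counting window
        counts, window_start = Counter(), now
    counts[url] += 1
    if counts[url] > THRESHOLD:               # suspected GET flood on this URL
        return ("drop", src_ip, url)
    return ("forward", src_ip, url)

print(on_http_get("10.0.0.5", "/index.html"))
```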
Outsourcing services to third-party providers comes with a high security cost: the providers must be fully trusted. Using trusted hardware can help, but current trusted execution environments do not adequately support services that process very large datasets. We present LASTGT, a system that bridges this gap by supporting the execution of self-contained services over a large state, with a small and generic trusted computing base (TCB). LASTGT uses widely deployed trusted hardware to guarantee integrity and verifiability of the execution on a remote platform, and it securely supplies data to the service through simple techniques based on virtual memory. As a result, LASTGT is general and applicable to many scenarios such as computational genomics and databases, as we show in our experimental evaluation based on an implementation of LASTGT on a secure hypervisor. We also describe a possible implementation on Intel SGX.
With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to efficiently defend their resources. To achieve this, organizations must develop a culture of sharing their threat information with others in order to effectively assess the associated risks. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, making the decision to share information is a challenging task that requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system through setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system as a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component in the sharing model. Using best response analysis, the simulation results demonstrate the efficiency of our proposed framework.
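A toy sketch of a sharing-game payoff and best response of the kind this abstract describes; the payoff shape, privacy-cost term, and all parameter values are hypothetical illustrations, not the paper's exact model.

```python
# Minimal sketch: one organization's payoff in a sharing game, and its
# best response to the others' sharing level. All parameters hypothetical.
def org_payoff(share, others_share, benefit=10.0, privacy_cost=4.0,
               participation_fee=1.0, incentive=2.0):
    """Payoff for choosing a sharing level in [0, 1]."""
    gained = benefit * others_share          # value of what others share
    earned = incentive * share               # CYBEX sharing incentive
    lost = privacy_cost * share ** 2         # convex privacy exposure
    return gained + earned - lost - participation_fee

def best_response(others_share):
    # Grid search over sharing levels; the quadratic model also has the
    # closed form share* = incentive / (2 * privacy_cost).
    levels = [i / 100 for i in range(101)]
    return max(levels, key=lambda s: org_payoff(s, others_share))

print("best sharing level:", best_response(others_share=0.5))  # -> 0.25
```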
Safety-critical system engineering and traditional safety analyses have for decades focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique, Systematic Analysis of Faults and Errors (SAFE), and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in, and adaptable to, different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.