Biblio
In low-power wireless networking, new applications such as cooperative robots or industrial closed-loop control demand network-wide consensus at low latency and high reliability. Distributed consensus is a mature field of research in wired contexts, but it has received little attention in low-power wireless settings. In this paper, we present A2: Agreement in the Air, a system that brings distributed consensus to low-power multi-hop networks. A2 introduces Synchrotron, a synchronous-transmissions kernel that builds a robust mesh by exploiting the capture effect, frequency hopping with parallel channels, and link-layer security. A2 builds on top of this reliable base layer and enables the two- and three-phase commit protocols, as well as network services such as group membership, hopping-sequence distribution, and re-keying. We evaluate A2 on four public testbeds with different deployment densities and sizes. A2 requires only 475 ms to complete a two-phase commit over 180 nodes, with a resulting duty cycle of 0.5% for 1-minute intervals. We show that A2 achieves zero end-to-end losses over long experiments representing millions of data points. When adding controlled failures, we show that two-phase commit ensures transaction consistency in A2, while three-phase commit provides liveness at the expense of inconsistency under specific failure scenarios.
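For reference, the classic two-phase commit decision logic that A2 carries over the wireless mesh (a minimal sketch in Python; the participant interface prepare/finalize and the transport are assumptions, and A2's synchronous-transmission machinery is not modeled):

    # Minimal two-phase commit coordinator (illustrative; A2 runs this logic
    # over Synchrotron's synchronous transmissions, which is not modeled here).
    def two_phase_commit(participants, transaction):
        # Phase 1 (voting): every participant must vote yes to proceed.
        votes = [p.prepare(transaction) for p in participants]
        # Phase 2 (completion): commit only on unanimity, else abort everywhere.
        decision = "COMMIT" if all(votes) else "ABORT"
        for p in participants:
            p.finalize(decision)
        return decision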
With the advance of fifth generation (5G) networks, network density needs to grow significantly in order to meet the required capacity demands. A massive deployment of small cells may lead to a high cost for providing fiber connectivity to each node. Consequently, many small cells are expected to be connected through wireless links to the umbrella eNodeB, leading to a mesh backhaul topology. This backhaul solution will most probably be composed of high-capacity point-to-point links, typically operating in the millimeter wave (mmWave) frequency band due to its massive bandwidth availability. In this paper, we propose a mathematical model that jointly solves the user association and backhaul routing problem in the aforementioned context, aiming to maximize the energy efficiency of the network. Our study considers the energy consumption of both the access and backhaul links, while taking into account the capacity constraints of all the nodes as well as the fulfillment of the service-level agreements (SLAs). Due to the high complexity of the optimal solution, we also propose an energy-efficient heuristic algorithm (Joint), which solves the discussed joint problem while introducing low complexity in the system. We numerically evaluate the algorithm's performance by comparing it not only with the optimal solution but also with reference approaches under different traffic load scenarios and backhaul parameters. Our results demonstrate that Joint outperforms the state of the art, finding good solutions, close to optimal, in a short time.
Software Defined Radio (SDR) can move the complicated signal processing and handling procedures involved in communications from radio equipment into computer software. Consequently, SDR equipment could consist of only a few chips connected to an antenna. In this paper, we present an implemented SDR testbed consisting of four complete SDR nodes. Using the designed testbed, we have conducted two case studies. The first is designed to facilitate video transmission via adaptive LTE links. Our experimental results demonstrate that video transmission over adaptive LTE links can reduce the bandwidth used for data transmission. In the second case study, we perform UE (User Equipment) location estimation by leveraging the signal strength from nearby cell towers, which is pertinent to various applications, such as public safety and disaster rescue scenarios where GPS (Global Positioning System) is not available (e.g., indoor environments). Our experimental results show that it is feasible to accurately derive the location of a UE from signal strength. In addition, we design a Hardware-In-the-Loop (HIL) simulation environment using the Vienna LTE simulator, the srsLTE library, and our SDR testbed. We develop a software wrapper to connect the Vienna LTE simulator to our SDR testbed via the srsLTE library. Our experimental results demonstrate the comparative performance of simulated UEs and eNodeBs against real SDR UEs and eNodeBs, as well as how a simulated environment can interact with a real-world implementation.
The high mobility of Army tactical networks, combined with their close proximity to hostile actors, elevates the risks associated with short-range network attacks. The connectivity model for such short-range connections under active operations is extremely fluid, and highly dependent upon the physical space within which the element is operating, as well as the patterns of movement within that space. To handle these dependencies, we introduce the notion of "key cyber-physical terrain": locations within an area of operations that allow for effective control over the spread of proximity-dependent malware in a mobile tactical network, even as the elements of that network are in constant motion with an unpredictable pattern of node-to-node connectivity. We provide an analysis of movement models and approximation strategies for finding such critical nodes, and demonstrate via simulation that we can identify such key cyber-physical terrain quickly and effectively.
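One common graph-theoretic proxy for such choke points, offered purely as an illustration and not as the paper's approximation strategy, is centrality over a contact graph derived from the movement model:

    import networkx as nx

    # Hypothetical contact graph: nodes are locations, edges are observed
    # proximity contacts under the movement model.
    G = nx.Graph([(1, 2), (2, 3), (2, 4), (4, 5), (3, 5)])

    # High betweenness centrality marks locations that many contact paths
    # cross, i.e., candidates for key cyber-physical terrain.
    scores = nx.betweenness_centrality(G)
    key_terrain = sorted(scores, key=scores.get, reverse=True)[:2]
    print(key_terrain)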
In Advanced Metering Infrastructure (AMI) networks, power data collection schedules for smart meters are static. Due to this static nature, attackers may predict the transmission behavior of the smart meters, which can be used to launch selective jamming attacks that block the transmissions. To avoid such attack scenarios and increase the resilience of AMI networks, in this paper we propose dynamic data reporting schedules for smart meters based on the moving target defense (MTD) paradigm. The idea behind MTD-based schedules is to randomize the transmission times so that attackers will not be able to guess these schedules. Specifically, we assign a time slot to each smart meter, and in each round we shuffle the slots with the Fisher-Yates shuffle algorithm, which has been shown to provide secure randomness. We also take into account the periodicity of the data transmissions that may be needed by the utility company. With the proposed approach, a smart meter is guaranteed to send its data in a different time slot in each round. We implemented the proposed approach in ns-3 using the IEEE 802.11s wireless mesh standard as the communication infrastructure. Simulation results show that our protocol can secure the network from selective jamming attacks without sacrificing performance, providing similar or even better collection time, packet delivery ratio, and end-to-end delay compared to previously proposed protocols.
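A minimal sketch of the per-round slot randomization with the Fisher-Yates shuffle and a cryptographic RNG (the meter identifiers are made up; the paper's additional guarantee that every meter moves to a different slot than in the previous round is not enforced here):

    import secrets

    def shuffle_slots(meters):
        # Fisher-Yates shuffle driven by a cryptographic RNG, so the slot
        # order in each round is unpredictable to a jamming attacker.
        slots = list(meters)
        for i in range(len(slots) - 1, 0, -1):
            j = secrets.randbelow(i + 1)  # uniform over [0, i]
            slots[i], slots[j] = slots[j], slots[i]
        return slots  # slots[t] = meter that transmits in time slot t

    # Re-shuffle at the start of every collection round.
    print(shuffle_slots(["m1", "m2", "m3", "m4"]))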
Using mobile sinks to collect sensed data in WSNs (Wireless Sensor Networks) is an effective technique for significantly improving the network lifetime. We investigate the problem of collecting sensed data with a mobile sink in a WSN with unreachable regions such that the network lifetime is maximized and the total tour length is minimized. We propose a polynomial-time heuristic, an ILP-based (Integer Linear Programming) heuristic, and an MINLP-based (Mixed-Integer Non-Linear Programming) algorithm for constructing a shortest-path routing forest for the sensor nodes in unreachable regions; two energy-efficient heuristics for partitioning the sensor nodes in reachable regions into disjoint clusters; and an efficient approach that converts the tour construction problem into a TSP (Travelling Salesman Problem). We have performed extensive simulations on 100 instances with 100, 150, 200, 250, and 300 sensor nodes in an urban area and a forest area. The simulation results show that the average lifetime of all the network instances achieved by the polynomial-time heuristic is 74% of that achieved by the ILP-based heuristic and 65% of that obtained by the MINLP-based algorithm, and that our tour construction heuristic significantly outperforms the state-of-the-art tour construction heuristic EMPS.
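The tour construction is converted into a TSP; as a generic illustration of one standard TSP construction heuristic (not the paper's heuristic and not EMPS), nearest-neighbor over hypothetical cluster-head coordinates:

    import math

    def nearest_neighbor_tour(points):
        # Greedy TSP construction: start at point 0 and repeatedly visit
        # the closest unvisited point; the mobile sink follows this order.
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    print(nearest_neighbor_tour([(0, 0), (2, 1), (5, 0), (1, 4)]))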
Multi-hop Wireless Mesh Networks (WMNs) are a promising technology, and routing protocol design is critical to their effective and efficient operation. A common approach for routing traffic in these networks is to select a minimal-distance path from source to destination, as in wire-line networks. Opportunistic Routing (OR) makes use of the broadcast nature of the wireless medium and is especially helpful in WMNs because all nodes are static. Our proposed scheme of Multicast Opportunistic Routing (MOR) in WMNs is based on broadcast transmissions and Learning Automata (LA) to expand the set of candidate nodes that can aid in retransmitting the data. The receivers must coordinate with one another to avoid duplicated broadcasting of data, which is generally achieved by ranking the forwarding candidates according to an LA-based metric. The most appealing aspect of this protocol is that it intelligently "learns" from past experience and improves its performance. The results obtained with MOR show that the proposed scheme outperforms several existing schemes and is an improved, more effective version of opportunistic routing in mesh networks.
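A minimal sketch of the linear reward-inaction automaton that LA-based forwarding metrics typically build on (the learning constant and the reward signal are assumptions, not taken from the paper):

    def la_update(probs, chosen, rewarded, a=0.1):
        # Linear reward-inaction (L_RI): after a successful forwarding by
        # the chosen candidate, shift probability mass toward it; after a
        # failure, leave the probabilities unchanged.
        if rewarded:
            for i in range(len(probs)):
                if i == chosen:
                    probs[i] += a * (1.0 - probs[i])
                else:
                    probs[i] *= (1.0 - a)
        return probs

    # Candidate 1 forwarded successfully, so its selection probability grows.
    print(la_update([0.25, 0.25, 0.25, 0.25], chosen=1, rewarded=True))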
This paper discusses two issues in multi-channel multi-radio Wireless Mesh Networks (WMNs): gateway placement and gateway selection. To address these issues, we propose a method that places gateways at strategic locations to avoid congestion and adaptively learns to select a more efficient gateway for each wireless router using learning automata. This method, called N-queen Inspired Gateway Placement and Learning Automata-based Selection (NQ-GPLS), considers multiple metrics such as loss ratio, throughput, load at the gateways, and delay. Simulation results from the NS-2 simulator demonstrate that NQ-GPLS significantly improves overall network performance compared to a standard WMN.
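As a rough illustration of the N-queen idea (placing one gateway per grid row so that no two share a row, column, or diagonal spreads them across the deployment area), a generic backtracking solver, not NQ-GPLS itself:

    def n_queens(n, cols=()):
        # Place one gateway per row; reject any column that shares a column
        # or diagonal with an already placed gateway, and backtrack if stuck.
        if len(cols) == n:
            return cols
        row = len(cols)
        for col in range(n):
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                result = n_queens(n, cols + (col,))
                if result:
                    return result
        return None

    # (row, column) gateway positions on a 6x6 grid.
    print(list(enumerate(n_queens(6))))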
A wireless mesh network (WMN) consists of mesh gateways, mesh routers, and mesh clients. In a hybrid WMN, both the backbone mesh network and the client mesh network are mesh-connected. Capacity analysis of multi-hop wireless networks has proven to be an interesting and challenging research topic. The capacity of a hybrid WMN depends on several factors, such as the traffic model, topology, scheduling strategy, and bandwidth allocation strategy. In this paper, the capacity of hybrid WMNs is studied with respect to the traffic model and bandwidth allocation. The traffic of a hybrid WMN is categorized into internal and external traffic, and the capacity of each mesh client is then derived under an appropriate bandwidth allocation. The analytical results show that a hybrid WMN achieves lower capacity than an infrastructure WMN. These results and conclusions can guide the construction of hybrid WMNs.
Rootkit detection in the Windows operating system is an important part of information security monitoring and audit systems. Methods of hidden process detection are analyzed. We developed software that implements four methods of hidden process detection in user mode (a PID-based method, a descriptor-based method, a system-call-based method, and an opened-windows-based method) for use in monitoring and audit systems.
Because the Internet makes human lives easier, many devices are connected to the Internet daily. The private data of individuals and large companies, including health-related data, user bank accounts, and military and manufacturing data, are increasingly accessible via the Internet. Because almost all data is now accessible through the Internet, protecting these valuable assets has become a major concern. The goal of cyber security is to protect such assets from unauthorized use. Attackers use automated tools and manual techniques to penetrate systems by exploiting existing vulnerabilities and software bugs. To provide good enough security, attack methodologies, vulnerability concepts, and defence strategies should be thoroughly investigated. The main purpose of this study is to show that the patches released for existing vulnerabilities at the operating system (OS) level and in software programs do not completely prevent cyber-attacks. Instead, producing specific patches for each company and fixing software bugs with awareness of the software running on each specific system can provide a better result. This study also demonstrates that firewalls, antivirus software, Windows Defender, and other prevention techniques are not sufficient to prevent attacks. Instead, this study examines different aspects of penetration testing to determine vulnerable applications and hosts, using the Nmap and Metasploit frameworks. As a test case, a virtualized system is used that includes different versions of the Windows and Linux OSs.
Organizations face the issue of how best to allocate their security resources. Thus, they need an accurate method for assessing how many new vulnerabilities will be reported for the operating systems (OSs) they use in a given time period. Our approach consists of clustering vulnerabilities by leveraging the text information within vulnerability records, and then simulating the mean value function of vulnerabilities after relaxing the monotonic intensity function assumption, which is prevalent among studies that use software reliability models (SRMs) and the nonhomogeneous Poisson process (NHPP) in modeling. We applied our approach to the vulnerabilities of four OSs: Windows, Mac, iOS, and Linux. For the OSs analyzed, in terms of both curve fitting and prediction capability, our results are more accurate in all cases than those of a power-law model without clustering drawn from a family of SRMs.
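For intuition about the baseline, a sketch that fits an NHPP power-law mean value function m(t) = a * t^b to cumulative vulnerability counts (the counts below are synthetic, and the paper's clustered model with a non-monotonic intensity is not reproduced):

    import numpy as np
    from scipy.optimize import curve_fit

    # Power-law mean value function of an NHPP: m(t) = a * t**b.
    def mean_value(t, a, b):
        return a * np.power(t, b)

    # Synthetic cumulative vulnerability counts over 12 reporting periods.
    t = np.arange(1, 13)
    counts = np.array([3, 7, 10, 15, 18, 24, 27, 33, 36, 41, 44, 50])

    (a, b), _ = curve_fit(mean_value, t, counts, p0=(1.0, 1.0))
    print(f"m(t) = {a:.2f} * t^{b:.2f}; predicted m(13) = {mean_value(13, a, b):.1f}")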
The advancement of technology has changed how people work and what software and hardware they use. From the conventional personal computer to the GPU, hardware technology and capability have improved dramatically, and so have the operating systems that run on them. Unfortunately, current industry practice for comparing OSs takes a single perspective: it either benchmarks hardware-level performance or performs penetration testing to check the security features of an OS. This rigid method of benchmarking does not reflect the true performance of an OS, as the performance analysis is neither comprehensive nor conclusive. To illustrate this deficiency, the study performed hardware-level and operational-level benchmarking on Windows XP, Windows 7, and Windows 8, and the results indicate that there are instances where Windows XP excels over its newer counterparts. Overall, the research shows that Windows 8 is a superior OS compared to its predecessors running on the same hardware. Furthermore, the findings show that automated benchmarking tools are less effective at benchmarking systems that run on Windows XP and older OSs, as these do not support DirectX 11 and other advanced features that the hardware supports. This highlights the need for a unified benchmarking approach that compares other aspects of an OS, such as user-oriented tasks and security parameters, to provide a complete comparison. Therefore, this paper proposes a unified approach for operating system (OS) comparison, illustrated with a Windows OS case study. This unified approach compares OSs from three aspects: hardware-level performance, operational-level performance, and security tests.
Due to the increasing threat of network attacks, firewalls have become crucial elements in network security and have been widely deployed in most businesses and institutions for securing private networks. The function of a firewall is to examine each packet that passes through it and decide whether to let it pass or halt it based on preconfigured rules and policies, making the firewall the first line of defense against cyber attacks. However, most people do not know how firewalls work, and most users of the Windows operating system do not know how to use the Windows embedded firewall. This paper explains how firewalls work, describes firewall types and firewall policies, and then presents a novel application (QudsWall), developed by the authors, that manages the Windows embedded firewall and makes it easy to use.
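A minimal sketch of the first-match rule evaluation described above, with a default-deny policy (the rule fields and values are illustrative and are not QudsWall's or the Windows firewall's implementation):

    # Each rule: (source-address prefix, destination port, action).
    # The first matching rule wins; anything unmatched is denied.
    RULES = [
        ("10.0.0.", 22, "ALLOW"),  # SSH from the internal subnet
        ("",        80, "ALLOW"),  # HTTP from anywhere
    ]

    def filter_packet(src_ip, dst_port):
        for prefix, port, action in RULES:
            if src_ip.startswith(prefix) and dst_port == port:
                return action
        return "DENY"  # default-deny: halt packets no rule lets through

    print(filter_packet("10.0.0.5", 22))     # ALLOW
    print(filter_packet("203.0.113.9", 23))  # DENY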
VisorFlow aims to monitor the flow of information between processes without requiring modifications to the operating system kernel or its userspace. VisorFlow runs in a privileged Xen domain and monitors the system calls executing in other domains running either Linux or Windows. VisorFlow uses its observations to prevent confidential information from leaving a local network. We describe the design and implementation of VisorFlow, describe how we used VisorFlow to confine naïve users and malicious insiders during the 2017 Cyber-Defense Exercise, and provide performance measurements. We have released VisorFlow and its companion library, libguestrace, as open-source software.
Lehigh University has set a goal to implement System Center Configuration Manager by the end of 2017. This project is being spearheaded by one of our Senior Computing Consultants, who has researched and been trained in the Microsoft Virtualization stack. We will discuss our roadmaps, results from our proof-of-concept environments, and the discussions driving this project.
In Distributed Denial of Service (DDoS) attacks, an attacker tries to disable a service with a flood of seemingly legitimate requests from multiple devices; this is usually accompanied by a sharp spike in the number of distinct IP addresses / flows accessing the system in a short time frame. Hence, the number of distinct elements over sliding windows is a fundamental signal in DDoS identification. Additionally, assessing whether a specific flow has recently accessed the system, known as the Set Membership problem, can help us identify the attacking parties. Here, we show how to extend the functionality of a state-of-the-art algorithm for set membership over a sliding window of W elements. We now also support estimation of the distinct flow count, using as little as log2(W) additional bits.
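For intuition, an exact but memory-heavy version of the distinct-flow signal over a sliding window; the paper instead augments a set-membership sketch so that the estimate costs only about log2(W) extra bits, which this naive dictionary version makes no attempt to match:

    def distinct_over_window(stream, W):
        last_seen = {}  # last_seen[flow] = index of the flow's latest packet
        for i, flow in enumerate(stream):
            last_seen[flow] = i
            # A flow counts as distinct if it appeared in the last W packets.
            yield sum(1 for t in last_seen.values() if t > i - W)

    # Sharp spikes in this signal are a classic hint of a DDoS flood.
    print(list(distinct_over_window(["a", "b", "a", "c", "d", "d"], W=3)))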
Digitally signed malware can bypass system protection mechanisms that install or launch only programs with valid signatures. It can also evade anti-virus programs, which often forego scanning signed binaries. Known from advanced threats such as Stuxnet and Flame, this type of abuse has not been measured systematically in the broader malware landscape. In particular, the methods, effectiveness window, and security implications of code-signing PKI abuse are not well understood. We propose a threat model that highlights three types of weaknesses in the code-signing PKI. We overcome challenges specific to code-signing measurements by introducing techniques for prioritizing the collection of code-signing certificates that are likely abusive. We also introduce an algorithm for distinguishing among different types of threats. These techniques allow us to study threats that breach the trust encoded in the Windows code-signing PKI. The threats include stealing the private keys associated with benign certificates and using them to sign malware, or impersonating legitimate companies that do not develop software and, hence, do not own code-signing certificates. Finally, we discuss the actionable implications of our findings and propose concrete steps for improving the security of the code-signing ecosystem.
We consider the automatic verification of information flow security policies of web-based workflows, such as conference submission systems like EasyChair. Our workflow description language allows for loops, non-deterministic choice, and an unbounded number of participating agents. The information flow policies are specified in a temporal logic for hyperproperties. We show that the verification problem can be reduced to the satisfiability of a formula of first-order linear-time temporal logic, and provide decidability results for relevant classes of workflows and specifications. We report on experimental results obtained with an implementation of our approach on a series of benchmarks.
Internet-of-Things devices often collect and transmit sensitive information like camera footage, health monitoring data, or whether someone is home. These devices protect data in transit with end-to-end encryption, typically using TLS connections between devices and associated cloud services. But these TLS connections also prevent device owners from observing what their own devices are saying about them. Unlike in traditional Internet applications, where the end user controls one end of a connection (e.g., their web browser) and can observe its communication, Internet-of-Things vendors typically control the software in both the device and the cloud. As a result, owners have no way to audit the behavior of their own devices, leaving them little choice but to hope that these devices are transmitting only what they should. This paper presents TLS–Rotate and Release (TLS-RaR), a system that allows device owners (e.g., consumers, security researchers, and consumer watchdogs) to authorize devices, called auditors, to decrypt and verify recent TLS traffic without compromising future traffic. Unlike prior work, TLS-RaR requires no changes to TLS's wire format or cipher suites, and it allows the device's owner to conduct a surprise inspection of recent traffic, without prior notice to the device that its communications will be audited.
Trust networks have been widely used to mitigate the data sparsity and cold-start problems of collaborative filtering. Recently, some approaches have been proposed that exploit explicit signed trust relationships, i.e., trust and distrust relationships. These approaches ignore the fact that users, despite trusting or distrusting each other in a trust network, may have different preferences in real life. Most of these approaches also treat distrust as transitive in the same way as trust, whereas other existing work has observed that trust is transitive while distrust is intransitive. Moreover, explicit signed trust relationships are fairly sparse and may not suffice to infer users' true preferences. In this paper, we propose to create implicit signed trust relationships and exploit them along with explicit signed trust relationships to solve the sparsity problem of trust relationships. We also confirm the similarity (resp. dissimilarity) of implicit and explicit trust (resp. distrust) relationships using the similarity score between users, so that users' true preferences can be inferred. In addition to these strategies, we propose a matrix factorization model that simultaneously exploits implicit and explicit signed trust relationships along with rating information, while handling the transitivity of trust and the intransitivity of distrust. Extensive experiments on the Epinions dataset show that the proposed approach outperforms existing approaches in terms of accuracy.
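A hedged sketch of one ingredient: a single SGD step of matrix factorization with a trust-regularization term that pulls a user's factors toward those of trusted users (the factor sizes, learning rates, and restriction to trust-only relations are assumptions; the paper's full model with implicit and signed distrust relations is not reproduced):

    import numpy as np

    def sgd_step(U, V, u, i, r, trusted, lr=0.01, lam=0.1, beta=0.1):
        # U: user factors, V: item factors, r: rating of item i by user u.
        u_old = U[u].copy()
        err = r - u_old @ V[i]
        # Social term: pull u toward the mean factors of users u trusts
        # (a distrust term would push away; omitted for brevity).
        social = u_old - np.mean(U[trusted], axis=0)
        U[u] += lr * (err * V[i] - lam * u_old - beta * social)
        V[i] += lr * (err * u_old - lam * V[i])

    rng = np.random.default_rng(0)
    U, V = rng.normal(size=(4, 3)), rng.normal(size=(5, 3))
    sgd_step(U, V, u=0, i=2, r=4.0, trusted=[1, 3])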
Client-side JavaScript has become ubiquitous in web applications to improve user experience and reduce server load. However, since clients are untrusted, servers cannot rely on the confidentiality or integrity of client-side JavaScript code and the data that it operates on. For example, client-side input validation must be repeated at server side, and confidential business logic cannot be offloaded. In this paper, we present TrustJS, a framework that enables trustworthy execution of security-sensitive JavaScript inside commodity browsers. TrustJS leverages trusted hardware support provided by Intel SGX to protect the client-side execution of JavaScript, enabling a flexible partitioning of web application code. We present the design of TrustJS and provide initial evaluation results, showing that trustworthy JavaScript offloading can further improve user experience and conserve more server resources.
As the Internet of Things (IoT) matures, many concerns are being raised about security, privacy, and interoperability. The Web of Things (WoT) model leverages web technologies to improve interoperability. Thanks to its distributed components, the web has scaled well beyond initial expectations. Still, secure authentication and communication across organization boundaries rely on the Public Key Infrastructure (PKI), which is a non-transparent, centralized single point of failure. We can improve transparency and shorten the chain of trust, and thus significantly improve IoT security, by leveraging blockchain technology and web security standards. In this paper, we build a scalable, decentralized IoT-centric PKI and discuss how to combine it with the emerging web authentication and authorization framework for constrained environments.
The Semantic Web today is a web that allows for intelligent knowledge retrieval by means of semantically annotated tags. This web, also known as the Intelligent Web, aims to provide meaningful information to humans and machines alike. However, the information thus provided lacks the component of trust. We therefore propose a method to embed trust in semantic web documents through the concept of provenance, which answers when, where, and by whom the documents were created or modified. This paper demonstrates the approach using the Manchester approach to provenance, implemented in a university ontology.
There are vast amounts of information in our world, and accessing the most accurate information quickly is becoming more difficult and complicated. Much relevant information gets ignored, which leads to duplication of work and effort. Research therefore focuses on providing rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Due to the massive number of search results, the user will normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information-filtering system that predicts the relevance of retrieved information to the user's needs according to some criteria; hence, it can provide the user with the results that best fit their needs. The services provided through the web normally return massive amounts of information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with trust information, so that recommendations are ranked according to their level of trust. In our research, we produced a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments, and feedback from users are trusted. We customized the concept of LoT (Level of Trust) [1], since it can cover medical, shopping, and learning through social media. The proposed system, TRS_LoT, provides trusted recommendations to users with a high percentage of accuracy. A dataset of 300 posts with more than 5000 comments from "Amazon" was selected, and the experiment was conducted on this dataset based on post ratings.