Biblio
Multi-module Cyber-Physical Systems (CPSs), such as satellite clusters, swarms of Unmanned Aerial Vehicles (UAVs), and fleets of Unmanned Underwater Vehicles (UUVs), are examples of managed distributed real-time systems that host mission-critical applications such as sensor fusion or coordinated flight control. These systems are dynamic and reconfigurable, and provide a “CPS cluster-as-a-service” for mission-specific scientific applications that can benefit from the elasticity of the cluster membership and the heterogeneity of the cluster members. The distributed and remote nature of these systems often necessitates the use of Deployment and Configuration (D&C) services to manage the lifecycle of software applications. Fluctuating resources, volatile cluster membership, and changing environmental conditions require resilience; however, due to the dynamic nature of the system, human intervention is often infeasible. This necessitates a self-adaptive D&C infrastructure that supports autonomous resilience. Such an infrastructure must be able to adapt existing applications on the fly in order to provide application resilience, and must itself be able to adapt to account for changes in the system as well as tolerate failures.
This paper describes the design and architectural considerations needed to realize a self-adaptive D&C infrastructure for CPSs. Previous efforts in this area have resulted in D&C infrastructures that support application adaptation via dynamic re-deployment and re-configuration mechanisms. Our work, presented in this paper, improves upon these past efforts by implementing a self-adaptive D&C infrastructure which is itself resilient. The paper concludes with experimental results that demonstrate the autonomous resilience capabilities of our new D&C infrastructure.
Cellular data networks are proliferating to address the need for ubiquitous connectivity. To cope with the increasing number of subscribers and with the spatiotemporal variations of the wireless signals, current cellular networks use opportunistic schedulers, such as the Proportional Fairness (PF) scheduler, to maximize network throughput while maintaining fairness among users. Such scheduling decisions are based on channel quality metrics and Automatic Repeat reQuest (ARQ) feedback reports provided by the User Equipment (UE). Implicit in current networks is a priori trust in every UE's feedback. Malicious UEs can thus exploit this trust to disrupt service by intelligently faking their reports. This work proposes a trustworthy version of the PF scheduler (called TPF) to mitigate the effects of such Denial-of-Service (DoS) attacks. In brief, based on the channel quality reported by the UE, we assign a probability to each possible ARQ feedback. We then use the probability associated with the actual ARQ report to assess the UE's reporting trustworthiness, and we adapt the scheduling mechanism to give higher priority to more trusted users. Our evaluations show that TPF 1) does not induce any performance degradation under benign settings, and 2) completely mitigates the effects of malicious UE activity. In particular, while colluding attackers can obtain up to 77 percent of the time slots with the most sophisticated attack, TPF is able to contain this percentage to as low as 6 percent.
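The following is a minimal sketch of the kind of trust-weighted proportional-fairness metric this abstract describes. The CQI-to-ACK-probability table, the trust-update rule, and all names are illustrative assumptions, not the TPF paper's exact formulation.

```python
# Toy sketch of a trust-weighted proportional-fairness (PF) metric.
# The CQI->ACK-probability table, trust update, and weighting are
# illustrative assumptions, not the TPF paper's exact formulas.

ACK_PROB_BY_CQI = {1: 0.55, 5: 0.75, 10: 0.90, 15: 0.97}  # assumed mapping

class UE:
    def __init__(self, name):
        self.name = name
        self.trust = 1.0          # starts fully trusted
        self.avg_rate = 1e-6      # smoothed served rate; a full scheduler
                                  # would update this after each allocation

    def update_trust(self, reported_cqi, ack_received, alpha=0.1):
        """Move trust toward the likelihood of the observed ARQ outcome."""
        p_ack = ACK_PROB_BY_CQI.get(reported_cqi, 0.8)
        likelihood = p_ack if ack_received else (1.0 - p_ack)
        self.trust = (1 - alpha) * self.trust + alpha * likelihood

def pf_metric(ue, instantaneous_rate):
    """Classic PF ratio scaled by the UE's trust score."""
    return ue.trust * instantaneous_rate / ue.avg_rate

def schedule(ues, rates):
    """Pick the UE with the highest trust-weighted PF metric."""
    return max(ues, key=lambda ue: pf_metric(ue, rates[ue.name]))
```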
Presented as part of the Illinois Science of Security Lablet Bi-Weekly Meeting, September 2014.
Evaluating the trade-offs involved in cybersecurity professionalization.
Decision makers need capabilities to quickly model and effectively assess consequences of actions and reactions in crisis de-escalation environments. The creation and what-if exercising of such models has traditionally had onerous resource requirements. This research demonstrates fast and viable ways to build such models in operational environments. Through social network extraction from texts, network analytics to identify key actors, and then simulation to assess alternative interventions, advisors can support practicing and execution of crisis de-escalation activities. We describe how we used this approach as part of a scenario-driven modeling effort. We demonstrate the strength of moving from data to models and the advantages of data-driven simulation, which allow for iterative refinement. We conclude with a discussion of the limitations of this approach and anticipated future work.
Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
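As a concrete illustration of the utility-based reasoning mentioned above, the sketch below scores a few candidate DoS mitigation strategies against weighted business qualities and picks the best one. The quality dimensions, weights, strategy names, and impact estimates are invented for illustration; they are not values or APIs from the Rainbow framework.

```python
# Toy utility-based selection of a DoS mitigation strategy.
# Quality dimensions, weights, and impact estimates are illustrative
# assumptions, not values from the Rainbow framework itself.

BUSINESS_WEIGHTS = {"availability": 0.5, "user_annoyance": 0.3, "cost": 0.2}

# Estimated impact of each strategy on each quality (higher is better).
STRATEGIES = {
    "add_capacity":    {"availability": 0.9, "user_annoyance": 0.9, "cost": 0.2},
    "throttle_source": {"availability": 0.7, "user_annoyance": 0.6, "cost": 0.8},
    "add_captcha":     {"availability": 0.8, "user_annoyance": 0.3, "cost": 0.9},
}

def utility(impacts, weights):
    return sum(weights[q] * impacts[q] for q in weights)

def choose_strategy(weights=BUSINESS_WEIGHTS):
    return max(STRATEGIES, key=lambda s: utility(STRATEGIES[s], weights))

print(choose_strategy())  # 'add_capacity' under these availability-heavy weights
```

Changing the weight vector models a different business context, which is what drives the context-sensitive choices the abstract mentions.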
Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
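The sketch below shows the two ingredients in schematic form: selecting only tests that cover change-affected, race-relevant program elements, then prioritizing them by how many shared-memory access pairs they exercise. The coverage data model is an assumption for illustration, not SimRT's implementation.

```python
# Schematic regression test selection + prioritization for race detection.
# The coverage data model is invented for illustration; it is not SimRT's.

def select_tests(tests, changed_elems):
    """Keep only tests whose covered race-relevant elements intersect the change."""
    return [t for t in tests if t["covered_elems"] & changed_elems]

def prioritize_tests(tests):
    """Run tests that exercise more shared-access pairs first."""
    return sorted(tests, key=lambda t: len(t["shared_access_pairs"]), reverse=True)

tests = [
    {"name": "t1", "covered_elems": {"lock_a", "buf"},
     "shared_access_pairs": {("w:buf", "r:buf")}},
    {"name": "t2", "covered_elems": {"queue"},
     "shared_access_pairs": set()},
]
changed = {"buf"}
for t in prioritize_tests(select_tests(tests, changed)):
    print(t["name"])      # only t1 survives selection here
```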
Phishing is a social engineering tactic that targets internet users in an attempt to trick them into divulging personal information. When opening an email, users are faced with the decision of determining if an email is legitimate or an attempt at phishing. Although software has been developed to assist the user, studies have shown they are not foolproof, leaving the user vulnerable. Multiple training programs have been developed to educate users in their efforts to make informed decisions; however, training that conveys the real world consequences of phishing or training that increases a user’s fear level have not been developed. Conveying real world consequences of a situation and increasing a user’s fear level have been proven to enhance the effects of training in other fields. Ninety-six participants were recruited and randomly assigned to training programs with phishing consequences, training programs designed to increase fear, or a control group. Preliminary results indicate that training helped users identify phishing emails; however, little difference was seen among the three groups. Future analysis will include a factor analysis of personality and individual differences that influence training efficacy.
Complex software systems are becoming increasingly prevalent in aerospace applications, in particular to accomplish critical tasks. Ensuring the safety of these systems is crucial, as they can have subtly different behaviors under slight variations in operating conditions. This paper advocates the use of formal verification techniques, and in particular theorem proving, for hybrid software-intensive systems as a well-founded complementary approach to classical aerospace verification and validation techniques such as testing or simulation. As an illustration of these techniques, a novel lateral midair collision-avoidance maneuver is studied in an ideal setting, without accounting for the uncertainties of the physical reality. The challenges that naturally arise when applying such technology to industrial-scale applications are then detailed, and proposals are given on how to address these issues.
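To make the flavor of such a correctness statement concrete, a collision-avoidance property can be phrased as a safety contract over a hybrid program, in the style of differential dynamic logic. The formula below is schematic, with illustrative symbols; it is not the maneuver model actually verified in the paper.

```latex
% Schematic safety contract for a collision-avoidance maneuver
% (illustrative symbols only, not the paper's verified model).
\[
  \mathit{Init} \;\rightarrow\;
  \bigl[\, (\mathit{ctrl};\; \mathit{plant})^{*} \,\bigr]\;
  \lVert p_{\mathrm{own}} - p_{\mathrm{intruder}} \rVert \;\geq\; d_{\min}
\]
```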
Identifying the factors behind countries’ weakness to cyber-attacks is an important step towards addressing these weaknesses at the root level. For example, identifying the factors that make some countries cyber-crime safe havens can inform policy actions about how to reduce the attractiveness of these countries to cyber-criminals. Currently, however, identifying these factors is mostly based on expert opinions and speculations.
In this work, we perform an empirical study to statistically test the validity of these opinions and speculations. In our analysis, we use Symantec’s Worldwide Intelligence Network Environment (WINE) Intrusion Prevention System (IPS) telemetry data, which contain attack reports from more than 10 million customer computers worldwide. We use regression analysis to test for the relevance of multiple factors, including monetary and computing resources, cyber-security research and institutions, and corruption.
Our analysis confirms some hypotheses and disproves others. We find that many countries in Eastern Europe extensively host attacking computers because of a combination of good computing infrastructure and high corruption rate. We also find that web attacks and fake applications are most prevalent in rich countries because attacks on these countries are more lucrative. Finally, we find that computers in Africa launch the lowest rates of cyber-attacks. This is surprising given the bad cyber reputation of some African countries such as Nigeria. Our research has many policy implications.
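A minimal sketch of the kind of country-level regression described above is shown below, using ordinary least squares. The factor names, the synthetic data, and the model form are placeholders, not the study's actual WINE-based dataset or specification.

```python
# Minimal sketch of a country-level OLS regression relating attack rates
# to candidate factors. Factor names and data are placeholders, not the
# study's actual WINE-based dataset or model specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50  # pretend we observe 50 countries

# Hypothetical explanatory factors per country (standardized).
X = np.column_stack([
    rng.normal(size=n),   # computing infrastructure index
    rng.normal(size=n),   # corruption index
    rng.normal(size=n),   # GDP per capita
])
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)  # synthetic attack rate

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params)    # estimated coefficients per factor
print(model.pvalues)   # significance of each factor
```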
Both SAT and #SAT can represent difficult problems in seemingly dissimilar areas such as planning, verification, and probabilistic inference. Here, we examine an expressive new language, #∃SAT, that generalizes both of these languages. #∃SAT problems require counting the number of satisfiable formulas in a concisely-describable set of existentially quantified, propositional formulas. We characterize the expressiveness and worst-case difficulty of #∃SAT by proving it is complete for the complexity class #P^NP[1], and relating this class to more familiar complexity classes. We also experiment with three new general-purpose #∃SAT solvers on a battery of problem distributions including a simple logistics domain. Our experiments show that, despite the formidable worst-case complexity of #P^NP[1], many of the instances can be solved efficiently by noticing and exploiting a particular type of frequent structure.
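One common way to state the counting problem described above: given a propositional formula φ over disjoint variable sets X and Y, count the assignments to X under which the existentially quantified formula remains satisfiable. The notation below is an illustrative rendering, not taken verbatim from the paper.

```latex
% One common way to state the #\exists SAT counting problem
% (notation illustrative, not taken verbatim from the paper).
\[
  \#\exists\mathrm{SAT}(\varphi, X, Y) \;=\;
  \bigl|\,\{\, \tau : X \to \{0,1\} \;\mid\;
      \exists\, \sigma : Y \to \{0,1\}.\;\; (\tau \cup \sigma) \models \varphi \,\}\,\bigr|
\]
```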
Cloud computing is an emerging paradigm and technology for hosting and delivering resources over a network such as the internet, using concepts of virtualization, processing power, and storage. However, many challenging issues remain unresolved in cloud-based environments and reduce reliability and efficiency for service providers and users. User authentication is one of the most challenging of these issues, and to address it this paper proposes an efficient user authentication model with defined phases during both the registration and access processes. Geo Detection and Digital Signature Authorization (GD2SA) is a user authentication tool for provisional access permission in cloud computing environments. The main aim of GD2SA is to compare the location of an unregistered device with the location of the user, as determined via devices the user is known to carry (e.g., a smartphone). In addition, the authentication algorithm uses the digital signature of the account owner to verify the identity of the applicant. The model is evaluated in this paper with respect to three main parameters: efficiency, scalability, and security. Overall, the theoretical analysis of the proposed model shows that it can increase efficiency and reliability in cloud computing as an emerging technology.
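A toy sketch of the geo-detection step is given below: compare the location reported by the unregistered device against the location of a device the account owner carries, and combine that with a signature check. The distance threshold, the verify_signature stub, and all names are assumptions, not GD2SA's actual algorithm.

```python
# Toy sketch of GD2SA-style geo-detection: compare an unregistered device's
# location with the location of a device the account owner carries. The
# threshold and the verify_signature stub are illustrative assumptions.
from math import radians, sin, cos, atan2, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * atan2(sqrt(a), sqrt(1 - a))

def verify_signature(request):
    """Placeholder for checking the account owner's digital signature."""
    return request.get("signature_ok", False)

def grant_provisional_access(new_device_loc, owner_phone_loc, request, max_km=1.0):
    near_owner = haversine_km(*new_device_loc, *owner_phone_loc) <= max_km
    return near_owner and verify_signature(request)
```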
As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures for ensuring the security of cloud computing, but applying traditional access control models directly to the cloud cannot resolve the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in cloud computing environments. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavioral trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism, and the security problems of access control are addressed by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee the trustworthiness of interactions between users and cloud service nodes.
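Below is a toy sketch of the mutual-trust gate described above: access is allowed only when both the user's behavioral trust and the cloud node's credibility clear policy thresholds. The scores, weights, and thresholds are invented for illustration and are not MTBAC's actual parameters.

```python
# Toy sketch of a mutual-trust access check in the spirit of MTBAC.
# Trust scores, weights, and thresholds are invented for illustration.

def combined_trust(user_behavior_trust, node_credibility, w_user=0.5, w_node=0.5):
    """Weighted combination of the two directions of trust."""
    return w_user * user_behavior_trust + w_node * node_credibility

def allow_access(user_trust, node_cred, policy_threshold=0.7):
    # Both parties must be individually acceptable and jointly above threshold.
    return (user_trust >= 0.5 and node_cred >= 0.5
            and combined_trust(user_trust, node_cred) >= policy_threshold)

print(allow_access(0.9, 0.6))   # True under these illustrative values
print(allow_access(0.9, 0.3))   # False: the node is not credible enough
```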
Radio Frequency IDentification (RFID) is a technique for speedy and efficient identification; it has been around for more than 50 years and was initially developed to improve warfare machinery. RFID technology bridges two technologies in the area of Information and Communication Technologies (ICT), namely Product Code (PC) technology and wireless technology. This broad-based, rapidly expanding technology impacts business, the environment, and society. The operating principle of an RFID system is as follows. The reader starts a communication process by radiating an electromagnetic wave. This wave is intercepted by the antenna of the RFID tag placed on the item to be identified. An induced current is created at the tag and activates its integrated circuit, enabling it to send a wave back to the reader. The reader then relays the information to the host, where it is processed. RFID is used for a wide range of applications in almost every field (health, education, industry, security, management ...). In this review paper, we focus on agricultural and environmental applications.
The trusted network connection is a hot topic in the trusted computing field, where trust measurement and access control technologies are used to counter network security threats. However, the trusted network connection lacks fine-grained state and real-time measurement support for the client, and its authentication mechanism is difficult to apply, which can easily lead to the loss of identity privacy. To solve these problems, this paper presents a trust measurement scheme suitable for clients in a trusted network. The scheme integrates an authentication mechanism, initial state measurement, and real-time state measurement; building on the authentication mechanism and the initial state measurement, it uses real-time state measurement as the core method to complete the trust measurement of the client. The scheme supports both static and dynamic measurement. Overall, its fine granularity and its dynamic, real-time state measurement make it possible to enforce more fine-grained security policies, thereby overcoming inadequacies in the current trusted network connection.
Wireless sensor and actuator networks (WSAN) constitute an emerging technology with multiple applications in many different fields. Due to the features of WSAN (dynamism, redundancy, fault tolerance, and self-organization), this technology can be used as a supporting technology for the monitoring of critical infrastructures (CIs). For decades, the monitoring of CIs has centered on supervisory control and data acquisition (SCADA) systems, where operators can monitor and control the behavior of the system. The reach of the SCADA system has been hampered by the lack of deployment flexibility of the sensors that feed it with monitoring data. The integration of a multihop WSAN with SCADA for CI monitoring constitutes a novel approach to extend the SCADA reach in a cost-effective way, eliminating this handicap. However, the integration of WSAN and SCADA presents some challenges which have to be addressed in order to comprehensively take advantage of the WSAN features. This paper presents a solution for this joint integration. The solution uses a gateway and a Web services approach together with a Web-based SCADA, which provides an integrated platform accessible from the Internet. A real scenario where this solution has been successfully applied to monitor an electrical power grid is presented.
To reduce the human effort of browsing long surveillance videos, synopsis videos have been proposed. Traditional synopsis video generation, which applies optimization over video tubes, is very time-consuming and infeasible for real-time online generation. This dilemma significantly reduces the feasibility of synopsis video generation in practical situations. To solve this problem, the synopsis video generation problem is formulated in this paper as a maximum a posteriori probability (MAP) estimation problem, in which the positions and appearing frames of video objects are chronologically rearranged in real time without the need to know their complete trajectories. Moreover, a synopsis table is employed with MAP estimation to decide the temporal locations of incoming foreground objects in the synopsis video without requiring an optimization procedure. As a result, the computational complexity of the proposed video synopsis generation method can be significantly reduced. Furthermore, because it does not require prescreening the entire video, the approach can be applied to online streaming videos.
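A toy sketch of the synopsis-table idea follows: for each arriving object tube, pick the start frame in the shorter synopsis timeline that adds the least overlap with tubes already placed, without revisiting earlier decisions. The occupancy model and cost are deliberate simplifications, not the paper's MAP formulation.

```python
# Toy synopsis-table placement: choose, for an incoming object tube, the
# start frame in the synopsis timeline that adds the least overlap with
# tubes already placed. This is a simplification, not the paper's MAP model.

SYNOPSIS_LEN = 300                      # frames in the condensed video
occupancy = [0] * SYNOPSIS_LEN          # the "synopsis table"

def place_tube(tube_len, earliest_start=0):
    """Greedy online placement; earliest_start preserves chronological order."""
    best_start, best_cost = None, None
    for start in range(earliest_start, SYNOPSIS_LEN - tube_len + 1):
        cost = sum(occupancy[start:start + tube_len])   # overlap with placed tubes
        if best_cost is None or cost < best_cost:
            best_start, best_cost = start, cost
    for f in range(best_start, best_start + tube_len):  # commit the placement
        occupancy[f] += 1
    return best_start

print(place_tube(40))       # first tube lands at frame 0
print(place_tube(40, 10))   # next tube is shifted to avoid the occupied region
```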
Cryptographic misuse affects a sizeable portion of Android applications. However, this problem has so far only been studied empirically. In this paper, we perform a systematic analysis of cryptographic misuse, build a cryptographic misuse vulnerability model, and implement a prototype tool, the Crypto Misuse Analyser (CMA). The CMA performs static analysis on Android apps and selects the branches that invoke cryptographic APIs. It then runs the app following the target branch and records the cryptographic API calls. Finally, the CMA identifies cryptographic API misuse vulnerabilities from the records based on the pre-defined model. We also analyze dozens of Android apps with the help of the CMA and find that more than half of them are affected by such vulnerabilities.
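As a minimal sketch of the last step, the code below matches recorded cryptographic API calls against a few well-known misuse rules (ECB mode, constant IV, hard-coded key). The record format and rule set are assumptions for illustration, not the CMA's actual vulnerability model.

```python
# Minimal sketch of rule-based misuse detection over recorded crypto calls.
# The record format and the rule set are assumptions, not the CMA's model.

RECORDED_CALLS = [
    {"api": "Cipher.getInstance", "args": ["AES/ECB/PKCS5Padding"]},
    {"api": "IvParameterSpec",    "args": ["0000000000000000"], "constant": True},
    {"api": "SecretKeySpec",      "args": ["hardcoded-key"],    "constant": True},
]

def find_misuses(calls):
    issues = []
    for c in calls:
        if c["api"] == "Cipher.getInstance" and "/ECB/" in c["args"][0]:
            issues.append("ECB mode used for symmetric encryption")
        if c["api"] == "IvParameterSpec" and c.get("constant"):
            issues.append("constant IV used")
        if c["api"] == "SecretKeySpec" and c.get("constant"):
            issues.append("hard-coded encryption key")
    return issues

print(find_misuses(RECORDED_CALLS))
```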
Keeping up with rapid advances in research in various fields of engineering and technology is a challenging task. Decision makers, including academics, program managers, venture capital investors, industry leaders, and funding agencies, not only need to be abreast of the latest developments but must also be able to assess the effect of growth in certain areas on their core business. Though analyst agencies like Gartner and McKinsey provide such reports for some areas, thought leaders of all organisations still need to amass data from heterogeneous collections like research publications, analyst reports, patent applications, and competitor information to help them finalize their own strategies. Text mining and data analytics researchers have been looking at integrating statistics, text analytics, and information visualization to aid the process of retrieval and analytics. In this paper, we present our work on automated topical analysis and insight generation from large heterogeneous text collections of publications and patents. While most of the earlier work in this area provides search-based platforms, ours is an integrated platform for search and analysis. We present several methods and techniques that help in the analysis and better comprehension of search results. We also present methods for generating insights about emerging and popular trends in research, along with contextual differences between academic research and patenting profiles. Finally, we present novel techniques for presenting topic evolution that help users understand how a particular area has evolved over time.
An important challenge in networked control systems is to ensure the confidentiality and integrity of messages in order to secure the communication and prevent attackers or intruders from compromising the system. However, security mechanisms may jeopardize the temporal behavior of the network data communication because of their computation and communication overhead. In this paper, we study the effect of adding Hash-based Message Authentication Codes (HMAC) to a time-triggered networked control system. Time-Triggered Architectures (TTAs) provide a deterministic and predictable timing behavior that is used to ensure safety, reliability, and fault-tolerance properties. The paper analyzes the computation and communication overhead of adding HMAC and its impact on the performance of the time-triggered network. Experimental validation and performance evaluation results using a TTEthernet network are also presented.
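For reference, the per-frame overhead in question is that of computing and verifying an HMAC tag over a message payload, as in the minimal sketch below. The key and payload sizes are illustrative, not the parameters of the paper's TTEthernet experiments.

```python
# Minimal sketch of the per-frame overhead being measured: computing and
# verifying an HMAC-SHA256 tag over a message payload. Key and payload
# sizes are illustrative, not the TTEthernet experiment's parameters.
import hmac, hashlib, os, time

key = os.urandom(32)
payload = os.urandom(128)          # e.g. one time-triggered frame's payload

start = time.perf_counter()
tag = hmac.new(key, payload, hashlib.sha256).digest()
elapsed = time.perf_counter() - start

# Receiver side: constant-time comparison of the recomputed tag.
assert hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest())
print(f"HMAC-SHA256 over {len(payload)} bytes took {elapsed * 1e6:.1f} microseconds")
```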
A physical unclonable function (PUF) is an integrated circuit (IC) that serves as a hardware security primitive due to its complexity and the unpredictability between its outputs and the applied inputs. PUFs have received a great deal of research interest and significant commercial activity. Public PUFs (PPUFs) address the crucial PUF limitation of being a secret-key technology. To some extent, the first generation of PPUFs are similar to SIMulation Possible, but Laborious (SIMPL) systems and one-time hardware pads, and employ the time gap between direct execution and simulation. The second PPUF generation employs both process variation and device aging which results in matched devices that are excessively difficult to replicate. The third generation leaves the analog domain and employs reconfigurability and device aging to produce digital PPUFs. We survey representative PPUF architectures, related public protocols and trusted information flows, and related testing issues. We conclude by identifying the most important, challenging, and open PPUF-related problems.
Resilience in the information sciences is notoriously difficult to define, much less to measure. In mechanical engineering, however, the resilience of a substance is mathematically well-defined as an area under the stress-strain curve. We combined inspiration from the mechanics of materials and axioms from queuing theory in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in linguistic and engineering terms and then translate these definitions to the information sciences. As a general assessment of our approach's fitness, we quantify how resilience may be measured in a simple queuing system. Using a very simple model allows clear application of established theory while remaining flexible enough to apply to many other engineering contexts in information science and cyber security. We tested our definitions of resilience via simulation and analysis of networked queuing systems. We conclude with a discussion of the results and make recommendations for future work.
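The sketch below illustrates the "area under the curve" reading of resilience applied to a queuing metric: integrate how far delivered throughput falls below its nominal level over a disruption window, normalized against total loss. The performance trace and the normalization are invented; only the analogy comes from the abstract.

```python
# Toy "area under the curve" resilience measure for a queuing system:
# integrate the shortfall of delivered throughput below its nominal level
# over a disruption window. The trace and normalization are illustrative.
import numpy as np

t = np.linspace(0, 60, 601)                   # seconds
nominal = np.full_like(t, 100.0)              # jobs/s the system should serve

# Hypothetical disruption: throughput dips around t = 30 s and recovers.
delivered = nominal - 60.0 * np.exp(-0.5 * ((t - 30.0) / 5.0) ** 2)

shortfall_area = np.trapz(nominal - delivered, t)      # lost service (jobs)
worst_case_area = np.trapz(nominal, t)                 # losing everything
resilience = 1.0 - shortfall_area / worst_case_area    # 1.0 = fully resilient

print(f"resilience = {resilience:.3f}")
```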
Multiple Inductive Loop Detectors are advanced inductive loop sensors that can measure traffic flow parameters even in conditions where traffic is heterogeneous and does not conform to lanes. The sensor consists of many inductive loops in series, with each loop having a parallel capacitor across it. These inductive and capacitive elements may undergo open-circuit or short-circuit faults during operation, and such faults lead to erroneous interpretation of the data acquired from the loops. Conventional methods for fault diagnosis in inductive loop detectors consume time and effort, as they require experienced technicians and involve extracting the loops from the saw-cut slots in the road. This also means that traffic flow parameters cannot be measured until the sensor system becomes functional again, and the repair activities disturb traffic flow. This paper presents a method for automating fault diagnosis in series-connected Multiple Inductive Loop Detectors, based on an impulse test. The system aids the diagnosis of open/short faults associated with the inductive and capacitive elements of the sensor structure by conveniently displaying the fault status. Since both the fault location and the fault type can be precisely identified using this method, the repair actions are also localised. The proposed system thereby results in significant savings in both repair time and repair costs. An embedded system was developed to realize this scheme and was tested on a loop prototype.