Biblio
TV networks are no longer just closed networks: they increasingly carry Internet services and integrate and interoperate with home IoT devices and the Internet, while client devices are becoming more intelligent. At the same time, these networks face growing security risks. Attacks on TV systems are now commonplace, and many incidents have caused significant damage. Security protection for TV networks largely adopts schemes borrowed from other networks, such as constructing a security perimeter, and little security research has specifically targeted client-side devices. This paper focuses on the mainstream architecture in which the HFC TV network is integrated with the Internet, and conducts a comprehensive security test and analysis of client-side devices, including EOC cable bridge gateways and smart TV set-top boxes. The results show that TV network client devices have severe vulnerabilities such as command injection and exposed system debugging interfaces, through which attackers can gain control of TV clients without authorization. Based on these results, we put forward systematic suggestions for client-side security protection in today's smart TV networks.
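A minimal sketch of the kind of client-side check the abstract alludes to: probing a set-top box or EOC gateway for exposed debugging services. The target address and the port-to-service mapping are hypothetical examples, not values from the paper.

```python
# Illustrative sketch (not from the paper): probe a TV client device for open
# debug-style services such as Telnet (23) or ADB (5555). Ports and the target
# address below are hypothetical examples.
import socket

DEBUG_PORTS = {23: "telnet", 5555: "adb", 9999: "vendor-debug"}  # assumed port list

def probe_debug_interfaces(host: str, timeout: float = 1.0) -> dict:
    """Return a map of open debug-style ports found on the client device."""
    open_ports = {}
    for port, service in DEBUG_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                open_ports[port] = service
    return open_ports

if __name__ == "__main__":
    print(probe_debug_interfaces("192.168.1.100"))  # hypothetical device address
```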
Localizing concurrency faults that occur in production is hard because (1) detailed field data, such as user input, file content, and interleaving schedules, may not be available to developers to reproduce the failure; (2) it is often impractical to assume the availability of multiple failing executions to localize the faults with existing techniques; (3) it is challenging to search for buggy locations in an application given limited runtime data; and (4) concurrency failures at the system level often involve multiple processes or event handlers (e.g., software signals), which cannot be handled by existing tools for diagnosing intra-process (thread-level) failures. To address these problems, we present SCMiner, a practical online bug diagnosis tool that helps developers understand how a system-level concurrency fault happens based on the logs collected by the default system audit tools. SCMiner performs online bug diagnosis, obviating the need for offline bug reproduction. It requires neither code instrumentation on the production system nor the availability of multiple failing executions. Specifically, after the system call traces are collected, SCMiner uses data mining and statistical anomaly detection techniques to identify the failure-inducing system call sequences. It then maps each abnormal sequence to specific application functions. We have conducted an empirical study on 19 real-world benchmarks. The results show that SCMiner is both effective and efficient at localizing system-level concurrency faults.
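A minimal sketch of the general idea of flagging anomalous system-call sequences in an audit trace; this is not SCMiner itself, and the trace format, n-gram size, and frequency threshold are assumptions made for the example.

```python
# Illustrative sketch (not SCMiner): treat rare system-call n-grams in a trace
# as candidate failure-inducing sequences.
from collections import Counter

def rare_syscall_ngrams(trace, n=3, max_count=1):
    """Return n-grams of system calls that occur at most `max_count` times."""
    ngrams = Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))
    return [gram for gram, count in ngrams.items() if count <= max_count]

if __name__ == "__main__":
    # A mostly regular trace with one unusual tail of calls.
    trace = ["open", "read", "write", "close"] * 50 + ["open", "kill", "waitpid"]
    print(rare_syscall_ngrams(trace))  # the n-grams around 'kill' stand out
```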
This paper studies the deletion propagation problem in terms of minimizing view side-effect. It is a problem fundamental to data lineage and quality management, and it can be a key step in analyzing view propagation and repairing data. The investigated problem is a variant of the standard deletion propagation problem: given a source database D, a set of key-preserving conjunctive queries Q, and the set of views V obtained by the queries in Q, we try to identify a set T of tuples from D whose deletion removes all the tuples in a given set of view deletions ΔV while preserving the other view results as much as possible. The complexity of this problem has been well studied for the case of a single query, and dichotomies, even trichotomies, have been developed for different settings. However, no results have been given for multiple queries, which is the more realistic case. We study the complexity and approximability of minimizing the side-effect on the views, i.e., finding T that minimizes the additional damage to V after removing all the tuples of ΔV. We focus on the class of key-preserving conjunctive queries, for which a dichotomy is known in the single-query case. Surprisingly, beyond the single-query case, this problem is NP-hard to approximate within any constant factor in terms of view side-effect, even for a non-trivial set of multiple project-free conjunctive queries. We propose an algorithm that approximates the problem within a bound depending on the number of tuples in both V and ΔV. We also identify a class of polynomially tractable inputs and provide a dynamic programming algorithm to solve the problem for them. Besides data lineage, the study of this problem also provides important foundations for computational issues in data repairing. Furthermore, we introduce related applications of this problem, especially query-feedback-based data cleaning.
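A minimal formalization of the optimization problem as described above; the shorthand Q(D \ T) for the union of the query results on the reduced database is our assumption, not notation taken from the paper.

```latex
% Sketch of the objective, assuming Q(D \setminus T) = \bigcup_{Q_i \in Q} Q_i(D \setminus T).
\begin{align*}
  \min_{T \subseteq D} \;
    & \bigl|\, (V \setminus \Delta V) \setminus Q(D \setminus T) \,\bigr|
    && \text{(view side-effect: extra view tuples lost)} \\
  \text{s.t.} \;
    & \Delta V \cap Q(D \setminus T) = \emptyset
    && \text{(all requested view deletions are realized)}
\end{align*}
```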
The era of information technology has, unfortunately, contributed to a tremendous rise in the number of criminal activities. However, digital artifacts can be utilized to convict cybercriminals and expose their activities. Digital forensics is concerned with all aspects of cybercrime; it seeks digital evidence by following standard methodologies so that the evidence can be admitted in courtrooms. This paper focuses on memory forensics because of the unique artifacts memory holds: memory contains information about the current state of systems and applications, and an application's data explains how a criminal has been interacting with the application just before the memory is acquired. Memory forensics at the application level is currently ad hoc and cumbersome, and support for targeting specific applications is what forensic researchers and practitioners are currently striving to provide. This paper suggests a general solution for investigating any application. Our solution utilizes an application's data structures and variable information in the investigation process, because an application's data has to be stored and retrieved by means of variables. Data structure and variable information can be generated by compilers for debugging purposes. We show that this application information is a valuable resource for the investigator.
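A minimal sketch of the underlying idea of using a known data-structure layout to recover application records from a raw memory dump; this is not the paper's tool, and the record layout, magic value, and dump file name are hypothetical.

```python
# Illustrative sketch (not the paper's solution): carve instances of a known
# application struct out of a raw memory dump, using the field layout a
# compiler's debug information would describe. Layout and magic are assumptions.
import struct

RECORD_FMT = "<I16sI"            # assumed: uint32 magic, 16-byte username, uint32 login_count
RECORD_SIZE = struct.calcsize(RECORD_FMT)
MAGIC = 0xC0FFEE01               # assumed marker value identifying the struct

def carve_records(dump_path: str):
    """Scan the dump and yield (offset, username, login_count) for each match."""
    data = open(dump_path, "rb").read()
    for off in range(0, len(data) - RECORD_SIZE + 1):
        magic, name, count = struct.unpack_from(RECORD_FMT, data, off)
        if magic == MAGIC:
            yield off, name.rstrip(b"\x00").decode(errors="replace"), count

if __name__ == "__main__":
    for offset, user, logins in carve_records("memory.dmp"):  # hypothetical dump file
        print(hex(offset), user, logins)
```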
Static code analysis is a convenient technique to support the development of software. Without any prior test setup, information about later runtime behavior can be inferred, and errors can be found in the code before a regular compiler is even run. Solutions for applying static code analysis to PLC software following IEC 61131-3 already exist, but using these separate tools usually creates a gap in the development process. In this paper we introduce an architecture for using static analysis directly in a development environment and giving instant feedback to developers while they are still editing the PLC software.
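A minimal sketch of the kind of lightweight check such an editor-integrated analysis could run on every keystroke; this is not the paper's architecture, and the simplified Structured Text snippet and regular expressions are assumptions.

```python
# Illustrative sketch (not the paper's tool): flag assignments to variables that
# are not declared in the VAR block of an IEC 61131-3 Structured Text program.
import re

def undeclared_assignments(st_source: str):
    """Return names that are assigned with ':=' but never declared in VAR...END_VAR."""
    var_block = re.search(r"VAR(.*?)END_VAR", st_source, re.S)
    declared = set(re.findall(r"^\s*(\w+)\s*:", var_block.group(1), re.M)) if var_block else set()
    assigned = re.findall(r"^\s*(\w+)\s*:=", st_source, re.M)
    return [name for name in assigned if name not in declared]

if __name__ == "__main__":
    program = """
    PROGRAM Main
    VAR
        counter : INT;
    END_VAR
        counter := counter + 1;
        speed := 0;   (* 'speed' was never declared *)
    END_PROGRAM
    """
    print(undeclared_assignments(program))  # ['speed']
```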
Complex systems are prevalent in many fields such as finance, security, and industry. A fundamental problem in system management is to perform diagnosis in case of system failure so that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool for characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single, global fault propagation in the entire network. However, in real-world large-scale complex systems, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and to jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, a probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in the different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
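A minimal sketch of ranking candidate causal anomalies inside one impaired cluster by diffusing observed anomaly scores over the invariant (sub)network; this uses a plain random-walk-with-restart rather than the paper's low-rank model, and the adjacency matrix and scores are toy assumptions.

```python
# Illustrative sketch (not the paper's low-rank diffusion model): propagate
# per-node anomaly scores over the invariant network and rank nodes by the
# diffused score, highest first.
import numpy as np

def rank_causal_anomalies(adj: np.ndarray, anomaly: np.ndarray,
                          restart: float = 0.5, iters: int = 100) -> np.ndarray:
    """Return node indices ranked by diffused anomaly score (highest first)."""
    # Column-normalize the adjacency matrix to get a transition matrix.
    col_sums = adj.sum(axis=0, keepdims=True)
    P = np.divide(adj, col_sums, out=np.zeros_like(adj, dtype=float), where=col_sums > 0)
    seed = anomaly / anomaly.sum()
    r = seed.copy()
    for _ in range(iters):
        r = (1 - restart) * P @ r + restart * seed
    return np.argsort(-r)

if __name__ == "__main__":
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)       # toy invariant network
    anomaly = np.array([0.1, 0.9, 0.8, 0.1])          # broken invariants observed per node
    print(rank_causal_anomalies(adj, anomaly))
```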
Software-defined networking (SDN) promises to dramatically simplify network management for operators. It provides flexibility and enables innovation through network programmability. With SDN, network management moves from codifying functionality in low-level device configuration to building software that facilitates network management and debugging [1]. SDN provides new techniques to solve long-standing problems in networking, such as routing, by separating the complexity of state distribution from network specification. Despite all the hype surrounding SDN, exploiting its full potential is demanding. Security is still the major issue and a striking challenge that limits the growth of SDN. Moreover, the introduction of new architectural components and novel SDN entities poses new security issues and threats. SDN is considered a major target for digital threats and cyber-attacks [2], which can have more devastating effects than in conventional networks. The initial SDN design did not consider security, so security must be raised on the agenda. This article discusses the security solutions proposed to secure SDNs. We categorize these solutions by presenting a thematic taxonomy based on SDN architectural layers/interfaces [3], security measures and goals, and simulation frameworks. The surveyed literature also points out the possible attacks [2] targeting different layers/interfaces of SDNs. For securing SDNs, the potential requirements and their key enablers are identified and presented. The article also sketches the design of secure and dependable SDNs. Finally, we discuss open issues and challenges of SDN security that professionals and researchers may address in the future.
In this paper, we propose a new approach to diagnosing problems in complex distributed systems. Our approach is based on the insight that many of the trickiest problems are anomalies. For instance, in a network, problems often affect only a small fraction of the traffic (e.g., perhaps a certain subnet), or they only manifest infrequently. Thus, it is quite common for the operator to have “examples” of both working and non-working traffic readily available – perhaps a packet that was misrouted, and a similar packet that was routed correctly. In this case, the cause of the problem is likely to be wherever the two packets were treated differently by the network. We present the design of a debugger that can leverage this information using a novel concept that we call differential provenance. Differential provenance tracks the causal connections between network states and state changes, just like classical provenance, but it can additionally perform root-cause analysis by reasoning about the differences between two provenance trees. We have built a diagnostic tool that is based on differential provenance, and we have used our tool to debug a number of complex, realistic problems in two scenarios: software-defined networks and MapReduce jobs. Our results show that differential provenance can be maintained at relatively low cost, and that it can deliver very precise diagnostic information; in many cases, it can even identify the precise root cause of the problem.
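A minimal sketch of the core comparison step described above: walking the provenance of a working and a non-working packet together and reporting where they first diverge. This is not the paper's system; the (label, list-of-children) tree representation and the rule names are assumptions.

```python
# Illustrative sketch (not the paper's debugger): diff two provenance trees and
# return the path to the first point where they were treated differently.
def first_divergence(good, bad, path=()):
    """Each tree is (label, [children]); return the path to the first difference, or None."""
    g_label, g_children = good
    b_label, b_children = bad
    if g_label != b_label:
        return path + ((g_label, b_label),)
    for g_child, b_child in zip(g_children, b_children):
        diff = first_divergence(g_child, b_child, path + (g_label,))
        if diff is not None:
            return diff
    if len(g_children) != len(b_children):
        return path + (g_label, "<child count differs>")
    return None  # trees are identical

if __name__ == "__main__":
    good = ("packet-delivered", [("fwd-rule:10", [("packet-in", [])])])
    bad  = ("packet-delivered", [("fwd-rule:99", [("packet-in", [])])])
    # The differing forwarding rule is reported as the candidate root cause.
    print(first_divergence(good, bad))  # ('packet-delivered', ('fwd-rule:10', 'fwd-rule:99'))
```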
Existing approaches to diagnosing sensor networks are generally sink-based: they rely on actively pulling state information from sensor nodes in order to conduct centralized analysis. First, sink-based tools incur huge communication overhead on traffic-sensitive sensor networks. Second, due to unreliable wireless communication, the sink often obtains incomplete and suspicious information, leading to inaccurate judgments. Even worse, it is usually hardest to obtain state information from the problematic or critical regions. To address these issues, we present a novel self-diagnosis approach that encourages every single sensor node to join the fault decision process. We design a series of fault detectors through which multiple nodes can cooperate in a diagnosis task. A fault detector encodes the diagnosis process as state transitions. Each sensor can participate in the diagnosis by transitioning the detector from its current state to a new state based on local evidence and then passing the detector to other nodes. Once it has gathered sufficient evidence, the fault detector reaches the Accept state and outputs a final diagnosis report. We examine the performance of our self-diagnosis tool, called TinyD2, on a 100-node indoor testbed and conduct field studies in the GreenOrbs system, an operational outdoor sensor network with 330 nodes.
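A minimal sketch of a fault detector encoded as a state machine that nodes advance with local evidence before handing it to a neighbor; this is not TinyD2 itself, and the state names, evidence labels, and transition table are hypothetical.

```python
# Illustrative sketch (not TinyD2): a detector whose state is advanced by local
# evidence at each node; reaching ACCEPT yields a final diagnosis report.
ACCEPT, REJECT = "ACCEPT", "REJECT"

TRANSITIONS = {                                        # assumed transition table
    ("START", "neighbor_silent"): "SUSPECT_LINK",
    ("START", "neighbor_heard"): REJECT,
    ("SUSPECT_LINK", "no_route_update"): ACCEPT,       # diagnosis: link failure
    ("SUSPECT_LINK", "route_update_seen"): REJECT,
}

class FaultDetector:
    def __init__(self):
        self.state = "START"

    def feed(self, evidence: str) -> str:
        """Advance the detector with one piece of local evidence from a node."""
        self.state = TRANSITIONS.get((self.state, evidence), self.state)
        return self.state

if __name__ == "__main__":
    detector = FaultDetector()
    # Node A observes its neighbor is silent, then passes the detector to node B,
    # which sees no routing update; the detector accepts and a report is emitted.
    for node_evidence in ["neighbor_silent", "no_route_update"]:
        print(detector.feed(node_evidence))   # SUSPECT_LINK, then ACCEPT
```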