Biblio
Network covert channels enable stealthy communications for malware and data exfiltration. For this reason, developing effective countermeasures against covert channels is important for protecting individuals and organizations. However, given the number of available covert channel techniques, it is impractical to develop countermeasures for every existing covert channel. In recent years, researchers have started to develop countermeasures that, instead of countering one particular hiding technique, can be applied to a whole family of similar hiding techniques. These families are referred to as hiding patterns. The main contribution of this paper is that we extend the idea of hiding patterns by introducing the concept of countermeasure variation: the slight modification of a countermeasure designed to detect covert channels of one specific hiding pattern so that it can also detect covert channels representing other hiding patterns. We exemplify countermeasure variation using the compressibility score originally presented by Cabuk et al. The compressibility score is used to detect covert channels of the 'inter-packet times' pattern, and we show that countermeasure variation allows the compressibility score to be applied to detect covert channels of the 'size modulation' pattern as well.
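A minimal sketch of a Cabuk-et-al.-style compressibility score, assuming illustrative parameters (the rounding precision and the use of zlib are assumptions, not the paper's exact setup): coarsen a sequence of observed values, concatenate them into a string, and compare raw length against compressed length. A covert channel that modulates the values makes the sequence more regular and hence more compressible; the pattern variation is simply feeding the same score packet sizes instead of inter-packet times.

```python
import random
import zlib

def compressibility(values, precision=3):
    """Ratio of raw to zlib-compressed length of the coarsened sequence.
    Higher values indicate a more regular (suspicious) sequence."""
    s = "".join(str(round(v, precision)) for v in values).encode()
    return len(s) / len(zlib.compress(s))

# Countermeasure variation: the same score applies to inter-packet times
# ('inter-packet times' pattern) or to packet sizes ('size modulation').
covert_times = [0.10, 0.25] * 100                      # two delay symbols
rng = random.Random(0)
benign_times = [rng.uniform(0.0, 1.0) for _ in range(200)]

print(compressibility(covert_times) > compressibility(benign_times))  # True
```

The covert sequence, built from only two delay symbols, compresses far better than the benign one, so a threshold on the score separates them.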
The authors have previously proposed the Fallback Control System (FCS) as a countermeasure for Industrial Control Systems (ICSs) after cyber-attacks occur. For increased robustness against cyber-attacks, introducing multiple countermeasures is desirable, and appropriate collaboration between them is then essential. This paper introduces two FCSs in an ICS: a field-network-signal-driven FCS and an analog-signal-driven FCS. It also implements a collaborative FCS through a collaboration function between the two FCSs: the analog-signal-driven FCS estimates the state of the other FCS, and the collaborative FCS decides on the countermeasure based on the result of this estimation after a cyber-attack occurs. Finally, we present practical experimental results to analyze the effectiveness of the proposed method.
The Internet of Things (IoT) refers to a paradigm in which a variety of uniquely identifiable everyday things communicate with one another to form a large-scale dynamic network. Securing access to this network is a current challenge. This paper proposes an encryption system suited to the characteristics of IoT. In this system, we integrate the fuzzy commitment scheme into a DCT-based fingerprint recognition method. To demonstrate the efficiency of our scheme, the obtained results are analyzed and compared with direct matching (without encryption) according to the most widely used criteria: the false acceptance rate (FAR) and the false rejection rate (FRR).
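A minimal sketch of the Juels-Wattenberg fuzzy commitment scheme that the abstract combines with DCT-based fingerprint features. The repetition code and the bit lengths are illustrative assumptions; a real system would use a stronger code (e.g. BCH) over the extracted feature vector.

```python
import hashlib

def rep_encode(bits, n=5):
    # Repetition code: each key bit repeated n times.
    return [b for b in bits for _ in range(n)]

def rep_decode(bits, n=5):
    # Majority vote within each n-bit group corrects up to (n-1)//2 flips.
    return [1 if sum(bits[i:i + n]) * 2 > n else 0
            for i in range(0, len(bits), n)]

def commit(key_bits, biometric_bits):
    c = rep_encode(key_bits)
    offset = [a ^ b for a, b in zip(c, biometric_bits)]
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return digest, offset            # stored instead of the raw template

def decommit(digest, offset, biometric_bits):
    c2 = [a ^ b for a, b in zip(offset, biometric_bits)]
    key_bits = rep_decode(c2)
    ok = hashlib.sha256(bytes(key_bits)).hexdigest() == digest
    return key_bits if ok else None

key = [1, 0, 1, 1]
w = [0, 1, 1, 0, 1] * 4              # enrollment biometric (20 bits)
d, off = commit(key, w)
w_noisy = w[:]; w_noisy[3] ^= 1      # one bit flipped at verification time
print(decommit(d, off, w_noisy))     # -> [1, 0, 1, 1]
```

A slightly noisy biometric still recovers the key (the code absorbs the flipped bit), while an unrelated biometric fails the hash check and is rejected.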
Kerberos is a widely used third-party authentication protocol that enables computers to connect securely over an insecure channel using single sign-on. It proves the identity of clients and encrypts all communications between them to ensure data privacy and integrity. Typically, Kerberos consists of three communication phases that establish a secure session between any two clients. Authentication is based on a password-derived scheme: a secret long-term key shared between the client and the Kerberos server. Consequently, Kerberos suffers from password-guessing attacks, which are its main drawback. In this paper, we overcome this limitation by modifying the initial phase to use a virtual password and biometric data. In addition, the proposed protocol provides strong authentication against multiple types of attacks.
The rapid evolution of the power grid into a smart grid calls for innovative and compelling means to experiment with upcoming expansions and to analyze their behavioral response both under normal circumstances and when targeted by attacks. Such analysis is fundamental to setting up solid foundations for the smart grid. Smart grid Hardware-In-the-Loop (HIL) co-simulation environments are a key approach for answering questions about system components, functionality, and security concerns, along with analysis of system outcomes and expected behavior. In this paper, we introduce a HIL co-simulation framework capable of simulating the smart grid's actions and responses to attacks targeting its power and communication components. Our testbed is equipped with a real-time power grid simulator and an associated OpenStack-based communication network. Through this communication network, we can emulate a multitude of attacks targeting the power system and evaluate the grid's response to those attacks. Moreover, we present several illustrative cyber-attack use cases and analyze the smart grid's behavior in the presence of those attacks.
In a scenario where user files are stored on a network shared volume, a single computer infected by ransomware can encrypt the whole set of shared files, with a large impact on user productivity. At the same time, medium and large companies maintain hardware or software probes that monitor traffic on critical network links in order to evaluate service performance, detect security breaches, account for network or service usage, and so on. In this paper we propose using the monitoring capabilities of one of these tools to keep a trace of the traffic between users and the file server. Once ransomware is detected, the lost files can be recovered from the traffic trace, including any user modifications made after the last periodic backup snapshot. The paper explains the problems faced by the monitoring tool, which is neither the client nor the server of the file-sharing operations, and describes the data structures needed to process the actions of users who may be working simultaneously on the same file. A proof-of-concept software implementation successfully recovered files encrypted by 18 different ransomware families.
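The core replay idea can be sketched as follows, under simplifying assumptions (a real tool must parse the file-sharing protocol, e.g. SMB or NFS, and track per-user file handles; here the trace is already reduced to hypothetical `(offset, data)` write operations in capture order):

```python
def replay_writes(writes):
    """Rebuild a file's last clean contents by re-applying the write
    operations observed on the wire, in order; later writes win."""
    buf = bytearray()
    for offset, data in writes:
        end = offset + len(data)
        if end > len(buf):
            buf.extend(b"\x00" * (end - len(buf)))   # grow sparsely
        buf[offset:end] = data
    return bytes(buf)

# Writes from two interleaved user edits to the same file.
trace = [
    (0, b"hello world"),
    (6, b"there"),        # overwrite of part of the file
    (11, b"!"),
]
print(replay_writes(trace))   # b'hello there!'
```

Replaying the trace only up to the packet where ransomware activity is first detected yields the pre-encryption contents, including edits newer than the last backup snapshot.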
Behavioral malware detection aims to improve on the performance of the static signature-based techniques used by anti-virus systems, which are less effective against modern polymorphic and metamorphic malware. Behavioral malware classification aims to go beyond detection by also identifying a malware's family according to a naming scheme such as those used by anti-virus vendors. Behavioral malware classification techniques use run-time features, such as file system or network activities, to capture the behavioral characteristics of running processes. The increasing volume of malware samples, the diversity of malware families, and the variety of naming schemes assigned to malware samples by anti-virus vendors present challenges to behavioral malware classifiers. We describe a behavioral classifier that uses a Convolutional Recurrent Neural Network and data from Microsoft Windows Prefetch files. We demonstrate the model's improvement over the state of the art using a large dataset of malware families and four major anti-virus vendor naming schemes. The model is effective in classifying malware samples that belong to both common and rare malware families, and it can incrementally accommodate the introduction of new malware samples and families.
The Control Performance Standard (CPS) provides a more objective way to evaluate the effect of each control area's control behavior on the interconnected power grid. The CPS is derived from statistical methods emphasizing the long-term control performance of Automatic Generation Control (AGC), which benefits grid frequency control through mutual support between interconnected grids in the case of an accident. Moreover, the CPS reduces the equipment wear caused by frequent adjustment of AGC units. The key is to adjust the AGC control strategy to meet the CPS requirements. This paper proposes a dynamic optimal CPS control methodology for interconnected power systems based on model predictive control, which achieves optimal control while meeting the CPS. The effectiveness of the control strategy is verified through simulation examples.
Efficient application of Internet of Battlefield Things (IoBT) technology on the battlefield calls for innovative solutions to control and manage the deluge of heterogeneous IoBT devices. This paper presents an innovative paradigm to address heterogeneity in controlling IoBT and IoT devices, enabling multi-force cooperation in challenging battlefield scenarios.
Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.
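For reference, the exact greedy heuristic the paper accelerates can be stated in a few lines: repeatedly eliminate the vertex of minimum (fill) degree and connect its remaining neighbors into a clique. The paper's contribution is maintaining *approximate* fill degrees in polylogarithmic time via sketching and local sampling; this quadratic-time sketch merely illustrates the ordering being approximated.

```python
def min_degree_ordering(adj):
    """adj: dict vertex -> set of neighbors (undirected graph).
    Returns an elimination ordering via the minimum degree heuristic."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))   # min degree, tie by id
        nbrs = adj.pop(v)
        for u in nbrs:                  # eliminate v: remove it and add
            adj[u].discard(v)           # fill edges among its neighbors
            adj[u] |= nbrs - {u}
        order.append(v)
    return order

# 4-cycle plus a pendant vertex: the pendant (degree 1) is eliminated first.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3, 4}, 3: {0, 2}, 4: {2}}
print(min_degree_ordering(g))
```

Each elimination here costs up to O(deg^2) for the fill-edge updates; the data structure in the paper replaces the exact `len(adj[u])` degrees with sketched approximations to reach nearly-linear total time.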
With the rapid development of the Internet of Vehicles, the huge volume of multimedia data it generates is becoming a hidden problem for the Internet of Things. It is therefore necessary to process and store these data in real time, as a form of big data curation. In this paper, a method for CDN-based real-time processing and storage in a vehicle monitoring system is proposed. The MPEG-DASH standard is used to process the multimedia data by dividing them into MPD files and media segments. A real-time vehicle monitoring system based on the proposed method is designed and implemented.
Modern software development and deployment practices encourage complexity and bloat while unintentionally sacrificing efficiency and security. A major driver of this is the overwhelming emphasis on programmer productivity. The constant demands to speed up development while reducing costs have forced a series of individual decisions and approaches throughout software engineering history that have led to this point. The current state of the practice in the field is a patchwork of architectures and frameworks, packed full of features in order to appeal to the greatest number of people, cover obscure use cases, maximize code reuse, and minimize developer effort. The Office of Naval Research (ONR) Total Platform Cyber Protection (TPCP) program seeks to de-bloat software binaries late in the life-cycle, with little or no access to the source code or the development process.
False alarms and misses are the two general kinds of alarm errors, and both can decrease operators' trust in an alarm system. Specifically, there are two distinct forms of trust in such systems, represented in this research by two kinds of responses to alarms: compliance and reliance. Beyond false alarms and misses, these two responses are differentially affected by properties of the alarm system, situational factors, and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keywords "compliance and reliance", with no restriction on year of publication up to December 2017. Six relevant studies and fifty-two sets of key data were obtained as the basis of this research. A neural network was then adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The results are of great significance for further study of the influence of human decision making on the overall fault detection rate and the false alarm rate of the human-machine system.
The e-government concept and healthcare have usually been studied separately. Even when e-government and healthcare systems were combined in a study, the roles of e-government in healthcare were not examined. As a result, the complementarity of the systems poses potential challenges. The interpretive approach was applied in this study. Existing materials in the areas of healthcare and e-government were used as data from a qualitative-method viewpoint. The dimension-of-change perspective of structuration theory was employed to guide the data analysis. From the analysis, six factors were found to constitute the main roles of e-government in the implementation and application of e-health for delivering healthcare services. An understanding of these roles promotes complementarity, which enhances healthcare service delivery to the community.
Because testing the delay time of passive barriers is costly and time-consuming, and because of the wide range of passive barrier elements and tools for breaching them, experimental testing is feasible only as a means of verifying expert judgements of those delay times. The article focuses on the possibility of creating and using a new method for obtaining delay-time values for various passive barrier elements from expert judgements, which could contribute to charts in which each interaction between a mechanical barrier element and a potential bypassing tool is assigned a temporal value. The article begins with a basic description of expert-judgement methods previously applied to prognoses of socio-economic development and to other societal areas, known as soft systems. For the delay-time problem, the method had to be modified so that the output could be expressed as a specific quantitative value. To achieve this, each stage of the expert judgement was adapted to use suitable scientific methods, first to select appropriate experts and then to obtain and process the expert data. High emphasis was placed on evaluating the quality and reliability of the expert judgements, taking into account the specifics of expert selection, such as the experts' small number, specialization, and practical experience.
This paper advocates programming high-performance code using partial evaluation. We present a clean-slate programming system with a simple, annotation-based, online partial evaluator that operates on a CPS-style intermediate representation. Our system exposes code generation for accelerators (vectorization/parallelization for CPUs and GPUs) via compiler-known higher-order functions that can be subjected to partial evaluation. This way, generic implementations can be instantiated with target-specific code at compile time. In our experimental evaluation we present three extensive case studies from image processing, ray tracing, and genome sequence alignment. We demonstrate that using partial evaluation, we obtain high-performance implementations for CPUs and GPUs from one language and one code base in a generic way. The performance of our codes is mostly within 10% of, and often closer to, the performance of multi man-year, industry-grade, manually optimized expert codes that are considered to be among the top contenders in their fields.
The successive interference cancellation (SIC) receiver is adopted by power-domain non-orthogonal multiple access (NOMA) as the baseline receiver scheme, taking the expected evolution of mobile devices into account. New technologies and advanced techniques are being considered in order to save power in many networks and to achieve sustainable, reliable communication under the envisioned huge volume of data delivery. In this paper, we propose a novel NOMA-SIC scheme that balances the trade-off between system performance and complexity. In the proposed scheme, each SIC level comprises a matched filter (MF), an MF detector, and a regenerator. In simulations, the proposed scheme demonstrates the best power-saving performance, with energy efficiency increasing with the number of NOMA device pairs.
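The decode-regenerate-cancel loop at the heart of any SIC receiver can be illustrated with a toy two-user power-domain NOMA example (BPSK, noiseless channel, and an arbitrary power split are all simplifying assumptions; the paper's scheme additionally places a matched filter, detector, and regenerator at each SIC level):

```python
import math

P_FAR, P_NEAR = 0.8, 0.2            # more power to the far (weak) user

def superpose(b_far, b_near):
    """Transmit both users' BPSK symbols in one superposed signal."""
    s = lambda b: 1.0 if b else -1.0
    return math.sqrt(P_FAR) * s(b_far) + math.sqrt(P_NEAR) * s(b_near)

def sic_decode_near(y):
    """SIC at the near (strong) user's receiver."""
    b_far = y > 0                                   # 1) decode far user first
    s_far = math.sqrt(P_FAR) * (1.0 if b_far else -1.0)
    residual = y - s_far                            # 2) regenerate and cancel
    b_near = residual > 0                           # 3) decode own symbol
    return b_far, b_near

for bf in (0, 1):
    for bn in (0, 1):
        assert sic_decode_near(superpose(bf, bn)) == (bool(bf), bool(bn))
print("all four symbol pairs recovered")
```

Because the far user's component dominates the received signal, it is decoded first; subtracting its regenerated waveform leaves a clean residual from which the near user's own symbol is recovered.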
Deep learning models are vulnerable to adversarial inputs: samples modified to maximize the error of the system. We introduce Spartan Networks, deep learning models that are inherently more resistant to adversarial examples, without any input preprocessing outside the network and without adversarial training. These networks have an adversarial layer designed to starve the network of information, using a new activation function to discard data. This layer trains the neural network to filter out usually irrelevant parts of its input. These models therefore have slightly lower precision, but exhibit higher robustness under attack than unprotected models.
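One way to read the "starve the network of information" idea is a hard top-k gate that keeps only the strongest activations and discards the rest. This is a hypothetical illustration of an information-discarding activation, not the exact Spartan activation function:

```python
def starve_topk(activations, k):
    """Zero out everything except the k largest-magnitude activations,
    forcing downstream layers to work with less information."""
    keep = set(sorted(range(len(activations)),
                      key=lambda i: abs(activations[i]),
                      reverse=True)[:k])
    return [a if i in keep else 0.0 for i, a in enumerate(activations)]

print(starve_topk([0.1, -2.0, 0.3, 1.5, -0.2], k=2))
# -> [0.0, -2.0, 0.0, 1.5, 0.0]
```

Training through such a gate pressures the network to concentrate salient information in few units, which is the kind of input filtering the abstract credits for the robustness gain.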
The paper outlines the concept of the digital economy, defines the role and types of intellectual resources in the context of digitalization of the economy, reviews existing approaches and methods for intellectual property valuation, and analyzes the drawbacks of quantitative evaluation of intellectual resources (based on intellectual property valuation) related to uncertainty, noisy data, heterogeneity of resources, non-formalizability, the lack of reliable tools for measuring the parameters of intellectual resources, and the non-stationary development of intellectual resources. The results of the study suggest ways to further develop methods for the quantitative evaluation of intellectual resources (inter alia, aimed at their capitalization).