Biblio
Cloud systems are becoming more complex and vulnerable to attacks. Cyber attacks are also becoming more sophisticated and harder to detect. Therefore, it is increasingly difficult for a single cloud-based intrusion detection system (IDS) to detect all attacks, because of its limited and incomplete knowledge about attacks. Recent research in cyber-security has shown that cooperation among IDSs can bring higher detection accuracy in such complex computer systems. Through collaboration, a cloud-based IDS can consult other IDSs about suspicious intrusions and increase its decision accuracy. The problem with existing cooperative IDS approaches is that they overlook the presence of untrusted (whether malicious or not) IDSs that may negatively affect decisions about suspicious intrusions in the cloud. Moreover, they rely on a centralized architecture in which a central agent regulates the cooperation, which contradicts the distributed nature of the cloud. In this paper, we propose a framework that enables IDSs to form trustworthy IDS communities in a distributed fashion. We devise a novel decentralized algorithm, based on coalitional game theory, that allows a set of cloud-based IDSs to cooperatively set up their coalitions in such a way that their individual detection accuracy increases, even in the presence of untrusted IDSs.
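The abstract does not give the coalition-formation rule, but a minimal sketch in the spirit of trust-aware hedonic coalition formation may help fix the idea. All names (`trust`, `accuracy_gain`) are hypothetical illustrations, not the paper's algorithm:

```python
# Illustrative sketch only: a greedy coalition-join step in which each IDS
# prefers the coalition of peers it trusts most, so untrusted IDSs are
# naturally excluded. Not the paper's actual game-theoretic algorithm.

def accuracy_gain(ids, coalition, trust):
    """Expected gain for `ids` from consulting `coalition`, discounting
    each peer's contribution by how much `ids` trusts it."""
    return sum(trust[ids][peer] for peer in coalition if peer != ids)

def best_coalition(ids, coalitions, trust):
    """Pick the coalition that maximizes this IDS's expected gain;
    stay alone if no coalition improves on acting solo."""
    best, best_gain = {ids}, 0.0
    for c in coalitions:
        g = accuracy_gain(ids, c, trust)
        if g > best_gain:
            best, best_gain = c | {ids}, g
    return best

# Toy run: IDS "a" distrusts "mal" (trust 0.1) and joins the honest coalition.
trust = {"a": {"b": 0.9, "c": 0.8, "mal": 0.1}}
print(best_coalition("a", [{"b", "c"}, {"mal"}], trust))  # {'a', 'b', 'c'}
```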
In the light of the information revolution and the propagation of big social data, the dissemination of misleading information is difficult to control, owing to the rapid, intensive flow of information from unconfirmed sources in the form of propaganda and tendentious rumors. This causes confusion and loss of trust between individuals and groups, and even between governments and their citizens. Efforts must therefore be consolidated to stop the spread of false information by developing theoretical and practical methodologies that measure the credibility of the users of these virtual platforms. This paper presents a domain-based approach to predicting the trustworthiness of users of Online Social Networks (OSNs). Incorporating three machine learning algorithms, the experimental results verify the applicability of the proposed approach to classifying and predicting domain-based trustworthy OSN users.
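The abstract does not name the three algorithms or features, so the following is only a generic sketch of how such a comparison is typically set up; the feature matrix and labels here are random stand-ins:

```python
# Minimal sketch, not the paper's pipeline: three common classifiers applied
# to hypothetical per-user features (e.g., posting rate, domain relevance,
# follower ratio) with a binary trustworthy/untrustworthy label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 3))              # stand-in user feature matrix
y = (X[:, 1] > 0.5).astype(int)       # stand-in trust labels

for clf in (LogisticRegression(), GaussianNB(),
            RandomForestClassifier(n_estimators=100)):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))
```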
In the digital age, drug makers have never been more exposed to cyber threats from a wide range of actors pursuing very different motivations. These threats can have unpredictable consequences for the reliability and integrity of the pharmaceutical supply chain. Cyber threats do not have to target drug makers directly; a recent wargame by the Atlantic Council highlighted how malware affecting one entity can degrade the equipment and systems of others that run the same software. The NotPetya ransomware campaign in mid-2017 was not specifically aimed at the pharmaceutical industry, but it nevertheless disrupted Merck's HPV vaccine production line. Merck lost 310 million dollars in revenue in the subsequent quarter, as a result of lost productivity and a production halt of almost a week.
Congestion diffusion resulting from coupling through resource competition is a typical form of failure propagation in network systems. Existing models of failure propagation focus mainly on coupling through direct physical connections between nodes, the most efficient paths, or dependence groups, while coupling through resource competition is ignored. In this paper, a model of network congestion diffusion under resource competition is proposed. Drawing on the similarities to resource competition in biomolecular networks, the model describing the dynamic change of biomolecule concentrations via the titration mechanism serves as a reference for our model. The titration mechanism is then adapted to describe the dynamic change of link load in networks, yielding a novel congestion model with which global congestion can be evaluated. Simulations show that network congestion under resource competition can be reproduced by our model.
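The paper's equations are not given in the abstract; the toy below only illustrates the flavor of the analogy (a shared resource "titrating" demand, with the unserved residual diffusing to neighbors), with all constants and update rules invented for illustration:

```python
# Toy analogy only, not the paper's model: links compete for a shared capacity
# pool; what the pool cannot absorb remains as congestion and spreads.
CAPACITY = 10.0                                 # shared resource per step

def step(loads, neighbors):
    """One round: serve what capacity allows, diffuse the unserved residual."""
    total = sum(loads)
    frac = min(CAPACITY / total, 1.0) if total else 1.0
    residual = [l * (1.0 - frac) for l in loads]        # congested remainder
    nxt = [r / 2 for r in residual]                     # keep half locally
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:                                  # push half to neighbors
            nxt[j] += residual[i] / (2 * len(nbrs))
    return nxt

loads = [8.0, 6.0, 2.0]
ring = [[1, 2], [0, 2], [0, 1]]                 # 3-node ring topology
for _ in range(5):
    loads = step(loads, ring)
print([round(l, 2) for l in loads])             # residual congestion per link
```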
Networked control systems improve the efficiency of cyber-physical plants both functionally, by the availability of data generated even in far-flung locations, and operationally, by the adoption of standard protocols. A side-effect, however, is that now the safety and stability of a local process and, in turn, of the entire plant are more vulnerable to malicious agents. Leveraging the communication infrastructure, the authors here present the design of networked control systems with built-in resilience. Specifically, the paper addresses attacks known as false data injections that originate within compromised sensors. In the proposed framework for closed-loop control, the feedback signal is constructed by weighted consensus of estimates of the process state gathered from other interconnected processes. Observers are introduced to generate the state estimates from the local data. Side-channel monitors are attached to each primary sensor in order to assess proper code execution. These monitors provide estimates of the trust assigned to each observer output and, more importantly, independent of it; these estimates serve as weights in the consensus algorithm. The authors tested the concept on a multi-sensor networked physical experiment with six primary sensors. The weighted consensus was demonstrated to yield a feedback signal within specified accuracy even if four of the six primary sensors were injecting false data.
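A minimal sketch of the weighted-consensus step described above, with illustrative variable names; in the paper the weights come from the side-channel monitors attached to each primary sensor:

```python
# Sketch of trust-weighted consensus: the feedback signal is a weighted
# average of per-observer state estimates, with weights supplied by the
# side-channel trust monitors (0 = observer deemed compromised).
def consensus_feedback(estimates, trust):
    """estimates[i]: observer i's estimate of the process state;
    trust[i]: monitor-assigned confidence in observer i."""
    total = sum(trust)
    if total == 0:
        raise RuntimeError("no trusted observers remain")
    return sum(t * x for t, x in zip(trust, estimates)) / total

# Six primary sensors, four injecting false data but flagged by the monitors:
estimates = [5.1, 4.9, 42.0, -17.0, 99.0, 63.0]
trust     = [1.0, 1.0,  0.0,   0.0,  0.0,  0.0]
print(consensus_feedback(estimates, trust))  # -> 5.0, near the true state
```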
In this paper we examine the use of covert channels based on CPU load in order to achieve persistent user identification across browser sessions. In particular, we demonstrate that an HTML5 video, a GIF image, or CSS animations on a webpage can be used to force the CPU to produce a sequence of distinct load levels, even without JavaScript or any client-side code. These load levels can then be captured either by another browsing session, running on the same or a different browser in parallel to the browsing session we want to identify, or by a malicious app installed on the device. To get a good estimate of the CPU load caused by the target session, the receiver can observe system statistics about CPU activity (app), or repeatedly measure the time it takes to execute a known code segment (app and browser). Furthermore, for mobile devices we propose a sensor-based approach to estimating the CPU load, which exploits disturbances of the magnetometer sensor data caused by high CPU activity. Captured loads can be decoded and translated into an identifying bit string, which is transmitted back to the attacker. Due to the way the loads are produced, these methods are applicable even in highly restrictive browsers, such as the Tor Browser, and run unnoticed by the end user. Therefore, unlike existing ways of web tracking, our methods circumvent most of the existing countermeasures, as they store the identifying information outside the browsing session being targeted. Finally, we thoroughly evaluate and assess each presented method of generating and receiving the signal, and provide an overview of potential countermeasures.
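A sketch of the timing-based receiver idea (measure how long a known code segment takes; it slows down when the covert sender raises CPU load). The constants and thresholds here are arbitrary; real attacks would calibrate them per device:

```python
# Illustrative receiver: estimate CPU load by timing a fixed busy loop,
# then threshold median timings per bit period into a bit string.
import time

def probe(iterations=200_000):
    """Time a fixed busy loop; it runs slower when the CPU is loaded."""
    t0 = time.perf_counter()
    x = 0
    for i in range(iterations):
        x += i * i
    return time.perf_counter() - t0

def receive(n_bits, bit_period=0.5):
    baseline = min(probe() for _ in range(5))   # calibrate on an idle-ish CPU
    threshold = baseline * 1.5                  # arbitrary decision boundary
    bits = []
    for _ in range(n_bits):
        t_end = time.perf_counter() + bit_period
        samples = []
        while time.perf_counter() < t_end:
            samples.append(probe())
        # high median timing => sender is producing high load => bit 1
        samples.sort()
        bits.append(1 if samples[len(samples) // 2] > threshold else 0)
    return bits
```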
This paper presents the enhancement of speech signals in a noisy environment using a Two-Sensor Fast Normalized Least Mean Square (FNLMS) adaptive algorithm combined with the backward blind source separation structure. A comparative study with other competitive algorithms shows the superiority of the proposed algorithm in terms of various objective criteria, such as the segmental signal-to-noise ratio (SegSNR), the cepstral distance (CD), the system mismatch (SM), and the segmental mean square error (SegMSE).
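For reference, a sketch of the generic normalized LMS update on which such algorithms build; the paper's two-sensor fast variant and its backward BSS structure are more elaborate than this:

```python
# Generic NLMS adaptive filter: step size normalized by input power.
import numpy as np

def nlms(x, d, order=16, mu=0.5, eps=1e-8):
    """Adapt filter w so that w . x[n-order:n] tracks desired signal d."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]           # most recent samples first
        y[n] = w @ u
        e = d[n] - y[n]                    # estimation error
        w += (mu / (eps + u @ u)) * e * u  # normalized gradient step
    return w, y
```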
Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program, because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method, T-Fuzz, tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, T-Fuzz found 3 new bugs in previously-fuzzed programs and libraries.
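The overall loop described above can be summarized schematically; the helper callables below are hypothetical stand-ins (the real tool drives a coverage-guided fuzzer and a dynamic-tracing pass over binaries):

```python
# Schematic of the T-Fuzz loop as described in the abstract.
def t_fuzz(program, fuzz_until_stuck, detect_failed_checks,
           remove_checks, reproduces_on_original):
    candidates = []
    while True:
        # fuzz until no new code paths are triggered
        crashes = fuzz_until_stuck(program)
        candidates.extend((program, c) for c in crashes)
        # find the sanity checks that fuzzer-generated inputs keep failing
        checks = detect_failed_checks(program)
        if not checks:
            break
        # transform: strip those checks and keep fuzzing the new program
        program = remove_checks(program, checks)
    # symbolic-execution post-processing filters false positives introduced
    # by check removal and reproduces true bugs on the original program
    return [c for p, c in candidates if reproduces_on_original(p, c)]
```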
A primary user emulation (PUE) attack causes security issues in a cognitive radio network (CRN) while the unused spectrum is being sensed. In a PUE attack, malicious users transmit an emulated primary signal during the spectrum sensing interval to secondary users (SUs), to prevent them from accessing the primary user (PU) spectrum bands. In the present paper, a defense against such attacks based on the Neyman-Pearson criterion is evaluated in terms of total error probability. The impact on the SU of several parameters, such as attacker strength, the attacker's presence probability, and the signal-to-noise ratio, is shown. Results show that the proposed method protects against the harmful effects of PUE attacks in spectrum sensing.
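For readers unfamiliar with the criterion, the textbook Neyman-Pearson formulation (standard form, with notation assumed here rather than taken from the paper) is:

```latex
% Decide between H_0 (legitimate PU signal) and H_1 (emulated attack signal)
% via the likelihood ratio, with threshold \lambda set so that the
% false-alarm probability stays below \alpha:
\Lambda(\mathbf{y}) = \frac{p(\mathbf{y}\mid H_1)}{p(\mathbf{y}\mid H_0)}
  \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \lambda,
\qquad P_{fa} = \Pr\{\Lambda > \lambda \mid H_0\} \le \alpha .

% The total error probability combines false alarms and missed detections,
% weighted by the attacker's presence probability P(H_1):
P_e = P(H_0)\,P_{fa} + P(H_1)\,P_{md},
\qquad P_{md} = \Pr\{\Lambda \le \lambda \mid H_1\}.
```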
We have proposed a Media Access Control method based on the Synchronization Phenomena of coupled oscillators (SP-MAC) to improve the total throughput of wireless terminals connected to an Access Point. SP-MAC can avoid the data-frame collisions that occur under Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), the IEEE 802.11 access method used in wireless local area networks (WLANs). Furthermore, a new throughput guarantee control method based on SP-MAC has been proposed. This method enables each terminal not only to avoid frame collisions but also to obtain its requested throughput by adjusting the parameters of SP-MAC. In this paper, we propose a new throughput control method that realizes fairness among groups of terminals that use different TCP versions, taking advantage of our method's ability to change the acquired throughput by adjusting parameters. Moreover, we confirm the effectiveness of the proposed method by simulation.
Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics, and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security, and privacy). Today's conventional engineering methods are not adequate for providing guarantees with respect to dependability requirements in a cost-efficient manner; e.g., road tests in the automotive industry accumulate millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test or formally verify autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems, which integrates development-time methods as well as operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to the dependability requirements. We present results already obtained and point out research goals to be addressed in the future.
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies have investigated the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong, quantitative evidence of previously unreported operational behavior: for example, autoscaling policies perform differently across application domains, and allocation and provisioning policies should be co-designed.
Crowd sensing is one of the core features of the Internet of Vehicles (IoV); using the IoV for crowd sensing enables the rational allocation of sensing tasks. This paper studies the task-allocation problem for crowd sensing in the IoV and proposes a trajectory-based task-allocation scheme. Under a limited budget constraint, participants' trajectories are taken as an indicator of spatiotemporal availability. Based on the approach used for the minimal-cover problem, the scheme selects the minimum number of participating vehicles that achieves coverage of the target area.
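A minimal sketch of the minimal-cover idea via the classic greedy heuristic: repeatedly pick the vehicle whose trajectory covers the most still-uncovered cells. Cost models and trajectory prediction from the paper are simplified away:

```python
# Greedy set-cover sketch for trajectory-based vehicle selection.
def select_vehicles(target_cells, trajectories, budget):
    """trajectories: {vehicle_id: set of area cells its route passes through}."""
    trajectories = dict(trajectories)        # don't mutate the caller's dict
    uncovered, chosen = set(target_cells), []
    while uncovered and len(chosen) < budget:
        vid = max(trajectories, key=lambda v: len(trajectories[v] & uncovered))
        if not trajectories[vid] & uncovered:
            break                            # nobody covers anything new
        chosen.append(vid)
        uncovered -= trajectories.pop(vid)
    return chosen, uncovered                 # empty `uncovered` => full coverage

# Toy example: two of the three vehicles suffice to cover cells 1-5.
trajs = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5}}
print(select_vehicles({1, 2, 3, 4, 5}, trajs, budget=3))  # (['v1', 'v3'], set())
```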
The recently developed deep belief network (DBN) has been shown to be an effective methodology for solving time series forecasting problems. However, the performance of a DBN depends heavily on the proper setting of its hyperparameters. At present, random search, grid search, and Bayesian optimization are the most common methods of hyperparameter optimization. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to decide the sizes of the DBN and the learning rates used during training. A comparative analysis is performed between the proposed method and other popular techniques in a time series forecasting experiment based on two types of time series datasets. Experimental results statistically affirm the efficiency of the proposed model, which obtains better predictions than conventional neural network models.
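The following is a heavily simplified, diversity-driven sketch in the spirit of NCS, applied to a hypothetical two-dimensional hyperparameter space (hidden units, log learning rate); the real NCS acceptance rule, based on comparing search distributions, is more principled than this:

```python
# Simplified NCS-flavored search: parallel searchers accept an offspring if
# it is better, or occasionally if it increases distance to the other
# searchers (preserving exploration diversity).
import random

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ncs_like(loss, n_searchers=4, iters=100):
    pop = [[random.uniform(10, 500), random.uniform(-4, -1)]  # illustrative ranges
           for _ in range(n_searchers)]
    for _ in range(iters):
        for i, cand in enumerate(pop):
            child = [v + random.gauss(0, 0.1 * abs(v) + 1e-3) for v in cand]
            others = [pop[j] for j in range(n_searchers) if j != i]
            better = loss(child) < loss(cand)
            more_diverse = (min(distance(child, o) for o in others) >
                            min(distance(cand, o) for o in others))
            if better or (more_diverse and random.random() < 0.3):
                pop[i] = child
    return min(pop, key=loss)

# In practice `loss` would wrap DBN training + validation error;
# here a stand-in quadratic bowl with optimum (128, -2):
print(ncs_like(lambda h: (h[0] - 128) ** 2 + (h[1] + 2) ** 2))
```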
In recent years, the spreading of malicious social media messages about financial stocks has threatened the security of financial markets. Market anomaly attacks are an illegal practice in the stock or commodities markets that induces investors to make purchase or sale decisions based on false information. Identifying these threats in noisy social media datasets remains challenging because of the long time sequences in these postings, their ambiguous textual context, and the difficulty traditional deep learning approaches have in handling data that is both temporal and text-dependent, such as financial social media messages. This research developed a temporal recurrent neural network (TRNN) approach to capturing both time and text sequence dependencies for intelligent detection of market anomalies. We tested the approach using financial social media messages about U.S. technology companies and their stock returns. Compared with traditional neural network approaches, TRNN was found to classify abnormal returns more efficiently and effectively.
The Internet of Battlefield Things (IoBT) might be one of the most expensive cyber-physical systems of the next decade, yet much research remains to be done on its fundamental enablers. A challenge that distinguishes the IoBT from its civilian counterparts is resilience to a much larger spectrum of threats.
The idea of using multiple paths to transport TCP traffic is very attractive due to the potential benefits it may offer for both redundancy and better utilization of available resources through load balancing. Fixed and mobile network providers frequently employ load balancers that use multiple paths at either the per-flow or the per-destination level, but very seldom at the per-packet level. Despite the benefits of packet-level load balancing mechanisms (e.g., low computational complexity and high bandwidth utilization), network providers cannot use them, mainly because of the TCP packet reordering that harms TCP performance. Emerging network architectures also support multiple paths, but they face the same obstacle in balancing their load across multiple paths. Indeed, packet-level load balancing research is paralyzed by TCP's vulnerability to reordering. A few TCP variants exist that deal with the TCP packet reordering problem, but due to their lack of end-to-end transparency they were not widely deployed or adopted. In this paper, we revisit TCP's packet reordering problem and present a transparent and lightweight algorithm, Out-of-Order Robustness for TCP with Transparent Acknowledgment (ACK) Intervention (ORTA), to deal with out-of-order deliveries. ORTA works as a transparent thin layer below TCP and hides the harmful side-effects of packet-level load balancing. ORTA monitors all TCP flow packets and uses ACK traffic shaping, without any modifications to either the TCP sender or receiver sides. Since it is transparent to TCP end-points, it can be easily deployed on TCP sender end-hosts (EHs), gateway (GW) routers, or access points (APs). ORTA opens a door for network providers to use per-packet load balancing. The proposed ORTA algorithm is implemented and tested in NS-2. The results show that ORTA prevents the TCP performance degradation otherwise seen when per-packet load balancing is used.
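The abstract does not detail ORTA's shaping logic; the sketch below shows only one plausible flavor of ACK intervention (holding back the duplicate ACKs that reordering produces, so the sender does not misread reordering as loss). All thresholds and behavior here are conceptual, not ORTA's actual algorithm:

```python
# Conceptual ACK-shaping middlebox: forward normal ACKs, hold excess
# duplicate ACKs, and release them only if the sequence gap persists
# (i.e., a real loss requiring sender retransmission).
class AckShaper:
    DUP_LIMIT = 2                  # forward at most 2 dupACKs (< TCP's 3)

    def __init__(self):
        self.last_ack, self.dup_count, self.held = None, 0, []

    def on_ack(self, ack_no, forward):
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count > self.DUP_LIMIT:
                self.held.append(ack_no)   # likely reordering, not loss: hold
                return
        else:                              # cumulative ACK advanced: reset
            self.last_ack, self.dup_count, self.held = ack_no, 0, []
        forward(ack_no)

    def on_timeout(self, forward):
        for ack in self.held:              # gap never filled: release dupACKs
            forward(ack)                   # so the sender retransmits
        self.held = []
```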
Over the last few decades, a breakthrough has taken place in the field of autonomous robotics. Robots have been introduced to perform dangerous, dirty, difficult, and dull tasks in service of the community. They have also been used for health-care-related tasks, such as enhancing surgeons' skills and enabling surgeries in remote areas. This can help perform operations in remote areas efficiently and in a timely manner, with or without human intervention. One of the main advantages is that robots are not affected by human problems such as fatigue or momentary lapses of attention, so they can perform repetitive and tedious operations. In this paper, we propose a framework for establishing trust in autonomous medical robots based on mutual understanding and transparency in decision making.
With wide applications like surveillance and imaging, securing underwater acoustic Mobile Ad-hoc NETworks (MANETs) becomes a double-edged sword for oceanographic operations. Underwater acoustic MANETs inherit vulnerabilities from 802.11-based MANETs, which render traditional cryptographic approaches defenseless. A Trust Management Framework (TMF), which maintains confidence among participating nodes using metrics built from their communication activities, promises secure, efficient, and reliable access in terrestrial MANETs. A TMF cannot be directly applied to the underwater environment, however, due to marine characteristics that make it difficult to differentiate natural turbulence from intentional misbehavior. This work proposes a trust model that defends underwater acoustic MANETs against attacks using a machine learning method with carefully chosen communication metrics, and a cloud model to address the uncertainty of trust in harsh underwater environments. By integrating the communication trust framework with the cloud model to combat two kinds of uncertainty, fuzziness and randomness, trust management is greatly improved for underwater acoustic MANETs.
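For context, a sketch of the standard normal cloud generator often used to represent trust with both fuzziness and randomness; the parameters (Ex: expected trust, En: fuzziness, He: uncertainty of the fuzziness) follow the usual cloud-model convention, and the paper's integration with communication metrics is not reproduced here:

```python
# Normal cloud generator: each "drop" is a sampled trust value plus its
# membership degree, capturing second-order uncertainty around Ex.
import math
import random

def cloud_drops(Ex, En, He, n=1000):
    """Generate n (trust_value, membership) cloud drops."""
    drops = []
    for _ in range(n):
        En_p = random.gauss(En, He)          # randomness of the fuzziness
        x = random.gauss(Ex, abs(En_p))      # a sampled trust observation
        mu = math.exp(-((x - Ex) ** 2) / (2 * En_p ** 2 + 1e-12))
        drops.append((x, mu))
    return drops

drops = cloud_drops(Ex=0.8, En=0.05, He=0.01)
print(sum(x for x, _ in drops) / len(drops))   # mean trust, approximately 0.8
```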
As a valuable source of information, Word of Mouth has always been valued by consumers and business marketers. The Internet provides a new medium for Word of Mouth communication: consumers share their views and comments on products, services, brands, and enterprises through online platforms, forming Internet Word of Mouth, which is of great importance to B2C enterprises. However, disturbing and even false information, as well as the uncertainties and risks of the online communication environment, leads to a crisis of online trust. Accordingly, this study constructs a trust mechanism model of the Internet Word of Mouth effect, which shows that the professionalism of communicators, online relationship strength, communication channels, and product involvement are key factors significantly affecting the Word of Mouth effect. This model can provide theoretical guidance for word-of-mouth marketing and the operation of B2C e-commerce enterprises.
Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in the deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity and memory storage associated with inverting the second-derivative Hessian matrix in large-scale data problems. The reward for using second-derivative information is that the methods can achieve improved convergence on features typical of non-convex settings, such as saddle points and local minima. In this paper we introduce TRMinATR, an algorithm based on the limited-memory BFGS quasi-Newton method with trust regions, as an alternative to gradient descent methods. TRMinATR bridges the gap between first-order and second-order methods by continuing to use gradient information to compute Hessian approximations. We provide empirical results on the MNIST classification task and show robust convergence with preferred generalization characteristics.
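The core of any L-BFGS-based method is the two-loop recursion that applies the implicit inverse-Hessian approximation using only stored curvature pairs; the sketch below shows that standard recursion, while TRMinATR's trust-region machinery around it is omitted:

```python
# Standard L-BFGS two-loop recursion: compute the quasi-Newton direction
# -H_k @ grad from stored (s, y) pairs, never forming H_k explicitly.
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """s_hist[i] = x_{i+1} - x_i, y_hist[i] = grad_{i+1} - grad_i."""
    q, alphas = grad.copy(), []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):  # newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((rho, a))
    if s_hist:                                # initial scaling H_0 = gamma * I
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), (rho, a) in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = rho * (y @ q)                     # oldest to newest
        q += (a - b) * s
    return -q                                 # descent direction
```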
Sequential recommendation algorithms aim to predict users' future behavior given their historical interactions. A recent line of work has achieved state-of-the-art performance on sequential recommendation tasks by adapting ideas from metric learning and knowledge-graph completion. These algorithms replace inner products with low-dimensional embeddings and distance functions, employing a simple translation dynamic to model user behavior over time. In this paper, we propose TransFM, a model that combines translation and metric-based approaches for sequential recommendation with Factorization Machines (FMs). Doing so allows us to reap the benefits of FMs (in particular, the ability to straightforwardly incorporate content-based features), while enhancing the state-of-the-art performance of translation-based models in sequential settings. Specifically, we learn an embedding and translation space for each feature dimension, replacing the inner product with the squared Euclidean distance to measure the interaction strength between features. Like FMs, we show that the model equation for TransFM can be computed in linear time and optimized using classical techniques. As TransFM operates on arbitrary feature vectors, additional content information can be easily incorporated without significant changes to the model itself. Empirically, the performance of TransFM significantly increases when taking content features into account, outperforming state-of-the-art models on sequential recommendation tasks for a wide variety of datasets.
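From the description above, the model equation has the following shape (notation assumed here: x is the feature vector, v_i the embedding of feature i, and v'_i its translation vector):

```latex
% FM-style score with the pairwise inner product replaced by the squared
% Euclidean distance between feature i's translated embedding and feature
% j's embedding:
\hat{y}(\mathbf{x}) \;=\; w_0 \;+\; \sum_{i=1}^{n} w_i x_i
  \;+\; \sum_{i=1}^{n}\sum_{j=i+1}^{n}
        d^{2}\!\left(\mathbf{v}_i + \mathbf{v}'_i,\; \mathbf{v}_j\right)\, x_i x_j,
\qquad d^{2}(\mathbf{a},\mathbf{b}) = \lVert \mathbf{a}-\mathbf{b}\rVert_2^2 .
```

Because the squared distance expands into squared norms and inner products, the double sum can be rearranged in the same way as the standard FM equation, which is what permits the linear-time evaluation the abstract mentions.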