Bibliography
SSL certificates are a core component of the public key infrastructure that underpins encrypted communication on the Internet. In this paper, we report the results of a longitudinal study of the characteristics of SSL certificate chains presented to clients during secure web (HTTPS) connection setup. Our data set consists of 23B SSL certificate chains collected from a global panel of over 2M residential client machines over a period of 6 months. The data informing our analyses provide perspective on the entire chain of trust, including root certificates, across a wide distribution of client machines. We identify over 35M unique certificate chains with diverse relationships at all levels of the PKI hierarchy. We report on the characteristics of valid certificates, which make up 99.7% of the total corpus. We also examine invalid certificate chains, finding that 93% of them contain an untrusted root certificate and that they have a shorter average chain length than their valid counterparts. Finally, we examine two unintended but prevalent behaviors in our data: the deprecation of root certificates and secure traffic interception. Our results support aspects of prior, scan-based studies on certificate characteristics but contradict other findings, highlighting the importance of the residential client-side perspective.
Designing functionally safe control logic without full duplication is difficult due to the complexity of random control logic. The reorder buffer (ROB) is a control logic function commonly used in high-performance computing systems. In this study, we focus on a safe ROB design used in an industry-quality Network-on-Chip (NoC) Advanced eXtensible Interface (AXI) Network Interface (NI) block. We developed and applied area-efficient safe design techniques, including partial duplication, Error Detection Code (EDC), and invariance checking with formal proofs, and showed that we can achieve the desired safe Diagnostic Coverage (DC) requirement with small area and power overheads and no performance degradation.
Nowadays, for economic reasons, most semiconductor companies prefer to outsource the manufacturing of their designs to third-party fabrication foundries, the so-called fabs. Untrustworthy fabs can extract circuit blocks, so-called intellectual properties (IPs), from the layouts and then pirate them. Such fabs are also suspected of the hardware Trojan (HT) threat, in which malicious circuits are added to the layouts for sabotage purposes. HTs increase the power consumption of HT-infected circuits. However, due to process variations, the power of HTs comprising only a few gates in million-gate circuits is not detectable by power consumption analysis (PCA). Thus, such circuits should be treated as a collection of small sub-circuits, and PCA must be performed individually for each of them. In this article, we introduce an approach that facilitates PCA-based HT detection methods. Building on this approach, we propose a new logic locking method and algorithm. Logic locking methods and algorithms are usually employed against IP piracy; they modify circuits so that they do not work correctly unless the correct key is applied. Our experiments at the gate level and post-synthesis show that the proposed locking method and algorithm increase the proportion of HT activity, and consequently of HT power relative to circuit power.
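As a hedged illustration of the general idea behind logic locking (the netlist format, helper names, and key-gate choice below are assumptions for illustration, not the proposed method or algorithm), the following sketch inserts XOR/XNOR key gates on selected wires of a toy gate-level netlist so that the circuit computes its original function only when the correct key bits are applied:

```python
# Minimal sketch of XOR/XNOR-based logic locking on a toy netlist
# (hypothetical format); it illustrates the general technique only.
import random

def lock_netlist(netlist, wires_to_lock):
    """Insert a key gate on each selected wire.

    netlist: list of (gate_type, output_wire, input_wires) tuples.
    Returns the locked netlist and the correct key (one bit per locked wire).
    """
    key = {}
    locked = list(netlist)
    for i, wire in enumerate(wires_to_lock):
        key_bit = random.randint(0, 1)
        key[f"k{i}"] = key_bit
        # Rename the original driver of `wire`, then re-derive `wire` through
        # an XOR (correct key bit 0) or XNOR (correct key bit 1) key gate.
        locked = [(g, out if out != wire else wire + "_pre", ins)
                  for (g, out, ins) in locked]
        gate = "XOR" if key_bit == 0 else "XNOR"
        locked.append((gate, wire, [wire + "_pre", f"k{i}"]))
    return locked, key

# Example: lock the internal wire "n1" of a two-gate circuit.
toy = [("AND", "n1", ["a", "b"]), ("OR", "y", ["n1", "c"])]
locked, correct_key = lock_netlist(toy, ["n1"])
print(locked, correct_key)
```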
With the unprecedented prevalence of mobile network applications, cryptographic protocols such as Secure Socket Layer/Transport Layer Security (SSL/TLS) are widely used in mobile network applications for communication security. Proven methods for encrypted video stream classification or encrypted protocol detection are unsuitable for SSL/TLS traffic. Consequently, networking and security services based on application-level traffic classification face severe challenges in effectiveness. Existing encrypted traffic classification methods exhibit unsatisfactory accuracy for applications with similar state characteristics. In this paper, we propose a multiple-attribute-based encrypted traffic classification system named Multi-Attribute Associated Fingerprints (MAAF). We develop MAAF based on two key insights: the DNS traces generated during application runtime contain classification guidance information, and the handshake certificates in the encrypted flows can provide classification clues. Beyond exploiting these insights, MAAF employs the context of the encrypted traffic to overcome the attribute-lacking problem during classification. Our experimental results demonstrate that MAAF achieves 98.69% accuracy on a real-world traceset consisting of 16 applications, supports early prediction, and is robust to the scale of the training traceset. Moreover, MAAF outperforms state-of-the-art methods in terms of both accuracy and robustness.
Digitization has increased exposure and opened the door to more cyber threats and attacks. To proactively handle this issue, enterprise modeling needs to include threat management during the design phase, considering antagonists, attack vectors, and damage domains. Agile methods are commonly adopted to efficiently develop and manage software and systems. This paper proposes using an enterprise architecture repository to analyze not only shipped components but also the overall architecture, in order to improve the traditional designs represented by legacy systems in the existing IT landscape. It shows how the hidden structure method (with Design Structure Matrices) can be used to evaluate the enterprise architecture, and how it can contribute to agile development. Our case study uses an architecture description language called ArchiMate for architecture modeling and shows how to predict the ripple effect in a damage domain if an attacker's malicious components are operating within the network.
Software-Defined Networking (SDN) is a dynamic network technology that addresses the issues of traditional networks. It provides a centralized view of the whole network by decoupling the control plane and data plane of a network. Most SDN-based security services globally detect and block a malicious host based on its IP address. However, the IP address is not verified during the forwarding process in most cases, so an SDN-based security service may block a normal host with a forged IP address across the whole network, resulting in a false positive. In this paper, we introduce an attack scenario that uses forged packets to make the security service consider a victim host an attacker and thus block the victim. We also introduce a cost-effective risk avoidance strategy.
This paper presents an access control model that integrates risk assessment elements into the attribute-based model to organize identification, authentication, and authorization rules. Access control is complex in integrated systems, which have different actors accessing different information at multiple levels. In addition, systems are composed of different components, many of them from different developers. This requires trust across the complete supply chain to protect the many actors involved, their privacy, and the entire ecosystem. Incorporating the risk assessment element introduces additional variables, such as the current environment of the subjects and objects and the time of day, to help produce more efficient and effective decisions when granting access to specific objects. The risk-based attribute access control model was applied in a health platform, Project CityZen.
Keeping Internet users safe from attacks and other threats is one of the biggest security challenges nowadays. Distributed Denial of Service (DDoS) [1] is one of the most common attacks. DDoS makes a system stop working by overloading its resources. Software-Defined Networking (SDN) [2] has recently emerged as a new networking technology offering unprecedented programmability that allows network operators to dynamically configure and manage their infrastructures. The flexible processing and centralized management of the SDN controller allow complex security algorithms and mitigation methods to be deployed flexibly. In this paper, we propose a new TCP-SYN flood attack mitigation method for SDN networks using machine learning. Using a testbed, we implement the proposed algorithms, evaluate their accuracy, and address the trade-off between the accuracy and capacity of the security device. The results show that the algorithms can mitigate TCP-SYN flood attacks at a rate of over 96%.
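As a hedged sketch of how such a machine-learning mitigation might look (the flow features, model choice, and numbers below are assumptions for illustration, not the authors' algorithm), a classifier can be trained on per-source flow statistics such as SYN/ACK counts and half-open connection counts, and an SDN controller can install a drop rule for sources predicted to be flooding:

```python
# Illustrative sketch (assumed features and model): classify traffic sources
# as benign or SYN-flooding from simple per-source flow statistics.
from sklearn.tree import DecisionTreeClassifier

# Each sample: [syn_count, synack_count, ack_count, half_open_connections]
X_train = [
    [20, 19, 18, 1],     # benign: handshakes complete
    [500, 40, 2, 460],   # attack: many half-open connections
    [35, 34, 33, 1],
    [800, 10, 0, 790],
]
y_train = [0, 1, 0, 1]   # 0 = benign, 1 = SYN flood

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# An SDN controller could query this model per source and, when the
# prediction is 1, push a drop rule (e.g., an OpenFlow flow entry).
print(clf.predict([[600, 15, 1, 580]]))  # -> [1]
```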
The automotive domain is currently experiencing a radical transition towards automation, connectivity, and digitalization. This is a cause of major change in human-machine interaction. The research presented here examines 1) companies' visions of future mobility and 2) users' reactions to the first trials of these visions. The data analyses reveal that implementing companies' visions for 2040 requires improvements in user acceptance. One way of improving user acceptance is to integrate emotion recognition in manual and automated vehicles. By reacting to users' positive and negative emotions, vehicles can learn to improve driving behavior and communication and to adjust driver assistance accordingly. Therefore, a roadmap for future research in emotion recognition has been developed through interviews with twelve experts in the field. Emotions that they judged most relevant to detect include anger, stress, and fear, amongst others. Furthermore, ideas on sensors for emotion recognition, potential countermeasures for the negative effects of emotions, and additional challenges were collected. The research presented is designed to shape further research directions in in-car emotion recognition.
While the introduction of softwarization technologies such as SDN and NFV shifts the main focus of network management from hardware to software, network operators still have to care for a great deal of network and computing equipment located in network centers. Toward fully automated network management, we believe a robotic approach will be significant, meaning that robots will care for the physical equipment on behalf of humans. This paper explains our experience and the insights gained throughout the development of a network management robot. We utilize ROS (Robot Operating System), a powerful platform for robot development that ensures ease of development and expandability. Our roadmap for the network management robot is also shown, along with three use cases: environmental monitoring, operator assistance, and autonomous maintenance of the equipment. Finally, the paper briefly reports on experiments conducted in a commercial network center.
The Internet of Things (IoT) is a popular wireless networking paradigm for data collection applications. IoT networks are deployed in dense or sparse architectures, of which dense networks are by far the more popular because they can gather huge volumes of data. The collected data is analyzed using historical or continuous analytical systems, which use back-testing or time-series analytics to observe the desired patterns in the target data. Lost or bad interval data carries a high probability of misguiding the analysis reports. Data is lost for a variety of reasons, the most common of which are node failures and connectivity holes caused by physical damage, software malfunction, blackhole/wormhole attacks, route poisoning, etc. In this paper, we present a new routing scheme for IoT networks that avoids connectivity holes by analyzing the activity of wireless nodes and taking appropriate action when required.
This paper presents the details of the roving proxy framework for SMS spam and SMS phishing (SMishing) detection. The framework aims to protect organizations and enterprises from the danger of SMishing attacks. Feasibility and functionality studies of the framework are presented, along with an update process study to define the minimum requirements for the system to adapt to the latest spam and SMishing trends.
With the rapid proliferation of mobile users, spectrum scarcity has become one of the issues that must be addressed. Cognitive radio technology addresses this problem by allowing opportunistic use of spectrum bands. In cognitive radio networks, unlicensed users can use licensed channels without causing harmful interference to licensed users. However, cognitive radio networks can be subject to different security threats, which can cause severe performance degradation. One of the main attacks on these networks is primary user emulation, in which a malicious node emulates the characteristics of the primary user's signals. In this paper, we propose a detection technique for this attack based on RSS-based localization with maximum likelihood estimation. The simulation results show that the proposed technique outperforms the RSS-based localization method in detecting the primary user emulation attacker.
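As a hedged numeric sketch of RSS-based localization with maximum likelihood estimation (the log-distance path-loss parameters, sensor layout, and decision threshold are illustrative assumptions, not the paper's setup), the estimated transmitter position can be compared with the known primary-transmitter location to flag a likely emulator:

```python
# Illustrative sketch: locate a transmitter from RSS readings via maximum
# likelihood under a log-distance path-loss model with Gaussian shadowing.
import numpy as np
from scipy.optimize import minimize

P0, N_EXP = -30.0, 3.0   # assumed reference power (dBm at 1 m) and path-loss exponent
sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)

def expected_rss(tx, sensor):
    d = max(np.linalg.norm(tx - sensor), 1.0)
    return P0 - 10.0 * N_EXP * np.log10(d)

def neg_log_likelihood(tx, rss):
    # With i.i.d. Gaussian shadowing, the ML estimate minimizes squared residuals.
    return sum((r - expected_rss(tx, s)) ** 2 for s, r in zip(sensors, rss))

true_tx = np.array([70.0, 30.0])   # actual (possibly malicious) transmitter
rss = [expected_rss(true_tx, s) + np.random.normal(0, 1.0) for s in sensors]
est = minimize(neg_log_likelihood, x0=np.array([50.0, 50.0]), args=(rss,)).x

primary_location = np.array([20.0, 80.0])   # known licensed transmitter site
is_emulator = np.linalg.norm(est - primary_location) > 20.0  # assumed threshold
print(est, is_emulator)
```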
Style transfer is a research hotspot in computer vision. Although much research has been conducted on it, high-quality style transfer remains a challenge. In this work, we propose an algorithm named ASTCNN, a real-time Arbitrary Style Transfer Convolutional Neural Network. The ASTCNN consists of two independent encoders and a decoder. The encoders extract style and content features from the style and content images, respectively, and the decoder generates the style-transferred image. Experimental results show that ASTCNN achieves higher-quality output images than state-of-the-art style transfer algorithms while requiring 23.3% less floating-point computation.
To address the poor stability and low accuracy of current communication data informatization processing methods, this paper studies the informatization of nonlinear frequency-hopping communication data under a big data security evaluation framework. A frequency-hopping mediation module is added to the frequency-hopping communication security evaluation framework to discretely process communication interference information, and the data parameters of the nonlinear frequency-hopping communication data are corrected and converted using a fast clustering analysis algorithm, completing the informatization processing of nonlinear frequency-hopping communication data under the big data security evaluation framework. Finally, experiments show that the proposed approach effectively improves accuracy and stability.
In this paper we study the problem of computing robust invariant sets for state-constrained perturbed polynomial systems within the Hamilton-Jacobi reachability framework. A robust invariant set is a set of states such that every possible trajectory starting from it never violates the given state constraint, irrespective of the actual perturbation. The main contribution of this work is to describe the maximal robust invariant set as the zero level set of the unique Lipschitz-continuous viscosity solution to a Hamilton-Jacobi-Bellman (HJB) equation. The continuity and uniqueness property of the viscosity solution facilitates the use of existing numerical methods to solve the HJB equation for an appropriate number of state variables in order to obtain an approximation of the maximal robust invariant set. We furthermore propose a method based on semi-definite programming to synthesize robust invariant sets. Some illustrative examples demonstrate the performance of our methods.
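To make the value-function characterization concrete, here is a hedged sketch in common reachability notation (the symbols are assumptions for exposition, not necessarily the paper's exact formulation): with the state constraint set written as \(\{x : g(x) \le 0\}\) and \(\phi(t; x, d)\) denoting the trajectory from \(x\) under perturbation signal \(d\),

\[
V(x) \;=\; \sup_{d(\cdot) \in \mathcal{D}} \; \sup_{t \ge 0} \; g\bigl(\phi(t; x, d)\bigr),
\qquad
\mathcal{R} \;=\; \{\, x \in \mathbb{R}^n : V(x) \le 0 \,\},
\]

so that \(\mathcal{R}\), the zero sub-level set of the value function \(V\) obtained as the viscosity solution of the associated HJB equation, is the maximal robust invariant set: every trajectory starting in \(\mathcal{R}\) satisfies the constraint for all time, regardless of the perturbation.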
A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov Decision Process (MDP), where cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set, and provide theoretical arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (which is typical for Hamilton-Jacobi reachability analysis) to risk-neutral (which is the case for stochastic reachability analysis).
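As a hedged formal sketch of the quantities involved (using the Rockafellar-Uryasev form of CVaR and notation that is assumed here rather than taken from the paper): for a random cost \(Z\) and \(\alpha \in (0, 1]\) the fraction of worst cases considered,

\[
\mathrm{CVaR}_\alpha(Z) \;=\; \min_{t \in \mathbb{R}} \Bigl\{\, t + \tfrac{1}{\alpha}\, \mathbb{E}\bigl[(Z - t)^+\bigr] \Bigr\},
\qquad
\mathcal{S}_\alpha(r) \;=\; \Bigl\{\, x_0 \;:\; \min_{\pi} \mathrm{CVaR}_\alpha\Bigl(\max_{t \in \{0, \dots, T\}} g(x_t)\Bigr) \le r \Bigr\},
\]

where \(g(x) \le 0\) encodes constraint satisfaction, \(\pi\) ranges over control policies, and \(r\) is the required risk level; as \(\alpha \to 0\) the criterion approaches the worst case (Hamilton-Jacobi style), while \(\alpha = 1\) reduces it to the risk-neutral expectation.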
The power communication network is an important infrastructure of the power system. Given its large number of widely distributed business and communication terminals, data protection is essential to the safe and stable operation of the whole power grid. A key challenge is that many nodes need a large number of keys, and a shortage of keys can prevent these nodes from exchanging information safely. To solve this problem, this paper proposes a segmentation and combination technique based on quantum keys to extend a limited key supply. The basic idea is to obtain a division scheme according to different conditions, divide a key into several different sub-keys, and then combine these key segments to generate new keys that are distributed to different terminals in the system. A sufficient supply of keys facilitates key updating and effectively enhances the communication system's ability to resist damage and intrusion. Analysis and calculation further verify that this method can use limited quantum keys to achieve secure transmission of business data for a large number of terminals.
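As a hedged illustration of the segment-and-combine idea (the segment size, hash-based derivation, and function names are assumptions for illustration, not the proposed scheme), the sketch below splits one quantum-distributed key into fixed-size segments and derives per-terminal keys from distinct combinations of those segments:

```python
# Illustrative sketch (assumed segment size and derivation): split one
# quantum-distributed key into segments and derive per-terminal keys by
# hashing distinct pairs of segments.
import hashlib
from itertools import combinations

def split_key(master_key: bytes, segment_len: int = 8):
    """Divide the master key into fixed-size segments."""
    return [master_key[i:i + segment_len]
            for i in range(0, len(master_key), segment_len)]

def derive_keys(segments, terminals):
    """Assign each terminal a key derived from a distinct pair of segments."""
    derived = {}
    for terminal, pair in zip(terminals, combinations(segments, 2)):
        derived[terminal] = hashlib.sha256(b"".join(pair)).digest()
    return derived

master = bytes(range(32))       # stand-in for a 256-bit quantum key
segments = split_key(master)    # 4 segments of 8 bytes each
keys = derive_keys(segments, ["T1", "T2", "T3", "T4", "T5", "T6"])
print({t: k.hex()[:16] for t, k in keys.items()})
```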
In this paper, we present RT-Gang: a novel real-time gang scheduling framework that enforces a one-gang-at-a-time policy. We find that, in a multicore platform, co-scheduling multiple parallel real-time tasks would require highly pessimistic worst-case execution time (WCET) and schedulability analysis - even when there are enough cores - due to contention in shared hardware resources such as the cache and DRAM controller. In RT-Gang, all threads of a parallel real-time task form a real-time gang, and the scheduler globally enforces the one-gang-at-a-time scheduling policy to guarantee tight and accurate task WCET. To minimize under-utilization, we integrate a state-of-the-art memory bandwidth throttling framework to allow safe execution of best-effort tasks. Specifically, any idle cores, if they exist, are used to schedule best-effort tasks, but their maximum memory bandwidth usage is strictly throttled to tightly bound the interference to real-time gang tasks. We implement RT-Gang in the Linux kernel and evaluate it on two representative embedded multicore platforms using both synthetic and real-world DNN workloads. The results show that RT-Gang dramatically improves system predictability and that its overhead is negligible.
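As a hedged, purely illustrative sketch of the one-gang-at-a-time idea (a user-space toy, not the RT-Gang kernel implementation; the task names and throttling placeholder are assumptions), only the highest-priority ready gang occupies the real-time cores, and any leftover cores run best-effort work that the real system would bandwidth-throttle:

```python
# Toy illustration of a one-gang-at-a-time core assignment policy.
from dataclasses import dataclass

@dataclass
class Gang:
    name: str
    priority: int   # lower value = higher priority
    threads: int    # number of parallel threads in the gang

def schedule(ready_gangs, best_effort, num_cores):
    """Return a core -> task mapping under the one-gang-at-a-time rule."""
    assignment = {}
    if ready_gangs:
        gang = min(ready_gangs, key=lambda g: g.priority)
        for core in range(min(gang.threads, num_cores)):
            assignment[core] = f"{gang.name}:thread{core}"
    # Idle cores, if any, run best-effort tasks (bandwidth-throttled in the real system).
    idle = [c for c in range(num_cores) if c not in assignment]
    for core, task in zip(idle, best_effort):
        assignment[core] = f"{task} (throttled)"
    return assignment

print(schedule([Gang("dnn_infer", 1, 2), Gang("vision_pipe", 2, 4)],
               ["log_uploader", "indexer"], num_cores=4))
```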
The jeopardy posed by cybersecurity threats to electronic systems is persistent and growing. Such threats are present in hardware, through means such as Trojans and counterfeits, and in software, through means such as viruses and other malware. Against such threats, we propose a range of embedded instruments that are capable of real-time hardware assurance and online monitoring.