Bibliography
This paper identifies a small, essential set of static software code metrics linked to the software product quality characteristics of reliability and maintainability and to the most commonly identified sources of technical debt. An open-source plug-in that calculates and visualizes these metrics was created for the Understand code analysis tool. The plug-in was developed as a first step in an ongoing project aimed at applying case-based reasoning to the issue of software product quality.
With the rapid development of the smart grid, a large number of intelligent sensors and meters have been introduced into the distribution network, which inevitably increases the integration of physical and cyber networks and brings potential security threats to the operating system. In this paper, the functions of the information system of the distribution network are described when cyber attacks appear at the intelligent electronic devices (IED) or at the distribution main station. The effects on the distribution network under normal operating conditions and during the fault recovery process are analyzed, and a reliability assessment model of the distribution network considering cyber attacks is constructed. Finally, the IEEE 33-bus distribution system is taken as a test system to present the evaluation process based on the proposed model.
The Internet of Things has become a subject of interest across different industry domains. It includes 6LoWPAN (Low-Power Wireless Personal Area Network), which is used for a variety of applications including home automation, sensor networks, and manufacturing and industrial applications. However, gathering such a huge amount of data from such different domains raises problems of traffic congestion, reliability, and energy efficiency. In order to address such problems, a content-based routing (CBR) technique is proposed, where routing paths are decided according to the type of content. By routing correlated data to hop nodes for processing, a higher data aggregation ratio can be obtained, which in turn reduces traffic congestion and minimizes energy consumption. CBR is implemented on top of the existing RPL (Routing Protocol for Low-Power and Lossy Networks) in the Contiki operating system and evaluated using the Cooja simulator. The analysis is carried out on the basis of average power consumption, packet delivery ratio, and related metrics.
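A minimal sketch of the content-based next-hop idea described in the abstract above (the node names, content classes, and aggregation table are illustrative assumptions, not the paper's implementation): packets carrying a content class that a neighbor already aggregates are steered to that neighbor, and all other traffic falls back to the default RPL parent.

```python
# Illustrative content-based next-hop selection on top of a default
# (RPL-like) parent; content classes and tables are assumed for the sketch.
from typing import Dict

class CBRNode:
    def __init__(self, node_id: str, default_parent: str):
        self.node_id = node_id
        self.default_parent = default_parent
        # content class -> neighbor already aggregating that class
        self.aggregation_table: Dict[str, str] = {}

    def learn_aggregator(self, content_class: str, neighbor: str) -> None:
        """Remember that a neighbor aggregates a given content class."""
        self.aggregation_table[content_class] = neighbor

    def next_hop(self, content_class: str) -> str:
        """Prefer a neighbor that aggregates this class; else the default parent."""
        return self.aggregation_table.get(content_class, self.default_parent)

node = CBRNode("sensor-7", default_parent="router-1")
node.learn_aggregator("temperature", "sensor-3")
print(node.next_hop("temperature"))  # -> sensor-3 (correlated data gets aggregated)
print(node.next_hop("humidity"))     # -> router-1 (default RPL parent)
```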
Traceability has grown from being a specialized need for certain safety-critical segments of the industry to being a recognized value-add tool for the industry as a whole, one that can be applied to manual and automated processes end to end throughout the supply chain. The perception persists that traceability data collection is a burden that provides value only when the rarest and most disastrous of events take place. Disparate standards, mainly dictated by large OEM companies in the market, have evolved in the industry and create confusion as a multitude of requirements and definitions proliferate. The intent of the IPC-1782 project is to bring the whole principle of traceability up to date and enable business to move faster, increase revenue, increase productivity, and decrease costs as a result of increased trust. Traceability as defined in this standard will represent the most effective quality tool available, becoming an intrinsic part of best-practice operations, with the encouragement of automated data collection from existing manufacturing systems that works well with Industry 4.0, integrating quality, reliability, product safety, predictive (routine, preventative, and corrective) maintenance, throughput, manufacturing, engineering, and supply-chain data, reducing cost of ownership as well as ensuring timeliness and accuracy all the way from a finished product back through to the initial materials and the granular attributes of the processes along the way. The goal of this standard is to create a single expandable and extendable data structure that can be adopted for all levels of traceability and enable information to be exchanged easily, as appropriate, across many industries. The scope includes support for the most demanding instances of detail and integrity, such as those required by critical safety systems, all the way through to situations where only basic traceability, such as for simple consumer products, is required. A key driver for the adoption of the standard is the ability to find a relevant and achievable level of traceability that exactly meets the requirements following a risk assessment of the business. The wealth of data accessible from traceability for analysis (e.g., Big Data) can easily and quickly yield information that raises expectations of very significant quality and performance improvements, while providing the necessary protection against the costs of issues in the market and very timely information to regulatory bodies, consumers, and customers as appropriate. This information can also be used to quickly raise yields, drive product innovation that resonates with consumers, and help shape development tests and design requirements that are meaningful to the marketplace. Leveraging IPC-1782 creates the best value of component traceability for a business.
This paper describes a novel aerospace electronic component risk assessment methodology and supporting virtual laboratory structure designed to augment existing supply chain management practices and aid in Microelectronics Trust Assurance. The toolkit and methodology apply structure to the unclear and evolving risk assessment problem, allowing quantification of key risks affecting both advanced and obsolete systems that rely on semiconductor technologies. The impacts of logistics and supply chain risk, technology and counterfeit risk, and faulty component risk on trusted and non-trusted procurement options are quantified. The benefits of component testing for part reliability are assessed and incorporated into counterfeit mitigation calculations. The toolkit and methodology seek to assist acquisition staff by providing actionable decision data regarding the increasing threat of counterfeit components: assessing the risks faced by systems, identifying mitigation strategies to reduce this risk, and resolving these risks through the optimal test and procurement path based on the component criticality risk tolerance of the program.
Cyber-physical systems connect the physical world and the information world through sensors and actuators. These sensors are usually small embedded systems with many limitations on wireless communication, computing, and storage. This paper proposes a lightweight coding method for secure and reliable transmission over wireless communication links in cyber-physical systems. The reliability of transmission is provided by forward error correction. To ensure confidentiality, a different encryption matrix is used for each coding operation, generated from the packet sequence number, so that replay attacks and other cyber threats can be resisted simultaneously. Issues of prior reliable transmission and secure communication protocols in the wireless networks of cyber-physical systems, such as large protocol overhead, high interaction delay, and high computation cost, are reduced.
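A minimal illustrative sketch of the idea of a per-packet coding matrix derived from the sequence number (not the paper's exact construction): both sender and receiver can regenerate the same matrix from the sequence number without exchanging per-packet keys, and a simple repetition code stands in for the forward error correction. The block size, PRNG, and repetition FEC are assumptions.

```python
# Sketch: derive a per-packet binary coding matrix from the sequence number,
# mix the payload with it, then add a simple repetition-style FEC.
import numpy as np

K = 8  # payload bits per block (assumed)

def coding_matrix(seq_num: int) -> np.ndarray:
    """Pseudo-random binary matrix, invertible over GF(2), seeded by seq_num."""
    rng = np.random.default_rng(seq_num)
    while True:
        m = rng.integers(0, 2, size=(K, K), dtype=np.uint8)
        if round(np.linalg.det(m.astype(float))) % 2 == 1:  # invertible mod 2
            return m

def encode(bits: np.ndarray, seq_num: int, repeat: int = 3) -> np.ndarray:
    """Mix the payload with the per-packet matrix, then repeat bits as FEC."""
    mixed = coding_matrix(seq_num) @ bits % 2   # 'encryption' step
    return np.repeat(mixed, repeat)             # reliability (FEC) step

payload = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
codeword = encode(payload, seq_num=42)
print(codeword)  # the receiver regenerates coding_matrix(42), majority-votes
                 # the repetitions, and inverts the matrix over GF(2)
```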
Transmission techniques based on channel coding with feedback are proposed in this paper to enhance the security of wireless communications systems at the physical layer. Reliable and secure transmission over an additive noise Gaussian wiretap channel is investigated using Bose-Chaudhuri-Hocquenghem (BCH) and Low-Density Parity-Check (LDPC) channel codes. A hybrid automatic repeat-request (HARQ) protocol is used to allow for the retransmission of coded packets requested by the intended receiver (Bob). It is assumed that an eavesdropper (Eve) has access to all forward and feedback transmitted packets. To limit the information leakage to Eve, retransmitted packets are subdivided into smaller granular subpackets. Retransmissions are stopped as soon as the decoding process at the legitimate receiver (Bob) converges. For the hard-decision decoded BCH codes, a framework to compute the frame error probability with granular HARQ is proposed. For LDPC codes, the HARQ retransmission requests are based on received-symbol likelihood computations: the legitimate recipient requests retransmission of the set of bits that are most likely to help successful LDPC decoding. The performance of the proposed techniques is assessed for null and negative security gap (SG) values, that is, when the eavesdropper's channel benefits from equal or better channel conditions than the legitimate channel.
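A minimal sketch of the likelihood-driven request step mentioned above (the LLR values and block size are made-up assumptions, and no actual LDPC decoding is performed): the receiver ranks bit positions by the magnitude of their log-likelihood ratios and asks for the least reliable ones first.

```python
# Sketch: pick the least reliable bit positions for a granular HARQ request.
import numpy as np

def least_reliable_bits(llrs: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k bits with the smallest |LLR|."""
    return np.argsort(np.abs(llrs))[:k]

llrs = np.array([4.2, -0.3, 1.1, -5.0, 0.05, 2.4, -0.8, 3.3])  # assumed values
request = least_reliable_bits(llrs, k=3)
print(request)  # positions [4, 1, 6] would be requested for retransmission
```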
Mobile ad hoc networks (MANETs) have a wide range of applications in military and civilian domains. Routing protocols for MANETs, e.g., AODV and DSR, generally assume that nodes are trustworthy and cooperative. This assumption makes wireless ad hoc networks more prone to interception and manipulation, which further opens possibilities for various types of Denial of Service (DoS) attacks. In order to mitigate the effect of malicious nodes, a reputation-based secure routing protocol is proposed in this paper. The basic idea of the proposed scheme is to organize the network with 25 nodes deployed in a 5×5 grid structure. Each normal node in the network has a specific prime number, which acts as its node identity. A Backbone Network (BBN) is also deployed over the 5×5 grid structure. The proposed scheme uses a legitimacy value table and a reputation level table maintained by the backbone network. These tables are used to select the best path after avoiding malicious nodes during path discovery. Based on the values collected in their legitimacy and reputation level tables, backbone nodes separate and avoid the malicious nodes while establishing the path between source and destination.
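A minimal sketch of path discovery that avoids suspected malicious nodes on a 5×5 grid, in the spirit of the scheme above (the reputation values, the threshold, and the breadth-first search are assumptions, not the paper's protocol):

```python
# Sketch: grid path discovery that skips nodes whose reputation is too low.
from collections import deque
from typing import Dict, List, Optional, Tuple

Node = Tuple[int, int]

def neighbors(n: Node) -> List[Node]:
    x, y = n
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < 5 and 0 <= b < 5]

def find_path(src: Node, dst: Node, reputation: Dict[Node, float],
              threshold: float = 0.5) -> Optional[List[Node]]:
    """Breadth-first search that never expands a low-reputation node."""
    queue, parent = deque([src]), {src: None}
    while queue:
        cur = queue.popleft()
        if cur == dst:
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for nxt in neighbors(cur):
            if nxt not in parent and reputation.get(nxt, 1.0) >= threshold:
                parent[nxt] = cur
                queue.append(nxt)
    return None  # no path that avoids all suspected malicious nodes

# Example: two nodes flagged as malicious (low reputation) are routed around.
rep = {(x, y): 1.0 for x in range(5) for y in range(5)}
rep[(1, 0)] = rep[(1, 1)] = 0.1
print(find_path((0, 0), (4, 4), rep))
```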
Tactical MANETs are deployed in challenging situations involving node mobility, radio interference together with malicious jamming attacks, and difficult terrain. Jamming attacks are especially harmful to the reliability of wireless communication, as they can effectively disrupt communication between any node pair. The nature of tactical MANETs renders most existing reliable routing schemes for ordinary wireless mobile networks ineffective. Routing protocols in tactical MANETs face serious security and reliability challenges, and selecting a long-lasting, stable route is a critical task. Due to the lack of accurate acquisition and evaluation of transmission characteristics, routing algorithms may suffer from continual route reconstruction and high control overhead. This paper studies the impact of jamming and interference on common tactical communication protocols and presents a neighbor dependency-based reliable routing algorithm. The neighbor dependency is based on channel state information evaluated by the exponential smoothing method, since the choice of the next-hop neighbor greatly affects transmission reliability. Finally, the performance of the reliable routing protocol based on neighbor dependency is tested in OPNET and compared with the classical AODV algorithm and the improved link-cost-based AODV (CAODV) algorithm. The simulation results show that the protocol presented in this paper provides better data transmission reliability.
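A minimal sketch of the exponential smoothing step mentioned above (the smoothing factor, the channel samples, and the neighbor names are assumptions): each neighbor's channel quality estimate is a weighted blend of the newest measurement and the previous estimate, and the neighbor with the best smoothed value is preferred as next hop.

```python
# Sketch: exponentially smoothed per-neighbor channel quality and next-hop choice.
from typing import Dict, List

def smooth(prev: float, sample: float, alpha: float = 0.3) -> float:
    """Exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    return alpha * sample + (1 - alpha) * prev

def best_next_hop(history: Dict[str, List[float]], alpha: float = 0.3) -> str:
    """Smooth each neighbor's samples and pick the highest final estimate."""
    estimates = {}
    for neighbor, samples in history.items():
        s = samples[0]
        for x in samples[1:]:
            s = smooth(s, x, alpha)
        estimates[neighbor] = s
    return max(estimates, key=estimates.get)

# Example: normalized link-quality samples for two neighbors (assumed values).
samples = {"node-A": [0.9, 0.4, 0.3, 0.2],   # degrading, possibly jammed
           "node-B": [0.6, 0.65, 0.7, 0.7]}  # steady
print(best_next_hop(samples))  # -> node-B
```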
The paper presents an example Sensor-Cloud architecture that integrates security as a native ingredient. It is based on a multi-layer client-server model with separation of physical and virtual instances of sensors, gateways, application servers and data storage. It proposes the application of virtualised sensor nodes as a prerequisite for increasing security, privacy, reliability and data protection. All main concerns in Sensor-Cloud security are addressed, from secure association, authentication and authorization to privacy, data integrity and data protection. The main concept is that securing the virtual instances is easier to implement, manage and audit, and that the only bottleneck is the physical interaction between a real sensor and its virtual reflection.
The evolution of convolutional neural networks (CNNs) into more complex forms of organization, with additional layers, larger convolutions and increasing connections, established the state of the art in accuracy for detection and classification challenges in images. Moreover, as CNNs evolved to a point where Gigabytes of memory are required for their operation, it becomes fundamental to understand how their inference capabilities can be impaired if data elements become corrupted in memory. This paper introduces fault injection in these systems by simulating failing bit-cells in hardware memories, brought on by relaxing the assumption of 100% reliable operation. We analyze the behavior of these networks during inference under severe fault-injection rates and apply fault mitigation strategies to improve the CNNs' resilience. For the MNIST dataset, we show that 8x less memory is required for the feature-map memory space, and that in sub-100% reliable operation, fault-injection rates up to 10^-1 (with most-significant-bit protection) incur only a 1% degradation in error probability. Furthermore, considering the offload of the feature-map memory to an embedded dynamic RAM (eDRAM) system, using technology nodes from 65 down to 28 nm, up to 73-80% improved power efficiency can be obtained.
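A minimal sketch of bit-flip fault injection into a quantized feature map with most-significant-bit protection, in the spirit of the experiment above (the fault rate, 8-bit word width, and tensor shape are assumptions, not the paper's setup):

```python
# Sketch: flip stored bits of an 8-bit feature map with a given probability,
# optionally leaving the most significant bit untouched.
import numpy as np

def inject_faults(fmap: np.ndarray, rate: float, protect_msb: bool = True,
                  seed: int = 0) -> np.ndarray:
    """Flip each stored bit of an 8-bit word independently with probability `rate`."""
    rng = np.random.default_rng(seed)
    n_bits = 7 if protect_msb else 8     # bit 7 (the MSB) is spared if protected
    faulty = fmap.copy()
    for b in range(n_bits):
        flips = rng.random(faulty.shape) < rate        # which words get bit b flipped
        faulty ^= flips.astype(np.uint8) * np.uint8(1 << b)
    return faulty

fmap = np.random.default_rng(1).integers(0, 256, size=(4, 4), dtype=np.uint8)
faulty = inject_faults(fmap, rate=1e-1)
print(np.count_nonzero(fmap != faulty), "corrupted words out of", fmap.size)
```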
The large number of malicious files produced daily outpaces the current capacity of malware analysis and detection. For example, Intel Security Labs reported that during the second quarter of 2016 their system found more than 40M new malware samples [1]. The damage of malware attacks is also increasingly devastating, as witnessed by the recent Cryptowall malware that has reportedly generated more than \$325M in ransom payments to its perpetrators [2]. In terms of defense, it has been widely accepted that the traditional approach based on byte-string signatures is increasingly ineffective, especially for new malware samples and sophisticated variants of existing ones. New techniques are therefore needed for effective defense against malware. Motivated by this problem, the paper investigates a new defense technique against malware. The technique presented in this paper is used for automatic identification of the malware packers that are used to obfuscate malware programs. Signatures of malware packers and obfuscators are extracted from the control flow graphs (CFGs) of malware samples. Unlike conventional byte signatures, which can be evaded by simply modifying one or multiple bytes in malware samples, these signatures are more difficult to evade. For example, CFG-based signatures are shown to be resilient against instruction modifications and shuffling, as a single signature is sufficient for detecting mildly different versions of the same malware. Last but not least, the process of extracting CFG-based signatures is also made automatic.
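A minimal sketch of why a structure-based signature resists byte-level edits (the example graphs and the hashing scheme are assumptions, not the paper's algorithm): the signature is derived only from the shape of the control flow graph, so renaming or shuffling instructions inside basic blocks leaves it unchanged.

```python
# Sketch: structural signature of a control flow graph, hashing the sorted
# multiset of (out-degree, successor out-degree) edge features instead of bytes.
import hashlib
from typing import Dict, List

def cfg_signature(cfg: Dict[str, List[str]]) -> str:
    """Hash structural edge features of the CFG rather than raw bytes."""
    out_deg = {node: len(succs) for node, succs in cfg.items()}
    features = sorted((out_deg[u], out_deg.get(v, 0))
                      for u, succs in cfg.items() for v in succs)
    return hashlib.sha256(repr(features).encode()).hexdigest()

# Two "variants" whose blocks differ internally but share the same structure
# produce the same signature, because only the graph shape is hashed.
unpacker_v1 = {"entry": ["decrypt"], "decrypt": ["loop"],
               "loop": ["loop", "jump_oep"], "jump_oep": []}
unpacker_v2 = {"start": ["decode"], "decode": ["spin"],
               "spin": ["spin", "tail"], "tail": []}
print(cfg_signature(unpacker_v1) == cfg_signature(unpacker_v2))  # True
```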
PUFs are an emerging security primitive that offers a lightweight alternative for highly constrained devices like RFIDs. PUFs used in authentication protocols, however, suffer from unreliable outputs. This hinders their scaling, which is necessary for increased security, and also makes them problematic to use with cryptographic functions. We introduce a new Dual Arbiter PUF design that reveals additional information concerning the stability of the outputs. We then employ a novel filtering scheme that discards unreliable outputs with a minimum number of evaluations, greatly reducing the bit error rate (BER) of the PUF.
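A minimal sketch of the filtering idea (the toy delay model and the stability threshold are assumptions, not the actual Dual Arbiter circuit): a response is kept only when the race between the two signal paths is decided by a comfortable margin, so marginal, likely-unstable responses are discarded.

```python
# Sketch: discard PUF responses whose decision margin is below a threshold.
import random

def dual_arbiter(challenge: int, rng: random.Random) -> tuple:
    """Toy delay-difference model: return (response_bit, margin)."""
    nominal = random.Random(challenge).uniform(-1.0, 1.0)  # device-specific delay
    delta = nominal + rng.gauss(0.0, 0.05)                 # per-query noise
    return (1 if delta > 0 else 0, abs(delta))

def filtered_responses(challenges, threshold: float = 0.2, seed: int = 0):
    """Keep only challenges whose margin exceeds the stability threshold."""
    rng = random.Random(seed)
    kept = {}
    for c in challenges:
        bit, margin = dual_arbiter(c, rng)
        if margin >= threshold:            # stable enough to use
            kept[c] = bit
    return kept

stable = filtered_responses(range(100))
print(f"{len(stable)} of 100 responses kept after filtering")
```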
Software Defined Networking (SDN) is an emerging paradigm that changes the way networks are managed by separating the control plane from the data plane and making networks programmable. The separation brings flexibility, automation and orchestration, and offers savings in both capital and operational expenditure. Despite all the advantages offered by SDN, it introduces new threats that did not exist before or were harder to exploit in traditional networks, making network penetration potentially easier. One of the key threats to SDN is the authentication and authorisation of the network applications that control network behaviour (unlike traditional networks, where network devices such as routers and switches are autonomous and run proprietary software and protocols to control the network). This paper proposes a mechanism that helps the control layer authenticate network applications and set authorisation permissions that constrain manipulation of network resources.
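A minimal sketch of application authentication and permission checking at a controller's northbound interface, illustrating the kind of mechanism described above (the token scheme, shared secret, and permission names are assumptions, not the paper's design):

```python
# Sketch: authenticate an SDN application by token, then check its permissions.
import hashlib
import hmac

CONTROLLER_SECRET = b"assumed-shared-secret"
PERMISSIONS = {"monitor-app": {"read_topology"},
               "firewall-app": {"read_topology", "install_flow"}}

def app_token(app_id: str) -> str:
    """Token the controller issues to a registered application."""
    return hmac.new(CONTROLLER_SECRET, app_id.encode(), hashlib.sha256).hexdigest()

def authorise(app_id: str, token: str, action: str) -> bool:
    """Authenticate the app, then check the requested action against its permissions."""
    if not hmac.compare_digest(token, app_token(app_id)):
        return False                                    # authentication failed
    return action in PERMISSIONS.get(app_id, set())     # authorisation check

tok = app_token("monitor-app")
print(authorise("monitor-app", tok, "read_topology"))  # True
print(authorise("monitor-app", tok, "install_flow"))   # False: not permitted
```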
Wireless wearable embedded devices dominate the Internet of Things (IoT) due to their ability to provide useful information about the body and its local environment. The constrained resources of low-power processors, however, pose a significant challenge to run-time error logging and hence to product reliability. Error logs record the error type and often the system state following the occurrence of an error. Traditional error logging algorithms attempt to balance storage and accuracy by selectively overwriting past log entries. Since a specific combination of firmware faults may result in system instability, preserving all error occurrences becomes increasingly beneficial as IoT systems become more complex. In this paper, a novel hash-based error logging algorithm is presented which has both constant insertion time and constant memory while exhibiting no false negatives and an acceptable false positive error rate. Both theoretical analysis and simulations are used to compare the performance of the hash-based and traditional approaches.
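A minimal sketch in the spirit of the constant-memory, no-false-negative log described above (the Bloom-filter-style construction, bit-array size, and double hashing are assumptions, not necessarily the paper's algorithm): each error record sets a few bits in a fixed array, so insertion cost and memory stay constant while membership queries can only err on the side of false positives.

```python
# Sketch: fixed-memory error log with Bloom-filter semantics.
import hashlib

class ErrorLog:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)      # constant memory footprint

    def _positions(self, record: str):
        digest = hashlib.sha256(record.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def log(self, record: str) -> None:
        """Constant-time insertion: set k bits derived from the record."""
        for p in self._positions(record):
            self.bits[p // 8] |= 1 << (p % 8)

    def seen(self, record: str) -> bool:
        """True if possibly logged (false positives possible, no false negatives)."""
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(record))

log = ErrorLog()
log.log("I2C_TIMEOUT@task=sensor_read,state=0x3F")
print(log.seen("I2C_TIMEOUT@task=sensor_read,state=0x3F"))  # True
print(log.seen("WDT_RESET@task=ble_stack,state=0x01"))      # almost surely False
```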
The paper suggests several techniques for computer network risk assessment based on the Common Vulnerability Scoring System (CVSS) and attack modeling. The techniques use a set of integrated security metrics and consider input data from security information and event management (SIEM) systems. The risk assessment techniques differ according to the input data used, allowing risk assessments that meet given requirements on accuracy and efficiency. Input data includes network characteristics, attacks, attacker characteristics, security events and countermeasures. The tool that implements these techniques is presented, and experiments demonstrate the operation of the techniques in different security situations.
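A minimal sketch of one common way to turn CVSS scores along a modeled attack path into a single risk value (the score-to-probability mapping, the example path, and the asset impact weight are assumptions, not the paper's technique): base scores are scaled to per-step success probabilities, multiplied along the path, and weighted by the impact of the target asset.

```python
# Sketch: risk estimate from CVSS base scores along a modeled attack path.
from math import prod

def path_risk(cvss_scores, asset_impact: float) -> float:
    """Risk = (product of per-step success probabilities) * asset impact."""
    step_probs = [score / 10.0 for score in cvss_scores]  # crude CVSS -> probability
    return prod(step_probs) * asset_impact

# Assumed path: phishing foothold (7.5), privilege escalation (8.8), DB access (9.1)
print(round(path_risk([7.5, 8.8, 9.1], asset_impact=100.0), 1))  # -> 60.1
```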
There are vast amounts of information in our world, and accessing the most accurate information quickly is becoming more difficult and complicated. A lot of relevant information gets ignored, which leads to much duplication of work and effort. The focus therefore tends to be on rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Due to the massive number of search results, the user will normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information filtering system that predicts the relevance of retrieved information to the user's needs according to some criteria, and can thus provide the user with the results that best fit those needs. Services provided through the web normally supply massive information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with trust information, so that recommendations are ranked according to their level of trust. In our research, we produced a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments and feedback from users are trusted. We customized the concept of LoT (Level of Trust) [1], since it can cover medical, shopping and learning through social media. The proposed system TRS\_LoT provides trusted recommendations to users with a high percentage of accuracy. A set of 300 posts with more than 5000 comments from ``Amazon'' was selected as the dataset, and the experiment was conducted on this dataset based on post ratings.
Establishing secret and reliable wireless communication is a challenging task that is of paramount importance. In this paper, we investigate the physical layer security of the legitimate transmission link of a user that assists an Intrusion Detection System (IDS) in detecting eavesdropping and jamming attacks, in the presence of an adversary capable of conducting either attack. The user faces the challenge of whether to transmit, thus becoming vulnerable to an eavesdropping or a jamming attack, or to keep silent, in which case his/her transmission is delayed. The adversary likewise faces the challenge of whether to conduct an eavesdropping or a jamming attack without being detected. We model the interactions between the user and the adversary as a two-state stochastic game. Explicit solutions characterize some properties of the game while highlighting interesting strategies embraced by the user and the adversary. Results show that our proposed system outperforms current systems in terms of communication secrecy.
The wireless spectrum is a scarce resource, and the number of wireless terminals is constantly growing. One way to mitigate this strong constraint for wireless traffic is the use of dynamic mechanisms to utilize the spectrum, such as cognitive and software-defined radios. This is especially important for the upcoming wireless sensor and actuator networks in aircraft, where real-time guarantees play an important role in the network. Future wireless networks in aircraft need to be scalable, cater to the specific requirements of avionics (e.g., standardization and certification), and provide interoperability with existing technologies. In this paper, we demonstrate that dynamic network reconfigurability is a solution to the aforementioned challenges. We support this claim by surveying several flexible approaches in the context of wireless sensor and actuator networks in aircraft. More specifically, we examine the concept of dynamic resource management, accomplished through more flexible transceiver hardware and by employing dedicated spectrum agents. Subsequently, we evaluate the advantages of cross-layer network architectures, which overcome the fixed layering of current network stacks in an effort to provide quality of service for event-based and time-triggered traffic. Lastly, the challenges related to implementing the aforementioned mechanisms in wireless sensor and actuator networks in aircraft are elaborated, and key requirements for future research are summarized.
The application of mobile Wireless Sensor Networks (WSNs) with a large number of participants poses many challenges. For instance, high transmission loss rates, caused among other things by collisions, might occur. Additionally, WSNs frequently operate under harsh conditions, where a high probability of link or node failures is inherent. This makes reliable data maintenance a key issue. Existing approaches developed to maintain data dependably in WSNs often perform well either in highly dynamic or in completely static scenarios, or require complex calculations. Herein, we present Network Coding based Multicast Growth Codes (MCGC), a solution for reliable data maintenance in large-scale WSNs. MCGC are able to tolerate high fault rates and reconstruct more of the originally collected data in a shorter period of time than existing approaches. Simulation results show performance improvements of up to 75% in comparison to Growth Codes (GC). These results are achieved independently of the system's dynamics and despite high fault probabilities.
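A minimal sketch of the growth-code idea underlying the comparison above (symbol sizes, the degree schedule, and the RNG are assumptions, and MCGC's multicast and network-coding extensions are not modeled): each encoded symbol is the XOR of a small random subset of source symbols, with the subset size (degree) growing over time so that later symbols help resolve the remaining unknowns.

```python
# Sketch: growth-code style encoding, XORing a random subset of source symbols
# whose size (degree) grows over time.
import random
from functools import reduce

def encode_symbol(source: list, degree: int, rng: random.Random):
    """XOR `degree` randomly chosen source symbols; return (indices, value)."""
    idx = rng.sample(range(len(source)), degree)
    value = reduce(lambda a, b: a ^ b, (source[i] for i in idx))
    return idx, value

rng = random.Random(0)
source = [rng.getrandbits(8) for _ in range(8)]        # 8 one-byte data items
for t, degree in enumerate([1, 1, 2, 2, 3], start=1):  # degree grows over time
    idx, val = encode_symbol(source, degree, rng)
    print(f"round {t}: XOR of {idx} -> {val:#04x}")
```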