Bibliography
The complexity and scale of modern software often lead to overlooked programming errors and security vulnerabilities. Developers commonly rely on automated tools, such as static analysis tools, to find bugs and vulnerabilities. Static analysis tools are widely used because they can understand nontrivial program behaviors, scale to millions of lines of code, and detect subtle bugs. However, they are known to generate an excess of false alarms, which hinders their adoption: it is counterproductive for developers to wade through a long list of reported issues only to find a few true positives. One proposed way to suppress false positives is to use machine learning to identify them. However, training machine learning models requires good-quality labeled datasets. For this purpose, we developed D2A [3], a differential-analysis-based approach that uses the commit history of a code repository to create a labeled dataset of Infer [2] static analysis output.
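The abstract leaves the pipeline implicit; a minimal sketch of the differential-labeling idea it describes, assuming Infer's standard CLI and report format and with the build command and issue fingerprinting as hypothetical placeholders, might look as follows:

```python
# Hypothetical sketch of differential labeling in the spirit of D2A:
# run a static analyzer before and after a bug-fixing commit and label
# issues that the fix made disappear as likely true positives.
import json
import subprocess

def run_infer(commit: str) -> set:
    """Check out `commit`, run Infer, and return a set of issue
    fingerprints (bug type, file, procedure). Hypothetical helper:
    the build command and fingerprint fields are assumptions."""
    subprocess.run(["git", "checkout", commit], check=True)
    subprocess.run(["infer", "run", "--", "make"], check=True)
    with open("infer-out/report.json") as f:
        issues = json.load(f)
    return {(i["bug_type"], i["file"], i["procedure"]) for i in issues}

def label_pairs(before_commit: str, fix_commit: str):
    before = run_infer(before_commit)
    after = run_infer(fix_commit)
    likely_bugs = before - after          # disappeared after the fix
    likely_false_alarms = before & after  # persisted across the fix
    return likely_bugs, likely_false_alarms
```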
Flexibility and speed in the development of new industrial machines are essential factors for the success of capital goods industries. When assembling a printed circuit board (PCB), the whole process is automatic as long as all the components are surface-mounted devices (SMDs). However, many PCBs also require components that are not SMDs, called pin-through-hole (PTH) components, which must be inserted manually, leading to delays in the production line. This work proposes and validates a prototype work cell, based on a collaborative robot and vision systems, whose objective is to insert these components in a completely autonomous or semi-autonomous way. Several tests were performed to validate the work cell, demonstrating its correct implementation and the possibility of replacing the human worker in this PCB assembly task.
We propose new, more efficient targeted whitebox attacks against deep neural networks. Our attacks better align with the attacker’s goal: (1) tricking a model to assign higher probability to the target class than to any other class, while (2) staying within an ϵ-distance of the attacked input. First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it. Second, we propose a new attack method, Constrained Gradient Descent (CGD), using a refinement of our loss function that captures both (1) and (2). CGD seeks to satisfy both attacker objectives (misclassification and bounded ℓp-norm) in a principled manner, as part of the optimization, instead of via ad hoc postprocessing techniques (e.g., projection or clipping). We show that CGD is more successful on CIFAR10 (0.9–4.2%) and ImageNet (8.6–13.6%) than state-of-the-art attacks while consuming less time (11.4–18.8%). Statistical tests confirm that our attack outperforms others against leading defenses on different datasets and values of ϵ.
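The abstract does not give the loss's exact form; a minimal PyTorch sketch of a margin-style targeted loss that directly encodes objective (1), paired with a simple clipped gradient step for objective (2), might look as follows. Note the clipping here is only an illustrative stand-in: the point of CGD is precisely to handle the distance constraint inside the optimization rather than by such postprocessing.

```python
# Hypothetical sketch of a targeted margin loss: the target logit must
# exceed every other logit. The paper's exact loss and the CGD update
# rule may differ; the clamp below is a stand-in, not CGD itself.
import torch

def targeted_margin_loss(logits: torch.Tensor, target: int) -> torch.Tensor:
    """Negative margin between the target logit and the best other logit.
    Minimizing this pushes the target class above all competitors."""
    other = torch.cat([logits[:target], logits[target + 1:]])
    return other.max() - logits[target]

def attack_step(model, x_adv, x_orig, target, step_size=0.01, eps=8 / 255):
    """One gradient step on the margin loss, then a clamp back into the
    l-infinity ball of radius eps (assumes inputs scaled to [0, 1])."""
    x_adv = x_adv.clone().requires_grad_(True)
    loss = targeted_margin_loss(model(x_adv.unsqueeze(0)).squeeze(0), target)
    loss.backward()
    with torch.no_grad():
        x_new = x_adv - step_size * x_adv.grad.sign()
        x_new = torch.clamp(x_new, x_orig - eps, x_orig + eps).clamp(0, 1)
    return x_new.detach()
```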
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems that the system is unaware of. One way of achieving this synergy is by placing the human operator on the loop, i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, explanations can play an important role in allowing the human operator to understand why the system is making certain decisions and in improving the operator's knowledge of the system. This, in turn, may improve the operator's ability to intervene and, if necessary, override the decisions being made by the system. However, explanations incur costs, in terms of delayed actions and the possibility that a human may make a bad judgment. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, what kind of explanation should be provided to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive-system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of their content, effect, and cost. We then present a dynamic system adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used to improve overall system utility. We evaluate our explanation framework in the context of a realistic industrial control system with adaptive behaviors.
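The abstract does not specify the decision rule; one plausible reading, sketched below with assumed placeholder probabilities and utilities, is an expected-utility comparison in which an explanation raises the chance that the operator intervenes correctly but delays action:

```python
# Hypothetical expected-utility sketch: explain only when the expected
# gain from a better-informed operator outweighs the cost of the delay.
# All numbers are assumed placeholders, not the paper's actual model
# (the paper uses a probabilistic reasoning technique).
def expected_utility(p_correct: float, u_correct: float,
                     u_wrong: float, delay_cost: float = 0.0) -> float:
    return p_correct * u_correct + (1 - p_correct) * u_wrong - delay_cost

def should_explain(p_no_expl, p_with_expl, u_correct, u_wrong, delay_cost):
    u_no = expected_utility(p_no_expl, u_correct, u_wrong)
    u_yes = expected_utility(p_with_expl, u_correct, u_wrong, delay_cost)
    return u_yes > u_no

# Example: an explanation raises the chance of a correct intervention
# from 0.6 to 0.9 but costs 2 utility units of delay.
print(should_explain(0.6, 0.9, u_correct=10, u_wrong=-5, delay_cost=2))
# 0.9*10 + 0.1*(-5) - 2 = 6.5  >  0.6*10 + 0.4*(-5) = 4.0  -> True
```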
Cloud data integrity verification is an important means of ensuring data security. Traditionally, public key infrastructure (PKI) is used to manage user keys, but this approach suffers from certificate verification overhead and high key-management costs. In this paper, RSA signatures are used to construct a new identity-based cloud audit protocol, which avoids the problems caused by PKI, supports forward security, and reduces the loss caused by key exposure. Security analysis shows that the proposed scheme can effectively resist forgery attacks and support forward security.
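The abstract does not describe the construction; as background, the generic forward-security idea it invokes (evolving the signing key through a one-way function so that exposing the current key does not compromise earlier periods) can be sketched as follows. This illustrates only the concept, not the paper's RSA-based scheme:

```python
# Generic forward-security illustration (not the paper's RSA scheme):
# the secret key for period t+1 is derived from period t's key via a
# one-way hash, and the old key is erased, so a key exposed at period t
# cannot be rolled back to forge signatures for earlier periods.
import hashlib

def evolve_key(sk_t: bytes) -> bytes:
    """One-way key update: sk_{t+1} = H("evolve" || sk_t)."""
    return hashlib.sha256(b"evolve" + sk_t).digest()

sk = b"initial-period-secret"  # assumed initial key material
for period in range(3):
    print(period, sk.hex()[:16])
    sk = evolve_key(sk)        # previous key is discarded
```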
In Healthcare 4.0 ecosystems, authentication of healthcare information allows health stakeholders to be assured that data originates from the correct source. Recently, biometric-based authentication has become a preferred choice, but as the templates are stored on central servers, there is a high chance of copying and generating fake biometrics. An adversary can forge the biometric pattern and gain access to critical health systems. To address this limitation, the paper proposes a scheme, PHBio, in which an encryption-based biometric system is applied before the template is stored on the server. Once a user provides his or her biometrics, the authentication process does not decrypt the data but instead uses the homomorphic Paillier cryptosystem. The scheme comprises an encryption part and a comparison part based on the Euclidean distance (EUD) between the user input and the template stored on the server. We take the minimum distance and compare it against a predefined threshold to confirm a biometric match and authenticate the user. The scheme is evaluated on parameters such as accuracy, false rejection rate (FRR), and execution time. The results indicate the validity of the scheme in real-time health setups.
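The abstract leaves the protocol details out; a minimal sketch of matching a plaintext probe against a Paillier-encrypted template, using the open-source python-paillier (phe) library and the identity ||x − y||² = Σxᵢ² − 2Σxᵢyᵢ + Σyᵢ², might look as follows (the library choice, vectors, and threshold are assumptions, not the paper's protocol):

```python
# Hypothetical sketch: match a plaintext probe y against a Paillier-
# encrypted template x without decrypting the template. Paillier is only
# additively homomorphic, so the enrollee also stores Enc(sum x_i^2);
# the server then builds Enc(||x - y||^2) from ciphertext additions and
# scalar multiplications alone.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair()

template = [3.0, 1.0, 4.0]                     # enrolled feature vector x
enc_x = [pub.encrypt(v) for v in template]     # Enc(x_i), kept on server
enc_x_sq_sum = pub.encrypt(sum(v * v for v in template))  # Enc(sum x_i^2)

probe = [3.0, 1.5, 3.5]                        # fresh measurement y
# Enc(||x-y||^2) = Enc(sum x_i^2) + sum y_i^2 + sum Enc(x_i)*(-2*y_i)
enc_dist = enc_x_sq_sum + sum(y * y for y in probe)
for e_xi, y_i in zip(enc_x, probe):
    enc_dist = enc_dist + e_xi * (-2 * y_i)

THRESHOLD = 1.0                                # assumed match threshold
match = priv.decrypt(enc_dist) <= THRESHOLD    # only the key owner decrypts
print(match)  # squared distance = 0 + 0.25 + 0.25 = 0.5 -> True
```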
Intrusion detection systems (IDSs) are widely deployed in industrial control systems to protect network security. IDSs typically generate a huge number of alerts, which are time-consuming for system operators to process. Most of the alerts are individually insignificant false alarms. However, simply discarding these alerts is not the best solution, as they can still provide useful information about the network situation. Based on a study of the characteristics of alerts in industrial control systems, we adopt an enhanced method of exponentially weighted moving average (EWMA) control charts to help operators process alerts. We classify all detection signatures as regular or irregular according to their frequencies, set multiple control limits to detect anomalies, and monitor regular signatures for network security situational awareness. Extensive experiments have been performed using real-world alert data. The results demonstrate that the proposed enhanced EWMA method can greatly reduce the volume of alerts to be processed while preserving significant abnormal information.
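The abstract does not give the chart parameters; a standard EWMA control chart, on which the enhanced method builds, can be sketched as follows (the smoothing factor λ and width L are assumed example values, not the paper's settings):

```python
# Standard EWMA control chart over per-interval alert counts: flag an
# interval when the smoothed statistic z_t leaves the control limits.
import math

def ewma_chart(counts, mu, sigma, lam=0.2, L=3.0):
    """Yield (t, z_t, is_anomaly) for a series of alert counts, using
    z_t = lam*x_t + (1-lam)*z_{t-1} and time-varying control limits
    mu +/- L*sigma*sqrt(lam/(2-lam) * (1 - (1-lam)^(2t)))."""
    z = mu  # initialize the EWMA statistic at the in-control mean
    for t, x in enumerate(counts, start=1):
        z = lam * x + (1 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        yield t, z, abs(z - mu) > half_width

# Example: a burst of alerts around t=6..8 trips the chart.
counts = [5, 6, 4, 5, 6, 20, 25, 22, 6, 5]
for t, z, anomaly in ewma_chart(counts, mu=5.0, sigma=1.0):
    print(f"t={t:2d}  z={z:5.2f}  anomaly={anomaly}")
```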
Code-graph-based software defect prediction (SDP) methods have become a research focus in the SDP field. Among them, the Code Property Graph is used as a data representation for code defects due to its ability to characterize the structural features and dependencies of defective code. However, owing to the coarse granularity of the Code Property Graph, redundant information unrelated to defects is often attached to the characterization of software defects. Thus, how to locate software defects at a finer granularity within the Code Property Graph remains an open problem. Static analysis is a technique for identifying software defects using predefined defect rules, and there are many proven static analysis tools in industry. In this paper, we propose a method for locating specific types of defects in the Code Property Graph based on the results of a static analysis tool. Experiments show that this location method can effectively predict the location of specific defect types in real software programs.
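The abstract does not detail the mapping; a minimal sketch of the underlying idea (projecting a static analyzer's file/line warnings onto Code Property Graph nodes carrying location metadata) might look like this, with the node attributes and warning format as assumptions:

```python
# Hypothetical sketch: select the Code Property Graph nodes that match a
# static-analysis warning's location, yielding a finer-grained defect
# subgraph. Node attributes ("file", "line") are assumed metadata.
import networkx as nx

def defect_nodes(cpg: nx.DiGraph, warning: dict) -> list:
    """Return CPG node ids whose recorded location matches the warning."""
    return [n for n, attrs in cpg.nodes(data=True)
            if attrs.get("file") == warning["file"]
            and attrs.get("line") == warning["line"]]

# Toy example: a two-node graph and one warning.
cpg = nx.DiGraph()
cpg.add_node("call:strcpy", file="util.c", line=42)
cpg.add_node("decl:buf", file="util.c", line=40)
cpg.add_edge("decl:buf", "call:strcpy")  # data dependency

warning = {"file": "util.c", "line": 42, "type": "BUFFER_OVERRUN"}
print(defect_nodes(cpg, warning))  # ['call:strcpy']
```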