Bibliography
"Moving fast, and breaking things", instead of "being safe and secure", is the credo of the IT industry. However, if we look at the wide societal impact of IT security incidents in the past years, it seems like it is no longer sustainable. Just like in the case of Equifax, people simply forget updates, just like in the case of Maersk, companies do not use sufficient network segmentation. Security certification does not seem to help with this issue. After all, Equifax was IS027001 compliant.In this paper, we take a look at how we handle and (do not) learn from security incidents in IT security. We do this by comparing IT security incidents to early and later aviation safety. We find interesting parallels to early aviation safety, and outline the governance levers that could make the world of IT more secure, which were already successful in making flying the most secure way of transportation.
Bitcoin was the first successful decentralized cryptocurrency and remains the most popular of its kind to this day. Despite the benefits of its blockchain, Bitcoin still faces serious scalability issues, most importantly its ever-increasing blockchain size. While alternative designs have introduced schemes to periodically create snapshots and thereafter prune older blocks, already-deployed systems such as Bitcoin are often considered incapable of adopting such approaches. In this work, we revisit this popular belief and present CoinPrune, a snapshot-based pruning scheme that is fully compatible with Bitcoin. CoinPrune can be deployed through an opt-in velvet fork, i.e., without impeding the established Bitcoin network. By requiring miners to publicly announce and jointly reaffirm recent snapshots on the blockchain, CoinPrune establishes trust in the snapshots' correctness even in the presence of powerful adversaries. Our evaluation shows that CoinPrune already reduces the storage requirements of Bitcoin by two orders of magnitude today, with further relative savings as the blockchain grows. In our experiments, nodes only have to fetch and process 5 GiB instead of 230 GiB of data when joining the network, reducing the synchronization time on powerful devices from currently 5 h to 46 min, with even larger savings for less powerful devices.
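To make the snapshot-based pruning idea concrete, here is a minimal Python sketch (not CoinPrune's actual data formats or parameters): a snapshot identifier is derived from the current UTXO state, miners' reaffirmations are counted, and block bodies are dropped only once enough reaffirmations are observed. The names `utxo_set`, `REAFFIRMATION_THRESHOLD`, and the block dictionaries are illustrative assumptions.

```python
import hashlib
import json

# Illustrative threshold; CoinPrune's real reaffirmation window differs.
REAFFIRMATION_THRESHOLD = 100

def snapshot_id(utxo_set: dict) -> str:
    """Derive a deterministic identifier for the current UTXO state."""
    serialized = json.dumps(utxo_set, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest()

def may_prune(announced_id: str, reaffirming_blocks: list) -> bool:
    """Only trust a snapshot once enough miners have reaffirmed the same identifier."""
    confirmations = sum(1 for b in reaffirming_blocks if b.get("snapshot") == announced_id)
    return confirmations >= REAFFIRMATION_THRESHOLD

def prune(chain: list, snapshot_height: int) -> list:
    """Drop block bodies below the reaffirmed snapshot; a joining node then only
    needs the snapshot plus the blocks mined after it."""
    return [block for block in chain if block["height"] >= snapshot_height]
```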
We propose a high-efficiency Early-Complete Brute Force Elimination method that speeds up the analysis flow for Camouflaged Integrated Circuits (ICs). The proposed method targets the security qualification of Camouflaged IC netlists for Intellectual Property (IP) protection. It has two main features. First, it immediately eliminates incorrect combinations of Camouflage gates from the rest of the computation, concentrating resources on the remaining potentially correct combinations. Second, it completes early, i.e., it reveals the correct Camouflage gates as soon as all incorrect combinations have been eliminated, increasing the speed of the overall security analysis. We implement the algorithm in Python and test it on three circuits, including ISCAS’89 benchmarks. The simulation results show that, on average, our method requires 71% fewer trials and 79% less run time than the conventional method to reveal the correct Camouflage gates in a Camouflaged IC netlist.
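The elimination idea can be sketched in a few lines of Python. This is not the paper's implementation; the two-gate netlist, the candidate function lists, and the test patterns are hypothetical, but the sketch shows both features: a candidate combination is discarded on its first mismatch, and the loop stops as soon as a single combination survives.

```python
from itertools import product

# Hypothetical example: each camouflaged cell may implement one of a few functions.
CANDIDATE_FUNCS = {
    "g1": [lambda a, b: a & b, lambda a, b: a | b, lambda a, b: a ^ b],
    "g2": [lambda a, b: a | b, lambda a, b: ~(a & b) & 1],  # OR or NAND
}

def simulate(assignment, inputs):
    """Toy two-gate netlist: out = g2(g1(a, b), c)."""
    a, b, c = inputs
    g1_out = CANDIDATE_FUNCS["g1"][assignment["g1"]](a, b)
    return CANDIDATE_FUNCS["g2"][assignment["g2"]](g1_out, c)

def resolve(test_patterns):
    """test_patterns: (inputs, expected_output) pairs from a functional reference."""
    gates = sorted(CANDIDATE_FUNCS)
    candidates = list(product(*(range(len(CANDIDATE_FUNCS[g])) for g in gates)))
    for inputs, expected in test_patterns:
        # Immediate elimination: drop a combination on its first mismatch.
        candidates = [c for c in candidates
                      if simulate(dict(zip(gates, c)), inputs) == expected]
        # Early completion: stop as soon as only one combination survives.
        if len(candidates) <= 1:
            break
    return [dict(zip(gates, c)) for c in candidates]
```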
The use of biometrics in security applications may be vulnerable to several hacking threats. Cancellable biometrics has therefore emerged as a suitable solution to this problem. This paper presents a one-way cancellable biometric transform that relies on 3D chaotic maps for face and fingerprint encryption. It aims to prevent cloning of the original biometrics and to allow the templates used by each user to vary across applications. The permutations achieved with the chaotic maps guarantee high security of the biometric templates, especially with the 3D implementation of the encryption algorithm. In addition, the paper presents a hardware implementation of this framework. The proposed algorithm also achieves good performance in the presence of low and moderate levels of noise. An experimental version of the proposed cancellable biometric system has been implemented on an FPGA model. The obtained results demonstrate the strong performance of the proposed cancellable biometric system.
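The core idea of a key-dependent, revocable permutation can be illustrated with the sketch below. It uses a 1D logistic map as a simple stand-in for the paper's 3D chaotic maps, and the seed `user_key` is an assumed user-specific secret; issuing a new key yields a new, unrelated template from the same biometric.

```python
import numpy as np

def chaotic_permutation(length: int, x0: float, r: float = 3.99) -> np.ndarray:
    """Generate a key-dependent permutation from a 1D logistic map
    (a stand-in for the paper's 3D chaotic maps; x0 acts as the user key)."""
    x, seq = x0, np.empty(length)
    for i in range(length):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.argsort(seq)  # sorting the chaotic sequence yields a permutation

def cancellable_template(biometric: np.ndarray, user_key: float) -> np.ndarray:
    """Permute the flattened biometric image; revocation = issuing a new key."""
    flat = biometric.ravel()
    perm = chaotic_permutation(flat.size, user_key)
    return flat[perm].reshape(biometric.shape)

# Example: the same face image yields different templates under different keys.
face = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
t1 = cancellable_template(face, 0.31415)
t2 = cancellable_template(face, 0.27182)
```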
This paper presents a hybrid system to minimize the damage caused by zero-day attacks. The proposed system consists of a signature-based NIDPS, a honeypot, and a temporary queue. When the system receives a packet from the external network, any packet known to be an attack packet is dropped by the signature-based NIDPS. Packets that pass are redirected to the honeypot, because the system assumes that every packet passing the NIDPS may carry a zero-day attack. Each redirected packet is stored in the temporary queue, and if the packet shows signs of a zero-day attack, the honeypot extracts its signature. The system then creates a rule matching the NIDPS rule format from the extracted signatures and applies the rule update. After the rule update is completed, the temporary queue sends the stored packets back to the NIDPS, so that packets carrying an attack risk can be dropped. The proposed system reduces the time needed to create and apply rules that respond to unknown attack packets, and it can drop packets that carry a risk of zero-day attack in real time.
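The packet flow described above can be sketched as follows. This is a simplified model, not the paper's implementation: rules are plain payload strings, the honeypot is a stand-in function, and the `malicious_behaviour` flag replaces real behavioural analysis.

```python
from collections import deque

signature_rules = {"known_attack_payload"}   # simplified NIDPS rule set
temporary_queue = deque()

def nidps_drop(packet: dict) -> bool:
    """Signature-based NIDPS: drop packets matching any known rule."""
    return packet["payload"] in signature_rules

def honeypot_inspect(packet: dict):
    """Honeypot stand-in: return a new signature if the packet behaves
    maliciously, else None (real analysis would be behaviour-based)."""
    return packet["payload"] if packet.get("malicious_behaviour") else None

def handle(packet: dict):
    if nidps_drop(packet):
        return "dropped"
    # Every packet that passes the NIDPS is treated as a potential zero-day.
    temporary_queue.append(packet)
    new_signature = honeypot_inspect(packet)
    if new_signature:
        signature_rules.add(new_signature)    # rule creation and update
    # After the update, re-send queued packets through the NIDPS.
    verdicts = []
    while temporary_queue:
        queued = temporary_queue.popleft()
        verdicts.append("dropped" if nidps_drop(queued) else "forwarded")
    return verdicts
```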
In this paper, an intrusion detection system to safeguard computer software is proposed. The detection is based on the negative selection algorithm, inspired by the human immune system, and is composed of two stages: receptor generation and anomaly detection. Experimental results of the proposed system are presented, analyzed, and discussed.
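A minimal sketch of the two stages of negative selection is shown below, assuming binary-string samples and a simple r-contiguous-bits matching rule; the detector length and threshold are arbitrary illustrative choices, not the paper's settings.

```python
import random

def matches(detector: str, sample: str, threshold: int = 4) -> bool:
    """r-contiguous-bits rule: match if at least `threshold` contiguous bits agree."""
    run = best = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        best = max(best, run)
    return best >= threshold

def generate_receptors(self_set, n_detectors=50, length=8):
    """Stage 1: keep only random detectors that match no 'self' sample."""
    receptors = []
    while len(receptors) < n_detectors:
        candidate = "".join(random.choice("01") for _ in range(length))
        if not any(matches(candidate, s) for s in self_set):
            receptors.append(candidate)
    return receptors

def detect(receptors, sample) -> bool:
    """Stage 2: a sample is flagged as anomalous if any receptor matches it."""
    return any(matches(r, sample) for r in receptors)
```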
Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing the issue of trust due to the high-risk nature of erroneous decisions. The present study aims to investigate decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed and presented. An online and an in-person experiment were conducted to examine the impacts of mistakes made by the self-driving AI agent on participants’ decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions in a time-limited situation. Behavior analysis indicated a similar relative trend in the decisions across these two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants’ trust. In addition, participants reported similar overall experiences and feelings across the two experimental conditions. The findings in this study add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research work in the field of human-robot trust.
As we expect the presence of autonomous robots in our everyday life to increase, we must consider that people will not only have to accept robots as a fundamental part of their lives, but will also have to trust them to reliably and securely engage in collaborative tasks. Several studies have shown that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will more readily gain people's trust. In this study, we aimed to assess whether the use of social behaviours and natural communication can affect humans' sense of trust and companionship towards robots. We conducted a between-subjects study in which participants' trust was tested in three scenarios of increasing trust criticality (low, medium, high), interacting either with a social or a non-social robot. Our findings showed that participants trusted a social and a non-social robot equally in the low- and medium-consequence scenarios. In contrast, participants' choice of whether to trust the robot in the more sensitive task was affected more by a robot that expressed social cues, with a consequent decrease in their trust in the robot.
With self-driving cars making their way onto our roads, we ask not what it would take for them to gain acceptance among consumers, but what impact they may have on other drivers. How they will be perceived and whether they will be trusted will likely have a major effect on traffic flow and vehicular safety. This work first undertakes an exploratory factor analysis to validate a trust scale for human-robot interaction and shows how previously validated metrics and general trust theory support a more complete model of trust that has increased applicability in the driving domain. We experimentally test this expanded model in the context of human-automation interaction during simulated driving, revealing how using these dimensions uncovers significant biases within human-robot trust that may have particularly deleterious effects when it comes to sharing our future roads with automated vehicles.
This is an innovative practice full paper. In past projects, we have successfully used a private TOR (anonymity network) platform that enabled our students to explore the end-to-end inner workings of the TOR anonymity network through a number of controlled hands-on lab assignments. These have satisfied the needs of curricula focusing on networking functions and algorithms. To extend the use and application of the private TOR platform into cryptography courses, there is a pressing need to enhance the platform to allow the development of hands-on lab assignments on the cryptographic algorithms and methods used in the creation of TOR secure connections and end-to-end circuits for anonymity. In tackling this challenge, and since TOR is open-source software, we identify the cryptographic functions called by the TOR algorithms when establishing TLS connections and creating end-to-end TOR circuits, as well as when tearing them down. We instrumented these functions with the appropriate code to log the cryptographic keys dynamically created at all nodes involved in the creation of the end-to-end circuit between the Client and the exit relay (connected to the target server). We implemented a set of pedagogical lab assignments on a private TOR platform and present them in this paper. Using these assignments, students are able to investigate and validate the cryptographic procedures applied in the establishment of the initial TLS connection, the creation of the first leg of a TOR circuit, and the extension of the circuit through additional relays (at least two relays). More advanced assignments challenge the students to unwrap the traffic sent from the Client to the exit relay at all onion-skin layers and compare it with the actual traffic delivered to the target server.
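The layered unwrapping that the advanced assignments target can be illustrated with a small sketch. It is only an analogy: Fernet from the `cryptography` package is used for readability, whereas TOR itself uses AES-CTR inside TLS; the three relay keys stand in for the per-hop keys logged by the instrumented functions.

```python
# Onion-layer wrapping analogous to what students unwrap in the lab assignments.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # guard, middle, exit

def wrap(payload: bytes, keys) -> bytes:
    """Client side: encrypt for the exit relay first, then middle, then guard."""
    for key in reversed(keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def unwrap_at_relays(cell: bytes, keys) -> bytes:
    """Each relay removes exactly one onion-skin layer with its own key."""
    for key in keys:
        cell = Fernet(key).decrypt(cell)
    return cell

cell = wrap(b"GET / HTTP/1.1", relay_keys)
assert unwrap_at_relays(cell, relay_keys) == b"GET / HTTP/1.1"
```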
Privacy preservation is a challenging task given the huge amount of data available on social media. Data stored in distributed or cloud environments must be kept confidential. In addition, representing the voluminous data as a graph makes keyword search convenient. The proposed work first reads the social media data and converts it into a graph. To protect the data from active attacks, the Advanced Encryption Standard (AES) algorithm is used to perform graph encryption. Search is then carried out using two algorithms: the kNK keyword search algorithm and the top-k nearest keyword search algorithm. The first scheme fetches all the data corresponding to a keyword, while the second fetches the nearest neighbor, which increases the efficiency of the search process. A shortest-path algorithm is used to find the minimum distance, and the results are produced based on this minimum value. The proposed approach shows high performance for graph generation and searching, and moderate performance for graph encryption.
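A simplified sketch of the pipeline is given below, assuming a toy social-media graph and a top-1 nearest-keyword query via breadth-first shortest path; node payloads are encrypted with Fernet (which uses AES internally) as a stand-in for the paper's graph encryption, and the node names and keywords are invented for illustration.

```python
from collections import deque
from cryptography.fernet import Fernet   # Fernet uses AES under the hood

key = Fernet.generate_key()
fernet = Fernet(key)

# Toy social-media graph: node -> (keywords, neighbors).
graph = {
    "u1": ({"music", "travel"}, ["u2", "u3"]),
    "u2": ({"sports"}, ["u1", "u4"]),
    "u3": ({"travel"}, ["u1"]),
    "u4": ({"music"}, ["u2"]),
}

# Graph encryption: store node payloads encrypted, keep structure for traversal.
encrypted_nodes = {n: fernet.encrypt(" ".join(sorted(kw)).encode())
                   for n, (kw, _) in graph.items()}

def nearest_keyword(start: str, keyword: str):
    """Top-1 nearest-keyword search via BFS (unweighted shortest path)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        keywords = fernet.decrypt(encrypted_nodes[node]).decode().split()
        if keyword in keywords:
            return node
        for neighbor in graph[node][1]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return None

print(nearest_keyword("u3", "music"))   # -> "u1"
```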