Bibliography
Despite advances in autonomous functionality for robots, teleoperation remains a necessary means of performing delicate tasks in safety-critical contexts such as explosive ordnance disposal (EOD) and in ambiguous environments. Immersive stereoscopic displays have been proposed and developed for this purpose, but they bring their own specific problems, e.g., simulator sickness. This work builds upon standardized test environments to yield reproducible comparisons between different robotic platforms. The focus was placed on testing three optronic systems with differing degrees of immersion: (1) a laptop display showing multiple monoscopic camera views, (2) an off-the-shelf virtual reality headset coupled with a pan-tilt-based stereoscopic camera, and (3) a so-called Telepresence Unit providing fast pan, tilt, and yaw rotation, a stereoscopic view, and spatial audio. The stereoscopic systems yielded significantly faster task completion only for the maneuvering task. As expected, they also induced simulator sickness, among other effects; however, the amount of simulator sickness varied between the two stereoscopic systems. The collected data suggest that a higher degree of immersion, combined with careful system design, can reduce the expected increase in simulator sickness relative to the monoscopic camera baseline while making the interface subjectively more effective for certain tasks.
The security problem of networked control systems (NCSs) suffering denial-of-service (DoS) attacks under incomplete information is investigated in this paper. Data transmission among the components of an NCS may be blocked by DoS attacks. We use the concept of a security level to describe the degree of security of the different components of an NCS, and an intrusion detection system (IDS) monitors the invalid data generated by DoS attacks. At each time slot, the defender chooses which component to monitor while the attacker chooses which component to attack. A one-shot game between attacker and defender is built, and both the complete-information and incomplete-information cases are considered. Furthermore, a repeated-game model in which beliefs are updated via Bayes' rule is also established. Finally, a numerical example is provided to illustrate the effectiveness of the proposed method.
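The repeated-game model hinges on the defender updating its belief about the attacker after each observation. The sketch below illustrates a generic Bayes'-rule belief update of this kind; the attacker types, components, and likelihood values are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the Bayesian belief update in a repeated
# attacker-defender game; all types and probabilities are illustrative.

def bayes_update(prior, likelihoods, observation):
    """Update the belief over attacker types after observing an IDS alert.

    prior: dict type -> P(type)
    likelihoods: dict type -> {observation: P(observation | type)}
    """
    joint = {t: prior[t] * likelihoods[t][observation] for t in prior}
    total = sum(joint.values())
    return {t: joint[t] / total for t in joint}

# Two attacker types with different preferences for which NCS
# component (sensor channel vs. actuator channel) to jam.
prior = {"prefers_sensor": 0.5, "prefers_actuator": 0.5}
likelihoods = {
    "prefers_sensor":   {"alert_sensor": 0.8, "alert_actuator": 0.2},
    "prefers_actuator": {"alert_sensor": 0.3, "alert_actuator": 0.7},
}

belief = prior
for obs in ["alert_sensor", "alert_sensor", "alert_actuator"]:
    belief = bayes_update(belief, likelihoods, obs)
print(belief)  # belief shifts toward the type consistent with the alerts
```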
E-health systems, specifically Telecare Medical Information Systems (TMIS), are deployed to provide patients suffering from specific diseases with healthcare services that are usually based on remote monitoring. Establishing efficient, convenient, and secure connections between users and medical servers over insecure channels is therefore a major issue for such services. In this context, owing to the characteristics of biometrics, many biometrics-based three-factor user authentication schemes have been proposed in the literature to secure user/server communication in medical services. In this paper, we briefly review the most interesting proposals. We then propose a new three-factor authentication and key agreement scheme for TMIS. Our scheme not only fixes the security drawbacks of some of the studied related work but also offers additional significant features while minimizing resource consumption. In addition, we perform a formal verification using the widely accepted security verification tool AVISPA to demonstrate that our proposed scheme is secure. Our comparative performance analysis also reveals that the proposed scheme consumes fewer resources than related proposals.
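As a rough illustration of how such schemes bind three factors together, the sketch below derives a login verifier from a password, a smart-card secret, and a biometric key, using only Python's standard library. All names and the derivation layout are assumptions; the paper's actual scheme and message flow differ.

```python
# Minimal sketch of combining three authentication factors in a
# TMIS-style scheme: a password, a secret on a smart card, and a key
# extracted from a biometric template. Layout is an assumption.
import hashlib, hmac, os

def derive_verifier(password: bytes, card_secret: bytes, bio_key: bytes) -> bytes:
    # Bind all three factors so that losing any one is not enough.
    inner = hashlib.sha256(password + bio_key).digest()
    return hmac.new(card_secret, inner, hashlib.sha256).digest()

card_secret = os.urandom(32)   # provisioned onto the smart card
bio_key = os.urandom(32)       # output of a fuzzy extractor in practice
verifier = derive_verifier(b"patient-pw", card_secret, bio_key)

# Login succeeds only if all three factors reproduce the same verifier.
attempt = derive_verifier(b"patient-pw", card_secret, bio_key)
print(hmac.compare_digest(verifier, attempt))  # True
```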
This is an innovative practice full paper. In past projects, we have successfully used a private TOR (anonymity network) platform that enabled our students to explore the end-to-end inner workings of the TOR anonymity network through a number of controlled hands-on lab assignments. These have satisfied the needs of curricula focusing on networking functions and algorithms. To extend the use of the private TOR platform into cryptography courses, the platform must be enhanced to support hands-on lab assignments on the cryptographic algorithms and methods used in creating TOR secure connections and end-to-end circuits for anonymity. In tackling this challenge, and since TOR is open-source software, we identify the cryptographic functions called by the TOR algorithms when establishing TLS connections and creating end-to-end TOR circuits, as well as when tearing them down. We instrumented these functions with code that logs the cryptographic keys dynamically created at all nodes involved in building the end-to-end circuit between the client and the exit relay (connected to the target server). We implemented a set of pedagogical lab assignments on a private TOR platform and present them in this paper. Using these assignments, students can investigate and validate the cryptographic procedures applied in establishing the initial TLS connection, creating the first leg of a TOR circuit, and extending the circuit through additional relays (at least two). More advanced assignments challenge the students to unwrap the traffic sent from the client to the exit relay at each onion-skin layer and compare it with the actual traffic delivered to the target server.
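The layered wrapping that these assignments ask students to unwrap can be illustrated compactly. The toy below (using the third-party `cryptography` package) mirrors only the layering idea; real TOR uses AES-CTR inside its relay cell format, so this is not the wire protocol.

```python
# Illustrative onion-layer wrapping/unwrapping for a three-hop circuit.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

hop_keys = [os.urandom(16) for _ in range(3)]  # guard, middle, exit
nonce = os.urandom(12)

def wrap(payload: bytes) -> bytes:
    # The client encrypts for the exit first, then middle, then guard.
    for key in reversed(hop_keys):
        payload = AESGCM(key).encrypt(nonce, payload, None)
    return payload

def unwrap(cell: bytes) -> bytes:
    # Each relay strips exactly one layer with its own circuit key.
    for key in hop_keys:
        cell = AESGCM(key).decrypt(nonce, cell, None)
    return cell

cell = wrap(b"GET / HTTP/1.1")
print(unwrap(cell))  # b'GET / HTTP/1.1' emerges at the exit relay
```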
These days the digitization process is everywhere, spreading across central governments and local authorities. The hope is that using open government data for scientific research will enhance the public good and social justice. Given the recently adopted European General Data Protection Regulation, the big challenge in Portugal and other European countries is how to strike the right balance between personal data privacy and the value of data for research. This work presents a sensitivity study of a data anonymization procedure applied to real open government data from the Brazilian higher education evaluation system. The ARX k-anonymization algorithm was applied with and without generalization of some variables of research value. Analysis of the amount of information lost and the risk of re-identification suggests that the anonymization process may lead to under-representation of minorities and sociodemographically disadvantaged groups. This will enable scientists to better balance risk, data usability, and contributions to public-good policies and practices.
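For readers unfamiliar with the property that ARX enforces, the following toy checks k-anonymity after a simple generalization step; the records, quasi-identifiers, and generalization hierarchy are invented, not taken from the Brazilian dataset.

```python
# Toy k-anonymity check over generalized quasi-identifiers.
from collections import Counter

records = [
    {"age": 23, "zip": "70001", "degree": "CS"},
    {"age": 25, "zip": "70002", "degree": "CS"},
    {"age": 24, "zip": "70001", "degree": "Law"},
]

def generalize(rec):
    # Generalize quasi-identifiers: age to a 10-year band, zip to 3 digits.
    return (rec["age"] // 10 * 10, rec["zip"][:3])

def is_k_anonymous(rows, k):
    counts = Counter(generalize(r) for r in rows)
    return all(c >= k for c in counts.values())

print(is_k_anonymous(records, 2))  # True after generalization
# Small groups (e.g., minorities) are the first to be suppressed or
# over-generalized, which is exactly the under-representation risk
# discussed above.
```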
Data volumes, and the analyses performed on them, grow day by day due to the pervasiveness of computing devices, yet people are reluctant to share their information on online portals or in surveys for fear of its misuse: sensitive information such as credit card numbers, medical conditions, and other personal details can endanger individuals and society if it falls into the wrong hands. Privacy preservation has therefore become a prerequisite for storing data in a repository: the data should be made indistinguishable, encrypted when stored, and decrypted only when needed for analysis in data mining. When storing raw data about individuals, it is important to remove person-identifiable attributes such as name and employee ID, while the remaining attributes pertaining to the person should be encrypted. The methodologies implemented for this purpose make the data in the repository secure and ease the task of privacy-preserving data mining (PPDM).
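A minimal sketch of the storage step described above, assuming the third-party `cryptography` package: direct identifiers are dropped, a keyed pseudonym preserves linkability, and the remaining personal attributes are encrypted at rest. Field names are illustrative.

```python
# Pseudonymize-then-encrypt sketch for a PPDM data repository.
import hashlib, hmac, json, os
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)
pseudonym_salt = os.urandom(16)

def store_record(record: dict) -> dict:
    # Replace the direct identifier with a keyed pseudonym...
    pid = hmac.new(pseudonym_salt, record.pop("employee_id").encode(),
                   hashlib.sha256).hexdigest()[:16]
    record.pop("name", None)
    # ...and encrypt everything else before it reaches the repository.
    return {"pid": pid,
            "blob": fernet.encrypt(json.dumps(record).encode())}

stored = store_record({"name": "A. Doe", "employee_id": "E1234",
                       "condition": "diabetes"})
# Analysts decrypt only when a mining task actually needs the values.
print(json.loads(fernet.decrypt(stored["blob"])))
```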
To the best of our knowledge, the p-sensitive k-anonymity model is a sophisticated model for resisting linking attacks and homogeneity attacks in data publishing. However, if the distribution of sensitive values is skewed, the model has difficulty defending against skew attacks and even remains exposed to sensitive attacks. In practice, the privacy requirements of different sensitive values are not always identical, and a "one size fits all" unified privacy protection level may cause unnecessary information loss. To address these problems, this paper quantifies privacy requirements with the concept of IDF and pays closer attention to sensitive groups. Two enhanced anonymity models with personalized protection, the (p, α_isg)-sensitive k-anonymity model and the (p_i, α_isg)-sensitive k-anonymity model, are then proposed to resist skew attacks and sensitive attacks. Furthermore, two clustering algorithms, one with global search and one with local search, are designed to implement the models. Experimental results show that the two enhanced models achieve substantially better privacy at the expense of only a little data utility.
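The IDF-based quantification can be illustrated with a toy corpus of sensitive values: rarer values receive higher weights and can therefore demand stronger protection. The corpus and weighting below are our assumptions; the paper's exact formula may differ.

```python
# Toy IDF weighting over a column of sensitive values.
import math
from collections import Counter

sensitive = ["flu", "flu", "flu", "flu", "HIV", "flu", "cancer", "flu"]
counts = Counter(sensitive)
n = len(sensitive)

idf = {v: math.log(n / c) for v, c in counts.items()}
print(sorted(idf.items(), key=lambda kv: -kv[1]))
# Rare values such as 'HIV' and 'cancer' score highest, i.e., they call
# for the strictest per-value requirement in the personalized models.
```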
Instant messaging is a widely used communication application. According to the wearesocial.com report, three of the five most-used social media platforms are chat or instant-messaging services. Instant messaging is chosen for communication because it offers security features such as login with a one-time password (OTP), end-to-end encryption, and even two-factor authentication. However, instant messaging applications are still vulnerable to account theft, which occurs when a user loses their cellphone, whether the phone is locked or not. As a result, thieves can read confidential messages and spread fake news on behalf of the victim. In this research, instant messaging security is implemented using hybrid encryption and two-factor authentication, designed to be interdependent. Both methods are realized in two implementation designs: securing login, and securing the sending and receiving of messages. For login security, a QR code is sent via email. For messaging, decryption is performed only after the user authenticates with a fingerprint. Hybrid encryption for message security uses RSA-2048 and AES-128. In ten simulated account-theft attempts, the implementation design was shown to reduce the impact of account theft.
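A minimal sketch of the RSA-2048 + AES-128 hybrid combination named above, using the `cryptography` package. The paper does not specify modes or message framing, so AES-GCM and OAEP padding are our assumptions.

```python
# Hybrid encryption: AES-128 encrypts the message, RSA-2048 wraps the key.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_message(plaintext: bytes):
    session_key = os.urandom(16)                  # fresh AES-128 session key
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    wrapped = recipient_key.public_key().encrypt(session_key, OAEP)
    return wrapped, nonce, ciphertext

def decrypt_message(wrapped, nonce, ciphertext):
    session_key = recipient_key.decrypt(wrapped, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

print(decrypt_message(*encrypt_message(b"confidential chat message")))
```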
Enterprises around the globe have been searching for a way to securely adopt Android™ devices for work but have shied away from the platform due to ongoing fragmentation and security concerns. Numerous vulnerabilities have been reported in Android smartphones since the Android Lollipop release. Smartphones can be compromised by installing a malicious application, visiting a malicious web page, receiving a crafted MMS, interacting with plug-ins, certificate forging, checksum collisions, inter-process communication (IPC) abuse, and more. To highlight this issue, a manual analysis of Android vulnerabilities was performed using data from the National Vulnerability Database (NVD) and the Android Vulnerability website. This paper covers the vulnerabilities that put dual-persona support at risk in Android 5 and above, through December 2017. In our security threat analysis, we identify a comprehensive list of Android vulnerabilities, the vulnerable Android versions and manufacturers, and information on complete and partial patches released. To date, no published research systematically presents all the vulnerabilities, together with a vulnerability assessment, for the dual-persona feature of Android smartphones. The data provided in this paper open ways for future research and toward a better Android security model for dual persona.
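A study of this kind can start from the NVD's public REST API. The sketch below queries it for Android-related CVEs; the endpoint and field names follow NVD's published v2.0 API (https://nvd.nist.gov/developers) and should be verified before use. This is not the retrieval method used by the authors, who performed a manual analysis.

```python
# Fetch a handful of Android-related CVE entries from the NVD API v2.0.
import requests

resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"keywordSearch": "android", "resultsPerPage": 5},
    timeout=30)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    summary = cve["descriptions"][0]["value"]
    print(cve["id"], "-", summary[:80])
```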
The concept of the adversary model has been widely applied in cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in formalizing the capabilities and limitations of potential attackers, and it enables the designer to verify the security of the scheme or protocol under investigation. Although well established for conventional cryptanalysis, adversary models for attackers who enjoy the advantages of machine learning techniques have not yet been developed thoroughly. In particular, for composed hardware, which is often security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper explores adversary models from the machine learning perspective. To this end, we provide examples of machine learning-based attacks, previously claimed to be infeasible, against hardware primitives such as obfuscation schemes and hardware roots of trust. We demonstrate that this infeasibility assumption is invalid because inaccurate adversary models have been considered in the literature.
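A canonical example of such a machine learning-enabled attack is the modeling attack on an arbiter PUF, a common hardware root-of-trust primitive. The sketch below simulates the PUF with the standard linear delay model and clones it with logistic regression; a real attack would train on measured challenge-response pairs. It illustrates the attack class, not the specific attacks studied in the paper.

```python
# Modeling attack on a simulated 64-stage arbiter PUF.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 5000
w = rng.normal(size=n_stages + 1)            # secret delay parameters

def features(challenges):
    # Standard parity transform for the arbiter-PUF linear model.
    prod = np.cumprod(1 - 2 * challenges[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([prod, np.ones((len(challenges), 1))])

C = rng.integers(0, 2, size=(n_crps, n_stages))
r = (features(C) @ w > 0).astype(int)        # observed responses

model = LogisticRegression(max_iter=2000).fit(features(C), r)
C_test = rng.integers(0, 2, size=(1000, n_stages))
acc = model.score(features(C_test), (features(C_test) @ w > 0).astype(int))
print(f"clone accuracy: {acc:.2%}")          # typically well above 95%
```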
Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often enormous sizes of these networks are beneficial when solving complex tasks, the sheer number of parameters also makes them vulnerable to malicious behavior such as adversarial perturbations, which can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, transferring the more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to existing methods, our proposed defense is much more efficient, requiring only a single additional forward pass to achieve comparable performance.
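For concreteness, the sketch below contrasts the single-step and multi-step adversaries mentioned above, in PyTorch: FGSM takes one signed-gradient step, while a PGD-style attack iterates small steps with projection. `model`, `eps`, and the step sizes are placeholders; the paper's transfer method is not reproduced here.

```python
# Single-step (FGSM) vs. multi-step (PGD-style) adversarial perturbations.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def pgd(model, x, y, eps, alpha=0.01, steps=10):
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, alpha)      # one small FGSM step
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
    return x_adv  # in practice, also clamp to the valid input range
```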