Bibliography
This paper describes the most popular IoT protocols used in IoT embedded systems and examines their advantages and disadvantages. The hardware stage used in the experiment is also described: an ESP32 microcontroller programmed in C. Choosing the correct IoT protocol is essential, and the choice is determined by the purpose, hardware, and software of the system. Many different IoT protocols exist because they cover varying requirements for varying use cases.
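The paper's experiments pair an ESP32 with C, but the protocol trade-offs it surveys are language-agnostic. As a minimal illustration only, the sketch below publishes a sensor reading over MQTT, one of the protocols such comparisons typically cover, using the Python paho-mqtt client; the broker address, topic, and QoS choice are assumed placeholders rather than details from the paper.

```python
# Minimal MQTT publish sketch (paho-mqtt 1.x style constructor).
# Broker address and topic are illustrative assumptions.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883, keepalive=60)  # assumes a local broker, e.g. Mosquitto

# QoS 1 buys at-least-once delivery at the cost of an extra handshake --
# exactly the kind of trade-off that drives IoT protocol selection.
client.publish("sensors/esp32/temperature", payload="21.5", qos=1)
client.disconnect()
```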
This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in personality and the Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N=316) from Amazon Mechanical Turk (MTurk) rated their trust in the ASR and their desire to use ASRs in public and military contexts after watching a 2-minute video depicting the robot interacting with three research confederates. The video showed the robot using force against one of the three confederates with a non-lethal device. Results demonstrated that individual difference factors were related to trust and desired use of the ASR. In multiple regression analyses, agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) showed unique associations with trust. Agreeableness, intellect, and high expectations were uniquely related to desired use in both the public and military domains. This study showed that individual differences influence trust in and desired use of ASRs, demonstrating that societal reactions to ASRs may vary among individuals.
Accessing secured data through a network is a major task in emerging technology. Data needs to be protected from network vulnerabilities, malicious users, hackers, sniffers, and intruders. A novel framework has been designed to provide high security for data transactions over computer networks. The integration of networks in recent trends opens the way to efficient security enhancement through machine learning algorithms. In this system, biometric authentication plays a vital role as a unique approach. A novel mathematical approach is used within the machine learning algorithms to solve these problems and provide the security enhancement. The results show that the novel method yields consistent improvement in the security of data transactions in emerging technologies.
Recognizing the need for proactive analysis of cyber adversary behavior, this paper presents a new event-driven simulation model and implementation to reveal the efforts needed by attackers who have various entry points into a network. Unlike previous models, which focus on the impact of attackers' actions on the defender's infrastructure, this work focuses on the attackers' strategies and actions. By operating on a request-response session level, our model provides an abstraction of how the network infrastructure reacts to access credentials the adversary might have obtained through a variety of strategies. We present the current capabilities of the simulator by showing three variants of the Bronze Butler APT on a network with different user access levels.
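The abstract gives no implementation detail, so the following is only a sketch of what an event-driven, attacker-centric simulation at the request-response session level could look like; the host names, the 30% credential-success probability, and the exponential inter-event times are invented for illustration and are not the paper's simulator API.

```python
# Toy discrete-event simulation of attacker effort: each event is one
# request-response session in which a credential is tried on a host.
import heapq, random

def simulate(seed=42):
    rng = random.Random(seed)
    events = [(0.0, "try_credential", "host-A")]  # (time, action, target); entry point
    compromised, effort = set(), 0
    while events:
        t, action, target = heapq.heappop(events)
        if action == "try_credential" and target not in compromised:
            effort += 1  # each session costs the attacker one unit of effort
            if rng.random() < 0.3:  # assumed chance the credential works
                compromised.add(target)
                # Lateral movement: a foothold schedules attempts on neighbors.
                for nxt in ("host-B", "host-C"):
                    heapq.heappush(events, (t + rng.expovariate(1.0),
                                            "try_credential", nxt))
    return compromised, effort

print(simulate())  # which hosts fell, and how many sessions it took
```

Varying the seed, success probability, or entry point reproduces the paper's core question in miniature: how much effort does a given set of access credentials cost the attacker?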
As modern web browsers gain new and increasingly powerful features, the importance of impact assessments of the new functionality becomes crucial. A web privacy impact assessment of a planned web browser feature, the Ambient Light Sensor API, indicated risks arising from the exposure of overly precise information about the lighting conditions in the user's environment. The analysis led to the demonstration of direct risks of leaks of user data, such as the list of visited websites, or exfiltration of sensitive content across distinct browser contexts. Our work contributed to the creation of web standards, leading to decisions by browser vendors (i.e., obsolescence, non-implementation, or modification to the operation of browser features). We highlight the need to consider broad risks when reviewing new features. We offer practically driven, high-level observations lying at the intersection of web security, privacy risk engineering and modeling, and standardization. We structure our work as a case study of activities spanning over three years.
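To make the risk concrete, here is a toy model (not from the paper, and deliberately outside the browser) of the side channel that overly precise ambient light readings enable: a page modulates screen brightness once per secret bit, and thresholding the lux readings recovers the bits. All numbers are illustrative assumptions.

```python
# Toy model of the ambient-light exfiltration channel.
import random

SECRET = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g., visited-link bits a page wants to leak

def lux_reading(bit):
    base = 80.0 if bit else 20.0        # bright vs. dark screen state
    return base + random.uniform(-5, 5)  # sensor noise

readings = [lux_reading(b) for b in SECRET]
recovered = [1 if r > 50 else 0 for r in readings]
print(recovered == SECRET)  # True: precise readings recover the secret
# Coarsening the sensor output, as the standards work led to, collapses this channel.
```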
With the rapid development of Internet technology, the era of big data is coming. SQL injection attacks are the most common and most dangerous threats to databases. This paper studies the working mode and workflow of the GreenSQL database firewall. Based on an analysis of the characteristics and patterns of SQL injection attack commands, the input model of GreenSQL learning is optimized by constructing patterned input and an optimized whitelist. The research method can improve the learning efficiency of GreenSQL and intercept samples in IPS mode, so as to effectively maintain the security of the backend database.
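In the spirit of the whitelist learning described above, the sketch below normalizes queries by replacing literals with placeholders, so one learned pattern covers a whole family of benign queries while an injected tautology falls outside the whitelist. The regular expressions and example queries are illustrative assumptions, not GreenSQL's actual rules.

```python
# Whitelist pattern learning: normalize literals away, then match patterns.
import re

def normalize(query: str) -> str:
    q = query.lower()
    q = re.sub(r"'[^']*'", "'?'", q)  # string literals -> '?'
    q = re.sub(r"\b\d+\b", "?", q)    # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip()

# "Learning mode": record patterns of known-benign traffic.
whitelist = {normalize("SELECT name FROM users WHERE id = 42")}

def allowed(query: str) -> bool:
    return normalize(query) in whitelist

print(allowed("SELECT name FROM users WHERE id = 7"))         # True: same pattern
print(allowed("SELECT name FROM users WHERE id = 7 OR 1=1"))  # False: intercepted in IPS mode
```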
The main idea of the proposed work is to perform big data security with a Partitional Clustering Algorithm (PCA) on the Hadoop Distributed File System using the perturbation technique. There are numerous clustering methods available that are used to categorize information from big data. PCA discovers the clusters based on an initial partition of the data. In this approach, it is possible to develop a privacy safeguard for the data that still allows the necessary calculations and communication. The performance was analyzed on a health-care database in terms of parameters such as precision, accuracy, and F-score. The results demonstrate that this method reduces the complexity of preserving privacy and achieves better accuracy than existing techniques.
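As a rough illustration of perturbation-based privacy preservation before partitional clustering (not the paper's actual pipeline, noise scale, or data), the sketch below adds noise to toy records and then runs k-means on the perturbed data.

```python
# Perturb-then-cluster sketch: individual values are masked with noise,
# yet the cluster structure remains computable on the perturbed data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
records = rng.normal(size=(200, 4))  # stand-in for health-care records

# Additive-noise perturbation: the privacy safeguard applied before sharing.
perturbed = records + rng.normal(scale=0.1, size=records.shape)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(perturbed)
print(np.bincount(labels))  # cluster sizes recovered from the perturbed data
```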
The current study explored the influence of trust and distrust behaviors on performance, process, and purpose (trustworthiness) perceptions over time when participants were paired with a robot partner. We examined the changes in trustworthiness perceptions after trust violations and trust repair after those violations. Results indicated that performance, process, and purpose perceptions were all affected by trust violations, but perceptions of process and purpose decreased more than performance following a distrust behavior. Similarly, trust repair was achieved in performance perceptions, but trust repair in perceived process and purpose was absent. When a trust violation occurred, process and purpose perceptions deteriorated, failed to recover, and left the robot perceived as untrustworthy. In contrast, although trust violations also decreased perceptions of the partner's performance, subsequent trust behaviors repaired those perceptions. These findings suggest that people are more sensitive to distrust behaviors in their perceptions of process and purpose than in their perceptions of performance.
Today, the integrity of digital documents and the authenticity of their origin are often hard to verify. Existing Public Key Infrastructures (PKIs) are capable of certifying digital identities but do not provide solutions to immutably store signatures, and the process of certification is often not transparent. In this work we propose Veritaa, a Distributed Public Key Infrastructure and Signature Store (DPKISS). The major innovation of Veritaa is the Graph of Trust, a directed graph that uses relations between identity claims to certify the identities and stores signed relations to digital document identifiers. The distributed architecture of Veritaa and the Graph of Trust enables a transparent certification process. To ensure non-repudiation and immutability of all actions that have been signed on the Graph of Trust, an application-specific Distributed Ledger Technology (DLT) is used as secure storage. We designed and implemented a reference implementation of the proposed architecture, created a testbed, and used it to evaluate Veritaa. The evaluation shows the benefits and high performance of the proposed architecture.
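As a toy approximation of the Graph of Trust idea (identities certified through signed relations), the sketch below models identities as Ed25519 keys and edges as signed relations to other identities or document identifiers. The relation names and graph structure are assumptions, not Veritaa's actual design, and the DLT storage layer is omitted entirely.

```python
# Toy "graph of trust": nodes are keypairs, edges are signed relations.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

graph = []  # signed edges: (issuer_public_key, relation, subject, signature)

def add_relation(issuer_key, relation: str, subject: str):
    message = f"{relation}:{subject}".encode()
    graph.append((issuer_key.public_key(), relation, subject,
                  issuer_key.sign(message)))

def verify_edge(pub, relation, subject, sig) -> bool:
    try:
        pub.verify(sig, f"{relation}:{subject}".encode())
        return True
    except InvalidSignature:
        return False

alice = Ed25519PrivateKey.generate()
add_relation(alice, "attests-identity", "bob")
add_relation(alice, "signed-document", "sha256:placeholder-digest")  # illustrative identifier
print(all(verify_edge(*edge) for edge in graph))  # True: every edge is verifiably signed
```

Anchoring such signed edges in a distributed ledger, as Veritaa does, is what makes the recorded relations immutable and non-repudiable.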
Trust is a critical issue in human-robot interaction (HRI), as it is at the core of human willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand beliefs and intentions of others that may differ from one's own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of this entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration while studying trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot could affect humans' trust towards the robot. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, the participants were asked to accept the price evaluations of common objects presented by the robot. The participants' willingness to change their own price judgment of the objects (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented as possessing high-level ToM abilities were trusted more than robots presented with low-level ToM skills.
Vehicle-to-vehicle (V2V) communication systems are currently being prepared for real-world deployment, but they face strong opposition over privacy concerns. Position beacon messages are the main culprit, being broadcast in cleartext and pseudonymously signed up to 10 times per second. So far, no practical solutions have been proposed to encrypt or anonymously authenticate V2V messages. We propose two cryptographic innovations that enhance the privacy of V2V communication. As a core contribution, we introduce zone-encryption schemes, where vehicles generate and authentically distribute encryption keys associated with static geographic zones close to their location. Zone encryption provides security against eavesdropping and, combined with a suitable anonymous authentication scheme, ensures that messages can only be sent by genuine vehicles, while adding only 224 bytes of cryptographic overhead to each message. Our second contribution, called dynamic group signatures with attributes, is an authentication mechanism fine-tuned to the needs of V2V that allows vehicles to authentically distribute keys. Our instantiation features unlimited locally generated pseudonyms, negligible credential download-and-storage costs, identity recovery by a trusted authority, and compact signatures of 216 bytes at a 128-bit security level.
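A heavily simplified sketch of the zone-encryption idea follows: vehicles holding the key for a static geographic zone can decrypt beacons sent in that zone, while eavesdroppers without the key cannot. The zone identifiers, key store, and key-distribution step are simplified assumptions, and the anonymous-authentication layer of the actual scheme is omitted.

```python
# Zone encryption sketch: one shared AES-128-GCM key per geographic zone.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

zone_keys = {}  # zone id -> key; in the scheme, keys are distributed authentically

def key_for_zone(zone_id: str) -> bytes:
    return zone_keys.setdefault(zone_id, AESGCM.generate_key(bit_length=128))

def encrypt_beacon(zone_id: str, beacon: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key_for_zone(zone_id)).encrypt(nonce, beacon, None)

def decrypt_beacon(zone_id: str, blob: bytes) -> bytes:
    return AESGCM(key_for_zone(zone_id)).decrypt(blob[:12], blob[12:], None)

# A nearby vehicle holding the key for zone "52.52,13.40" can read the beacon;
# an eavesdropper without the zone key sees only ciphertext.
ct = encrypt_beacon("52.52,13.40", b"pos=52.5201,13.4049;speed=13.9")
print(decrypt_beacon("52.52,13.40", ct))
```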
With self-driving cars making their way on to our roads, we ask not what it would take for them to gain acceptance among consumers, but what impact they may have on other drivers. How they will be perceived and whether they will be trusted will likely have a major effect on traffic flow and vehicular safety. This work first undertakes an exploratory factor analysis to validate a trust scale for human-robot interaction and shows how previously validated metrics and general trust theory support a more complete model of trust that has increased applicability in the driving domain. We experimentally test this expanded model in the context of human-automation interaction during simulated driving, revealing how using these dimensions uncovers significant biases within human-robot trust that may have particularly deleterious effects when it comes to sharing our future roads with automated vehicles.