Biblio
Initially, legitimate users carried out all of their activities over the internet through a normal web browser [1]. To obtain a more secure service and protection against bot activity, many legitimate users moved from a normal web browser to low-latency anonymous communication tools such as the Tor Browser. Traffic monitoring in the Tor network is difficult because packets travel from source to destination in encrypted form and the Tor network hides the user's identity from the destination. Lately, however, illegitimate users such as attackers and criminals have also moved their activity to the Tor Browser, and the secured Tor network makes the detection of botnets more difficult. Existing botnet detection tools become inefficient against Tor-based bots because of the features of the Tor Browser. Since the Tor Browser is highly secure, and for ethical reasons, performing practical experiments on the live network is not advisable: it could degrade the performance and functionality of the Tor Browser and could endanger users in situations where a failure of Tor's anonymity has severe consequences. Therefore, in the proposed research work, Private Tor Networks (PTN) have been created on physical or virtual machines with dedicated resources, together with a Trusted Middle Node. The motivation behind the trusted middle node is to make the Private Tor Network more efficient and to increase its performance.
The current trend among IoT users is to consume services and data externally, since voluminous processing demands resourceful machines. Instead of relying on a cloud reached over poor connectivity or limited bandwidth, the IoT user prefers cloudlet-based fog computing. However, the choice of cloudlet depends entirely on its trust and reliability. In practice, even though a cloudlet possesses the required trusted platform module (TPM), we argue that the presence of a TPM alone is not enough to make the cloudlet trustworthy, as the TPM supports only the primitive security of the bootstrap. Besides uncertainty in security, other uncertain network conditions (e.g. network bandwidth, latency and the expected time to complete a service request for cloud-based services) may also prevail for the cloudlets. Therefore, in order to evaluate the trust value of multiple cloudlets under uncertainty, this paper proposes an empirical process for trust evaluation, followed by a measure of the trust-based reputation of cloudlets through computational intelligence techniques such as fuzzy logic and ant colony optimization (ACO). In the process, fuzzy logic-based inference and membership evaluation of trust are presented. In addition, ACO and its pheromone communication across different colonies are modeled over multiple cloudlets. Finally, a measure of affinity, or popular trust and reputation, of the cloudlets is also proposed. Together with the application context of multiple cloudlets, the computationally intelligent approaches are evaluated in terms of performance. The contribution is thus directed towards building a trusted cloudlet-based fog platform.
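As an illustration of how such a fuzzy-plus-ACO trust evaluation could be composed (the membership breakpoints, aggregation weights, and pheromone parameters below are assumptions for demonstration and are not taken from the paper):

# Illustrative sketch: fuzzy trust membership plus an ACO-style pheromone
# update for choosing among cloudlets. Breakpoints, weights, and ACO
# parameters are assumed values, not the paper's.
import random

def fuzzy_trust(latency_ms, bandwidth_mbps, tpm_attested):
    """Map raw cloudlet observations to a trust score in [0, 1]."""
    # Triangular-style memberships (assumed breakpoints).
    low_latency = max(0.0, min(1.0, (200 - latency_ms) / 200))
    good_bw = max(0.0, min(1.0, bandwidth_mbps / 100))
    tpm = 1.0 if tpm_attested else 0.3  # a TPM alone is not full trust
    # Weighted aggregation standing in for the fuzzy inference rules.
    return 0.4 * low_latency + 0.3 * good_bw + 0.3 * tpm

def select_cloudlet(cloudlets, pheromone, alpha=1.0, beta=2.0):
    """ACO-style probabilistic choice: pheromone^alpha * trust^beta."""
    weights = [pheromone[c["id"]] ** alpha *
               fuzzy_trust(c["latency_ms"], c["bandwidth_mbps"], c["tpm"]) ** beta
               for c in cloudlets]
    total = sum(weights)
    return random.choices(cloudlets, [w / total for w in weights])[0]

def update_pheromone(pheromone, chosen_id, reward, rho=0.1):
    """Evaporate all trails, then reinforce the chosen cloudlet."""
    for cid in pheromone:
        pheromone[cid] *= (1 - rho)
    pheromone[chosen_id] += reward

cloudlets = [
    {"id": "c1", "latency_ms": 40, "bandwidth_mbps": 80, "tpm": True},
    {"id": "c2", "latency_ms": 150, "bandwidth_mbps": 30, "tpm": False},
]
pheromone = {"c1": 1.0, "c2": 1.0}
chosen = select_cloudlet(cloudlets, pheromone)
update_pheromone(pheromone, chosen["id"],
                 reward=fuzzy_trust(chosen["latency_ms"],
                                    chosen["bandwidth_mbps"], chosen["tpm"]))

In this sketch the fuzzy score plays the role of the heuristic desirability in the ACO transition rule, while the pheromone trail accumulates the reputation that successful service completions feed back to each cloudlet.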
In this paper, a time-driven, performance-aware mathematical model of trust in the robot is proposed for a Human-Robot Collaboration (HRC) framework. The proposed trust model is based on the performance of both the human operator and the robot. The human operator's performance is modeled in terms of physical and cognitive performance, while the robot's performance is modeled over its unpredictable, predictable, dependable, and faithful operation regions. The model is validated via different simulation scenarios. The simulation results show that trust in the robot in the HRC framework is governed by both robot and human operator performance and can be improved by enhancing the robot's performance.
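The abstract does not reproduce the model itself; purely as an illustration, time-driven, performance-based trust models of this kind are often written as a discrete-time update in which current trust depends on its previous value and on recent robot and human performance, for example

    T(k) = A\,T(k-1) + B_1\,P_R(k) + B_2\,P_R(k-1) + C_1\,P_H(k) + C_2\,P_H(k-1),

where P_R and P_H denote the robot and human operator performance and A, B_1, B_2, C_1, C_2 are weighting coefficients; the specific form and coefficients used in the paper may differ.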
Conflicts may arise at any time during military debriefing meetings, especially in high-intensity deployed settings. When such conflicts arise, it takes time to get everyone back into a receptive state of mind so that they engage in reflective discussion rather than unproductive arguing. Some have proposed that social robots equipped with social abilities, such as emotion regulation through rapport building, may help to de-escalate these situations and facilitate critical operational decisions. However, in military settings, the AI agent used in the pre-brief of a mission may not be the same one used in the debrief. The purpose of this study was to determine whether a brief rapport-building session with a social robot could create a connection between a human and a robot agent, and whether consistency in the embodiment of the robot agent was necessary for maintaining this connection once formed. We report the results of a pilot study conducted at the United States Air Force Academy which simulated a military mission (i.e., Gravity and Strike). Measures of participants' connection with the agent, sense of trust, and overall likeability indicated that early rapport building can be beneficial for military missions.
This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in personality and the Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N=316) from Amazon Mechanical Turk (MTurk) rated their trust in the ASR and their desire to use ASRs in public and military contexts after watching a 2-minute video depicting the robot interacting with three research confederates, in which the robot used force against one of the confederates with a non-lethal device. Results demonstrated that individual-difference factors were related to trust in and desired use of the ASR. In multiple regression analyses, agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) showed unique associations with trust. Agreeableness, intellect, and high expectations were uniquely related to desired use in both public and military domains. This study showed that individual differences influence trust and desired use of ASRs, demonstrating that societal reactions to ASRs may vary across individuals.
The current study explored the influence of trust and distrust behaviors on performance, process, and purpose (trustworthiness) perceptions over time when participants were paired with a robot partner. We examined how trustworthiness perceptions changed after trust violations and whether trust was repaired after those violations. Results indicated that performance, process, and purpose perceptions were all affected by trust violations, but perceptions of process and purpose decreased more than performance following a distrust behavior. Trust repair was achieved for performance perceptions, whereas trust repair for perceived process and purpose was absent: once a trust violation occurred, process and purpose perceptions deteriorated, failed to recover, and left the robot perceived as untrustworthy. In contrast, although trust violations also decreased perceptions of the partner's performance, subsequent trust behaviors led to trust repair on that dimension. These findings suggest that people are more sensitive to distrust behaviors in their perceptions of process and purpose than in their perceptions of performance.
As we expect the presence of autonomous robots in our everyday life to increase, we must consider that people will not only have to accept robots as a fundamental part of their lives, but will also have to trust them to reliably and securely engage in collaborative tasks. Several studies have shown that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will more readily gain people's trust. In this study, we aimed to assess whether the use of social behaviours and natural communication can affect humans' sense of trust and companionship towards robots. We conducted a between-subjects study in which participants' trust was tested in three scenarios of increasing trust criticality (low, medium, high) and in which they interacted with either a social or a non-social robot. Our findings showed that participants trusted the social and non-social robots equally in the low- and medium-consequence scenarios. In contrast, in the more sensitive task, participants' choice to trust the robot was affected more strongly by the robot expressing social cues, with a consequent decrease in their trust in the robot.
Blockchain technology is a cornerstone of digital trust and system decentralization. The need to eliminate trust in computing systems has prompted researchers to investigate the applicability of Blockchain for decentralizing conventional security models. Specifically, researchers continuously aim at minimizing trust in the well-known Public Key Infrastructure (PKI) model, which currently requires a trusted Certificate Authority (CA) to sign digital certificates. Recently, the Automated Certificate Management Environment (ACME) was standardized as a certificate issuance automation protocol. It minimizes human interaction by enabling certificates to be automatically requested, verified, and installed on servers. ACME solved only the automation issue; the trust concerns remain, since a trusted CA is still required. In this paper we propose decentralizing the ACME protocol using Blockchain technology, to address the trust issues of the existing PKI model and to eliminate the need for a trusted CA. The system was implemented and tested on the Ethereum Blockchain, and the results showed that the system is feasible in terms of cost, speed, and applicability on a wide range of devices, including Internet of Things (IoT) devices.
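As a rough sketch of the core idea (a registry that binds domains to certificate hashes so that clients verify against the chain rather than a CA signature), the following stand-in models such a contract's behavior in memory; the class name, method names, and first-come ownership rule are illustrative assumptions, not the paper's actual Ethereum smart contract:

# Minimal sketch of a Blockchain-backed certificate registry replacing a
# trusted CA: domain owners publish a hash of their certificate under a key
# they control, and clients check presented certificates against the registry.
# The in-memory dictionary stands in for on-chain contract storage.
import hashlib

class CertRegistry:
    def __init__(self):
        self._records = {}  # domain -> (owner_key, cert_hash)

    def register(self, domain, owner_key, cert_pem):
        """Bind a certificate hash to a domain; first-come ownership (assumed rule)."""
        if domain in self._records and self._records[domain][0] != owner_key:
            raise PermissionError("domain already owned by another key")
        cert_hash = hashlib.sha256(cert_pem.encode()).hexdigest()
        self._records[domain] = (owner_key, cert_hash)

    def verify(self, domain, cert_pem):
        """Client-side check: does the presented cert match the registry entry?"""
        record = self._records.get(domain)
        if record is None:
            return False
        return record[1] == hashlib.sha256(cert_pem.encode()).hexdigest()

registry = CertRegistry()
registry.register("example.org", owner_key="0xOwner", cert_pem="---CERT---")
assert registry.verify("example.org", "---CERT---")
assert not registry.verify("example.org", "---FORGED---")

In the decentralized ACME setting described by the paper, the registration and lookup steps above would be transactions and calls against a contract on Ethereum rather than operations on a local object.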