Biblio

Found 288 results

Filters: Keyword is artificial intelligence
2021-03-04
Abedin, N. F., Bawm, R., Sarwar, T., Saifuddin, M., Rahman, M. A., Hossain, S..  2020.  Phishing Attack Detection using Machine Learning Classification Techniques. 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS). :1125–1130.

Phishing attacks are the most common form of attack on the internet. Attackers attempt to collect a user's data without consent through emails, URLs, or any other link that leads to a deceptive page, where the user is persuaded to take specific actions that complete the attack. Such attacks can give the attacker vital information that often allows impersonation of the victim, enabling actions only the victim should be able to perform, such as carrying out transactions, messaging someone else, or simply accessing the victim's data. Many studies have discussed possible approaches to prevent such attacks. This work applies three machine learning algorithms to predict a website's phishing status. The models are trained on URL-based features, and a proposed software tool aims to prevent zero-day attacks by distinguishing legitimate websites from phishing websites through analysis of the URL alone. The random forest classifier performed with a precision of 97%, a recall of 99%, and an F1 score of 97%. The proposed model is fast and efficient because it works on the URL alone and, unlike past studies, requires no other resources for analysis.
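
As a rough illustration of the URL-only pipeline described above, the sketch below trains a scikit-learn random forest on a handful of common lexical URL features. The feature choices and the commented-out data loading are illustrative assumptions, not the authors' exact feature set.

```python
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

def url_features(url):
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                       # very long URLs are a common phishing signal
        url.count('.'),                 # many subdomains
        url.count('-'),                 # hyphenated lookalike domains
        int('@' in url),                # '@' hides the real host
        int(parsed.scheme == 'https'),  # TLS in use (phishing pages often lack it)
        int(any(c.isdigit() for c in host)),  # digits/IPs in the hostname
    ]

# Assumed data loader: urls is a list of strings, labels is 1 for phishing.
# urls, labels = load_dataset()
# X = [url_features(u) for u in urls]
# X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2)
# clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
# pred = clf.predict(X_te)
# print(precision_score(y_te, pred), recall_score(y_te, pred), f1_score(y_te, pred))
```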

2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or use existing ones, to detect many features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification from the side profile by extracting facial features and evaluating the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors, and the final stage uses weighted means to measure similarity. The research approaches the problem with an eXplainable Artificial Intelligence (XAI) method. Findings, based on a small dataset, indicate that the approach offers encouraging results, and further investigation could have a significant impact on how face profiles are identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
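
A minimal sketch of the described pipeline stages, geometric ratio expressions turned into a feature vector and compared by a weighted mean, is given below. The particular landmarks, ratios, and weights are hypothetical stand-ins; the paper's exact expressions are not reproduced here.

```python
import numpy as np

def ratio_vector(landmarks):
    # landmarks: dict of 2-D profile points; the keys and the three
    # ratios below are hypothetical, not the paper's expressions.
    nose, chin, brow, ear = (np.asarray(landmarks[k])
                             for k in ("nose", "chin", "brow", "ear"))
    d = np.linalg.norm
    return np.array([
        d(nose - chin) / d(brow - chin),  # lower-face proportion
        d(brow - nose) / d(nose - chin),  # mid-face proportion
        d(ear - nose) / d(brow - chin),   # depth proportion
    ])

def weighted_similarity(v1, v2, weights):
    # Weighted mean of per-ratio agreement; 1.0 means identical ratios.
    agreement = 1.0 - np.abs(v1 - v2) / np.maximum(v1, v2)
    return float(np.average(agreement, weights=weights))
```
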
Tao, J., Xiong, Y., Zhao, S., Xu, Y., Lin, J., Wu, R., Fan, C..  2020.  XAI-Driven Explainable Multi-view Game Cheating Detection. 2020 IEEE Conference on Games (CoG). :144–151.
Online gaming is one of the most successful applications, with large numbers of players interacting in an online persistent virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which has brought great harm to game health and player enjoyment. The game industry has devoted much effort to cheating detection with multi-view data sources and has achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views remains a challenging task. To serve the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework driven by explainable AI (XAI). It combines cheating explainers with cheating classifiers from different views to generate individual, local, and global explanations, contributing to evidence generation, reason generation, model debugging, and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. The framework also generalizes easily to other related tasks in online games, such as explainable recommender systems and explainable churn prediction.
D’Alterio, P., Garibaldi, J. M., John, R. I..  2020.  Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI). 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
In recent years, there has been a growing need for intelligent systems that not only provide reliable classifications but can also explain the decisions they make. The demand for increased explainability has led to the emergence of explainable artificial intelligence (XAI) as a specific research field. In this context, fuzzy logic systems are a promising tool thanks to their inherently interpretable structure. The use of a rule base and linguistic terms has, in fact, allowed researchers to create models that can produce natural-language explanations for each of their classifications. So far, however, designing systems that use interval type-2 (IT2) fuzzy logic and also explain their outputs has been very challenging, partly due to the presence of the type-reduction step. This paper shows how constrained interval type-2 (CIT2) fuzzy sets are a valid alternative to conventional interval type-2 sets for addressing this issue. Through the analysis of two case studies from the medical domain, it is shown how explainable CIT2 classifiers are produced. These systems can explain which rules contributed to each endpoint of the output interval centroid while showing, in these examples, the same level of accuracy as their IT2 counterparts.
Houzé, É, Diaconescu, A., Dessalles, J.-L., Mengay, D., Schumann, M..  2020.  A Decentralized Approach to Explanatory Artificial Intelligence for Autonomic Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :115–120.
While Explanatory AI (XAI) is attracting increasing interest from academic research, most AI-based solutions still rely on black box methods. This is unsuitable for certain domains, such as smart homes, where transparency is key to gaining user trust and solution adoption. Moreover, smart homes are challenging environments for XAI, as they are decentralized systems that undergo runtime changes. We aim to develop an XAI solution for addressing problems that an autonomic management system either could not resolve or resolved in a surprising manner. This implies situations where the current state of affairs is not what the user expected, hence requiring an explanation. The objective is to solve the apparent conflict between expectation and observation through understandable logical steps, thus generating an argumentative dialogue. While focusing on the smart home domain, our approach is intended to be generic and transferable to other cyber-physical systems offering similar challenges. This position paper focuses on proposing a decentralized algorithm, called D-CAN, and its corresponding generic decentralized architecture. This approach is particularly suited for SISSY systems, as it enables XAI functions to be extended and updated when devices join and leave the managed system dynamically. We illustrate our proposal via several representative case studies from the smart home domain.
Meskauskas, Z., Jasinevicius, R., Kazanavicius, E., Petrauskas, V..  2020.  XAI-Based Fuzzy SWOT Maps for Analysis of Complex Systems. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The classical SWOT methodology, and many of the tools based on it, is very static: it is applied to a single stable project and lacks dynamics [1]. This paper proposes combining several SWOT analyses, enriched with the computing with words (CWW) paradigm, into a single network. In this network, each individual situation analysis is treated as a node. The whole structure is based on fuzzy cognitive maps (FCM), which support forward and backward chaining; the structure is therefore called fuzzy SWOT maps. The fuzzy SWOT maps methodology newly introduces the dynamics of interacting projects, as found in a real dynamic environment. The network has explainable artificial intelligence (XAI) traits because each node is a "white box": the entire reasoning chain can be tracked and checked to determine why a particular decision was made and why and how one project affects another. To confirm the viability of the approach, a case with three interacting projects was analyzed with a prototype software tool, and results are presented.
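
Since fuzzy SWOT maps are built on fuzzy cognitive maps, a tiny FCM forward-chaining step clarifies the mechanism. The sketch below uses hypothetical weights between three nodes; it illustrates the generic FCM update, not the authors' tool.

```python
import numpy as np

def fcm_step(activations, weights, squash=np.tanh):
    # One forward-chaining step of a fuzzy cognitive map: each node's
    # next activation is a squashed weighted sum of its influencers.
    return squash(weights.T @ activations)

# Hypothetical 3-node network: node 0 reinforces node 1 (0.7),
# node 1 inhibits node 2 (-0.5), node 2 weakly feeds back to node 0.
W = np.array([[0.0, 0.7,  0.0],
              [0.0, 0.0, -0.5],
              [0.2, 0.0,  0.0]])
a = np.array([1.0, 0.0, 0.5])
for _ in range(20):            # iterate toward a fixed point
    a = fcm_step(a, W)
print(a)  # every step is inspectable, hence the "white box" claim
```
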
Kuppa, A., Le-Khac, N.-A..  2020.  Black Box Attacks on Explainable Artificial Intelligence(XAI) methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are employed to make important predictions, the demand from stakeholders for transparency and explainability is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate the proposed system on three security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
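
The attack itself is not reproduced in the abstract, but the consistency property it targets can be probed with black-box queries alone. The sketch below is a generic consistency probe, assuming only some attribution function explain(x); it is not the paper's attack.

```python
import numpy as np

def topk(attribution, k=5):
    return set(np.argsort(np.abs(attribution))[-k:])

def consistency(explain, x, noise=0.01, trials=20, k=5):
    # A faithful explainer should keep roughly the same top-k features
    # for imperceptibly perturbed inputs; low overlap signals that the
    # explanation can be shifted without changing the input much.
    base = topk(explain(x), k)
    overlaps = []
    for _ in range(trials):
        x_p = x + np.random.normal(0.0, noise, size=x.shape)
        overlaps.append(len(base & topk(explain(x_p), k)) / k)
    return float(np.mean(overlaps))  # 1.0 = perfectly consistent
```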

2021-02-22
Martinelli, F., Marulli, F., Mercaldo, F., Marrone, S., Santone, A..  2020.  Enhanced Privacy and Data Protection using Natural Language Processing and Artificial Intelligence. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Artificial intelligence systems have brought significant benefits to users and society, but as the data that feeds them keeps growing, so does the surface for privacy and security leaks. Severe threats to the right to privacy have obliged governments to enact specific regulations to ensure privacy is preserved in any kind of transaction involving sensitive information. In the case of digital and/or physical documents containing sensitive information, the right to privacy can be preserved by data obfuscation procedures. The capability of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing volume of documents to process. Artificial intelligence could substantially reduce the effort of the human officers and speed up these processes; however, until enough knowledge is available in a machine-readable format, effective automatic systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built domain-specific knowledge datasets from scratch for training artificial intelligence models that support human experts in privacy-preserving tasks. We exploited a mixture of natural language processing techniques applied to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting the validation. The results, measured in terms of precision, recall, and F1-score across these two domains, were promising and encourage further investigation.
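
Pattern matching is only the most basic of the NLP ingredients such a pipeline might use, but it shows the tag-then-obfuscate flow. The labels and regular expressions below are illustrative assumptions, not the authors' models.

```python
import re

PATTERNS = {  # illustrative sensitive-span patterns
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "DATE":  re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}

def tag_sensitive(text):
    # Return (start, end, label) spans, sorted by position.
    spans = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), label))
    return sorted(spans)

def obfuscate(text):
    # Replace spans back-to-front so earlier offsets stay valid.
    for start, end, label in reversed(tag_sensitive(text)):
        text = text[:start] + f"[{label}]" + text[end:]
    return text

print(obfuscate("Contact jane.doe@example.org on 12/03/2019."))
# -> Contact [EMAIL] on [DATE].
```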

2021-02-16
Kowalski, P., Zocholl, M., Jousselme, A.-L..  2020.  Explainability in threat assessment with evidential networks and sensitivity spaces. 2020 IEEE 23rd International Conference on Information Fusion (FUSION). :1–8.
One of the main threats to underwater communication cables identified in recent years is possible tampering or damage by malicious actors. This paper proposes a solution with explanation abilities to detect and investigate this kind of threat within the evidence theory framework. The reasoning scheme implements the traditional “opportunity-capability-intent” threat model to assess the degree to which a given vessel may pose a threat. The scenario discussed considers a variety of possible pieces of information available from different sources. A source quality model is used to reason with partially reliable sources, and the impact of this meta-information on the overall assessment is illustrated. Examples of uncertain relationships between the relevant variables are modelled, and the constructed model is used to investigate the probability of threat for four vessels of different types. One of these cases is discussed in more detail to demonstrate the explanation abilities. Explanations about the inference are provided through sensitivity spaces, in which the impact of the different pieces of information on the reasoning is compared.
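
The abstract does not state the combination rule, but evidence-theory frameworks of this kind conventionally fuse mass functions with Dempster's rule. The sketch below combines two hypothetical bodies of evidence about a vessel; the masses and frame are illustrative only.

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule over mass functions keyed by frozenset focal elements.
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

theta = frozenset({"threat", "no_threat"})       # frame of discernment
m_intent = {frozenset({"threat"}): 0.6, theta: 0.4}            # hypothetical
m_capability = {frozenset({"threat"}): 0.5,
                frozenset({"no_threat"}): 0.2, theta: 0.3}     # hypothetical
print(dempster_combine(m_intent, m_capability))
# {'threat'}: ~0.77, {'no_threat'}: ~0.09, theta (ignorance): ~0.14
```
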
Hongbin, Z., Wei, W., Wengdong, S..  2020.  Safety and Damage Assessment Method of Transmission Line Tower in Goaf Based on Artificial Intelligence. 2020 IEEE/IAS Industrial and Commercial Power System Asia (I CPS Asia). :1474–1479.
Transmission line towers in coal-mine goaf areas are affected by surface subsidence, which can cause settlement, tilting, and even tower collapse, threatening the operational safety of the towers. Therefore, a safety and damage assessment method for transmission line towers in goaf, based on artificial intelligence, is proposed. First, a geometric model of the coal seam in the goaf and a structural reliability model of the transmission line tower are constructed to evaluate safety. Then, the random forest algorithm from artificial intelligence is used to assess tower damage so that protective measures can be taken in time. Finally, a finite element simulation model of tower-foundation interaction is built, and its safety (force) and damage identification are analyzed experimentally. The results show that the proposed method ensures high accuracy of damage assessment and reliable judgment of transmission line tower safety within the allowable error.
2021-02-10
Mishra, P., Gupta, C..  2020.  Cookies in a Cross-site scripting: Type, Utilization, Detection, Protection and Remediation. 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :1056–1059.
According to Cisco's 2018 annual report, web applications are exposed to several security vulnerabilities that hackers exploit in ways that are becoming more frequent, targeted, and sophisticated. Of all attack attempts, more than 40% are performed via cross-site scripting (XSS). A number of methods have been proposed to examine such vulnerabilities. This paper therefore addresses one facet of this problem: cookies in XSS attacks. The objective is to present an overview of cookies, their types, vulnerabilities, and policies, along with their detection, protection, and mitigation via different tools and methods, including cryptography and artificial intelligence techniques. Open issues, directions, and future research challenges are also discussed.
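
On the protection side the survey touches on, the standard cookie-hardening attributes can be shown in a few lines with Python's standard library; the cookie name and value below are placeholders.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-random-token"   # placeholder; never store secrets here
cookie["session"]["httponly"] = True        # blocks document.cookie in injected scripts
cookie["session"]["secure"] = True          # cookie sent over HTTPS only
cookie["session"]["samesite"] = "Strict"    # not sent on cross-site requests
cookie["session"]["path"] = "/"
print(cookie.output())
# e.g. Set-Cookie: session=opaque-random-token; Path=/; Secure; HttpOnly; SameSite=Strict
```
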
2021-02-03
Bellas, A., Perrin, S., Malone, B., Rogers, K., Lucas, G., Phillips, E., Tossell, C., Visser, E. d.  2020.  Rapport Building with Social Robots as a Method for Improving Mission Debriefing in Human-Robot Teams. 2020 Systems and Information Engineering Design Symposium (SIEDS). :160–163.

Conflicts may arise at any time during military debriefing meetings, especially in high-intensity deployed settings. When such conflicts arise, it takes time to get everyone back into a receptive state of mind so that they engage in reflective discussion rather than unproductive arguing. Some have proposed that social robots equipped with social abilities such as emotion regulation through rapport building may help to de-escalate these situations and facilitate critical operational decisions. However, in military settings, the AI agent used in the pre-brief of a mission may not be the same one used in the debrief. The purpose of this study was to determine whether a brief rapport-building session with a social robot could create a connection between a human and a robot agent, and whether consistency in the embodiment of the robot agent was necessary for maintaining this connection once formed. We report the results of a pilot study conducted at the United States Air Force Academy which simulated a military mission (i.e., Gravity and Strike). Measures of participants' connection with the agent, sense of trust, and overall likeability revealed that early rapport building can be beneficial for military missions.

Xu, J., Howard, A..  2020.  How much do you Trust your Self-Driving Car? Exploring Human-Robot Trust in High-Risk Scenarios. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4273–4280.

Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing the issue of trust due to the high-risk nature of erroneous decisions being made. The present study aims to investigate decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed and presented. An online and an in-person experiment were conducted to examine the impacts of mistakes made by the self-driving AI agent on participants’ decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions in a time-limited situation. Behavior analysis indicated a similar relative trend in the decisions across these two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants’ trust. In addition, similar overall experience and feelings across the two experimental conditions were reported. The findings in this study add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research work in the field of human-robot trust.

Aliman, N.-M., Kester, L..  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130–137.

Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address the emerging ramifications. The focus is often set on ethical and safe design that forestalls unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice – including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is responsible to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.

2021-02-01
Ajenaghughrure, I. B., Sousa, S. C. da Costa, Lamas, D..  2020.  Risk and Trust in artificial intelligence technologies: A case study of Autonomous Vehicles. 2020 13th International Conference on Human System Interaction (HSI). :118–123.
This study investigates how risk influences users' trust before and after interactions with technologies such as autonomous vehicles (AVs). It also examines the psychophysiological correlates of users' trust through electrodermal activity responses. Eighteen carefully selected participants embarked on a hypothetical trip in an autonomous vehicle driving game. To stay safe throughout the drive under four risk conditions (very high risk, high risk, low risk, and no risk), based on automotive safety integrity levels (ASIL D, C, B, A), participants exhibited either high or low trust by judging the AV to be more or less trustworthy and consequently relying on the artificial intelligence or on the joystick to control the vehicle. The results show a significant increase in users' trust in, and delegation of control to, AVs as risk decreases, and vice versa. In addition, there was a significant difference between users' initial trust before and after interacting with AVs under varying risk conditions. Finally, there was a significant correlation in users' psychophysiological responses (electrodermal activity) when exhibiting higher and lower levels of trust towards AVs. The implications of these results and future research opportunities are discussed.
Wickramasinghe, C. S., Marino, D. L., Grandio, J., Manic, M..  2020.  Trustworthy AI Development Guidelines for Human System Interaction. 2020 13th International Conference on Human System Interaction (HSI). :130–136.
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the lack of transparency in existing AI systems. As a solution, the “Trustworthy AI” research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work focuses on building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of the concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the human-AI interactions that occur across the AI system life cycle.
2021-01-28
Lin, G., Zhao, H., Zhao, L., Gan, X., Yao, Z..  2020.  Differential Privacy Information Publishing Algorithm based on Cluster Anonymity. 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). :226–233.

With the development of Internet technology, attackers acquire increasingly rich background knowledge, which makes anonymity models susceptible to background-knowledge attacks. Although the differential privacy model can resist such attacks, it reduces the utility of the data. This paper proposes a differential privacy information publishing algorithm based on cluster anonymity. The algorithm uses a KD-tree-based cluster anonymization algorithm to cluster the original data sets and obtain anonymized tables, then adds noise to the anonymized table so that the release satisfies the definition of differential privacy. The algorithm is compared with the DCMDP (Density-based Clustering Mechanism with Differential Privacy) algorithm under different privacy budgets. The experiments show that, as the privacy budget increases, the algorithm reduces the information loss of the published data by about 80%.
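
The final noise-addition step maps onto the standard Laplace mechanism. The sketch below, with illustrative counts, shows how the privacy budget epsilon trades off against noise, matching the reported trend of lower information loss at larger budgets.

```python
import numpy as np

def laplace_release(counts, epsilon, sensitivity=1.0):
    # Laplace mechanism: noise with scale sensitivity/epsilon makes the
    # released counts satisfy epsilon-differential privacy. In the paper
    # this step is applied to the cluster-anonymized table, not raw rows.
    return counts + np.random.laplace(0.0, sensitivity / epsilon, size=counts.shape)

counts = np.array([120.0, 45.0, 80.0])       # illustrative anonymized counts
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_release(counts, eps))  # larger budget, less noise
```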

2021-01-15
Đorđević, M., Milivojević, M., Gavrovska, A..  2019.  DeepFake Video Analysis using SIFT Features. 2019 27th Telecommunications Forum (TELFOR). :1–4.
Recent advances in face-swapping via DeepFake algorithms, which replace one person's face with another's, truly demonstrate what artificial intelligence and deep learning are capable of. Deepfakes in still images or video clips represent forgeries and tampered visual information. They are becoming increasingly convincing and, in some cases, difficult to notice. In this paper we analyze deepfakes using SIFT (Scale-Invariant Feature Transform) features. The experimental results show that SIFT keypoints can be valuable in deepfake analysis.
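
The SIFT extraction stage is straightforward to reproduce with OpenCV; how the paper then compares keypoints between genuine and tampered frames is not specified in the abstract, so only the extraction step is sketched, with hypothetical file names.

```python
import cv2

sift = cv2.SIFT_create()  # requires opencv-python >= 4.4

def frame_keypoints(path):
    # Extract SIFT keypoints and 128-d descriptors from one frame.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return sift.detectAndCompute(gray, None)

# Hypothetical frame files; keypoint counts alone are a simple cue.
# kp_real, _ = frame_keypoints("real_frame.png")
# kp_fake, _ = frame_keypoints("fake_frame.png")
# print(len(kp_real), len(kp_fake))
```
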
2021-01-11
Amrutha, C. V., Jyotsna, C., Amudha, J..  2020.  Deep Learning Approach for Suspicious Activity Detection from Surveillance Video. 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). :335–339.

Video surveillance plays a pivotal role in today's world, and the technology has advanced greatly with the introduction of artificial intelligence, machine learning, and deep learning. Combining these, various systems are now in place that can distinguish suspicious behaviors in live footage. Human behaviour is the most unpredictable, and determining whether it is suspicious or normal is very difficult. A deep learning approach is used to detect suspicious or normal activity in an academic environment and to send an alert message to the corresponding authority when a suspicious activity is predicted. Monitoring is performed on consecutive frames extracted from the video. The framework is divided into two parts: in the first, features are computed from video frames; in the second, a classifier uses the obtained features to predict the class as suspicious or normal.
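
A hedged sketch of such a two-part split is given below: per-frame features from a pretrained CNN, then a lightweight classifier over them. The choice of MobileNetV2 and an SVM is an assumption for illustration; the paper's actual architecture is not reproduced.

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.svm import SVC

# Part 1: per-frame features from a pretrained CNN (assumed backbone).
extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def frame_features(frames):
    # frames: array of shape (n, 224, 224, 3), uint8 RGB
    return extractor.predict(preprocess_input(frames.astype("float32")))

# Part 2: a lightweight classifier over the features (assumed SVM).
# X = frame_features(train_frames); y = labels        # 1 = suspicious
# clf = SVC(probability=True).fit(X, y)
# if clf.predict(frame_features(new_frames)).any():
#     send_alert()                                    # hypothetical hook
```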

Whyte, C..  2020.  Problems of Poison: New Paradigms and "Agreed" Competition in the Era of AI-Enabled Cyber Operations. 2020 12th International Conference on Cyber Conflict (CyCon). 1300:215–232.
Few developments seem as poised to alter the characteristics of security in the digital age as the advent of artificial intelligence (AI) technologies. For national defense establishments, the emergence of AI techniques is particularly worrisome, not least because prototype applications already exist. Cyber attacks augmented by AI portend the tailored manipulation of human vectors within the attack surface of important societal systems at great scale, as well as opportunities for calamity resulting from the secondment of technical skill from the hacker to the algorithm. Arguably most important, however, is the fact that AI-enabled cyber campaigns contain great potential for operational obfuscation and strategic misdirection. At the operational level, techniques for piggybacking onto routine activities and for adaptive evasion of security protocols add uncertainty, complicating the defensive mission particularly where adversarial learning tools are employed in offense. Strategically, actors using AI-enabled cyber operations to persistently shape the spectrum of cyber contention may be able to pursue conflict outcomes beyond the expected scope of adversary operations. On the other hand, AI-augmented cyber defenses incorporated into national defense postures are likely to be vulnerable to "poisoning" attacks that predict, manipulate and subvert the functionality of defensive algorithms. This article takes on two primary tasks. First, it considers and categorizes the primary ways in which AI technologies are likely to augment offensive cyber operations, including the shape of cyber activities designed to target AI systems. Then, it frames a discussion of implications for deterrence in cyberspace by referring to the policy of persistent engagement, agreed competition and forward defense promulgated in 2018 by the United States. Here, it is argued that the centrality of cyberspace to the deployment and operation of soon-to-be-ubiquitous AI systems implies new motivations for operation within the domain, complicating numerous assumptions that underlie current approaches. In particular, AI cyber operations pose unique measurement issues for the policy regime.
2020-12-28
Meng, C., Zhou, L..  2020.  Big Data Encryption Technology Based on ASCII And Application On Credit Supervision. 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). :79–82.

A big data platform provides business units with data platforms, data products, and data services, integrating all data to fully analyze and exploit its intrinsic value. Data accessed by big data platforms may include much private and sensitive user information, such as hotel stay history and payment information, which is at risk of leakage. This paper first analyzes the risks of data leakage, then introduces in detail the theoretical basis and common methods of data desensitization technology, and finally puts forward an effective ASCII-based application for market-entity credit supervision. The application is aimed at the insufficient breadth and depth of data utilization by the enterprises involved, lagging regulatory laws and standards, the separation of credit construction from market supervision business, and the credit constraints of data governance.
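
The abstract does not specify the ASCII-based scheme, so the sketch below is only a generic keyed printable-ASCII substitution in the spirit of the title: reversible desensitization for display purposes, not cryptographically strong encryption, and not the authors' method.

```python
START, SPAN = 32, 95  # printable ASCII codes 32..126

def mask(text, key):
    # Keyed shift within printable ASCII; reversible, for display
    # desensitization only (NOT cryptographically strong).
    out = []
    for i, ch in enumerate(text):
        if START <= ord(ch) < START + SPAN:
            shift = key[i % len(key)]
            out.append(chr(START + (ord(ch) - START + shift) % SPAN))
        else:
            out.append(ch)
    return "".join(out)

def unmask(text, key):
    return mask(text, [-k for k in key])

key = [7, 23, 41, 5]  # hypothetical per-deployment key
masked = mask("Hotel stay 2020-11-02, card 4111-xxxx-xxxx-1111", key)
print(masked)
print(unmask(masked, key))
```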

2020-12-21
Enkhtaivan, B., Inoue, A..  2020.  Mediating Data Trustworthiness by Using Trusted Hardware between IoT Devices and Blockchain. 2020 IEEE International Conference on Smart Internet of Things (SmartIoT). :314–318.
In recent years, with the progress of data analysis methods utilizing artificial intelligence (AI) technology, concepts of smart cities that collect data from IoT devices and create value by analyzing it have been proposed. However, making sure that the data is not tampered with is of the utmost importance. One way to do this is to utilize blockchain technology to record and trace the history of the data. Park and Kim proposed ensuring the trustworthiness of the data by utilizing an IoT device with a trusted execution environment (TEE). Guan et al. proposed authenticating an IoT device and mediating data using a TEE, with the device authenticated through its physically unclonable function. IoT devices usually lack the resources necessary for creating transactions for the blockchain ledger. In this paper, we present a secure protocol in which a TEE acts as a proxy for the IoT devices and creates the necessary transactions for the blockchain. We use an authenticated encryption method on the data transmission between the IoT device and the TEE to authenticate the device and ensure the integrity and confidentiality of the data generated by the IoT devices.
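
The authenticated-encryption link between device and TEE proxy can be illustrated with AES-GCM, which provides confidentiality plus an integrity tag and can bind a device identifier as associated data. Key distribution (the paper's PUF-based authentication) is out of scope; the key, device ID, and payload below are placeholders.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # pre-shared with the TEE in practice
aead = AESGCM(key)

def device_send(reading: bytes, device_id: bytes):
    nonce = os.urandom(12)                 # never reuse a nonce under one key
    # device_id is bound as associated data: authenticated, not encrypted
    return nonce, aead.encrypt(nonce, reading, device_id)

def tee_receive(nonce, ciphertext, device_id):
    # Raises cryptography.exceptions.InvalidTag on any tampering.
    return aead.decrypt(nonce, ciphertext, device_id)

nonce, ct = device_send(b'{"temp": 21.4}', b"sensor-042")
print(tee_receive(nonce, ct, b"sensor-042"))  # b'{"temp": 21.4}'
```
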
2020-12-14
Lee, M.-F. R., Chien, T.-W..  2020.  Artificial Intelligence and Internet of Things for Robotic Disaster Response. 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS). :1–6.
After the Fukushima nuclear disaster and the Wenchuan earthquake, the relevant government agencies recognized the urgency of disaster-response robots. Taiwan experiences many natural and man-made disasters, and it is usually impossible to dispatch personnel to search or explore immediately. The project proposes an Intelligent Internet of Things (AIoT: Artificial Intelligence + Internet of Things) architecture to coordinate ground, surface, aerial, and underwater robots and apply them to disaster response. The swarm robots collect environmental big data from the disaster site and transmit it via the Internet of Things from the field workstation to the cloud for deep learning model training and verification; the trained model is then sent back through the Internet of Things to the field workstation and on to the ground, surface, aerial, and underwater swarm robots for continuing on-site object classification. The robots continuously verify their identifications against the environment to make the best response decisions. Related tasks include monitoring and target search and rescue.
Willcox, G., Rosenberg, L., Domnauer, C..  2020.  Analysis of Human Behaviors in Real-Time Swarms. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0104–0109.
Many species reach group decisions by deliberating in real-time systems. This natural process, known as Swarm Intelligence (SI), has been studied extensively in a range of social organisms, from schools of fish to swarms of bees. A new technique called Artificial Swarm Intelligence (ASI) has enabled networked human groups to reach decisions in systems modeled after natural swarms. The present research seeks to understand the behavioral dynamics of such “human swarms.” Data was collected from ten human groups, each having between 21 and 25 members. The groups were tasked with answering a set of 25 ordered ranking questions on a 1-5 scale, first independently by survey and then collaboratively as a real-time swarm. We found that groups reached significantly different answers, on average, by swarm versus survey (p = 0.02). Initially, the distribution of individual responses in each swarm differed little from the distribution of survey responses, but through the process of real-time deliberation, the swarm's average answer changed significantly. We discuss possible interpretations of this dynamic behavior. Importantly, we find that the swarm's answer is not simply the arithmetic mean of the initial individual “votes” as in a survey, suggesting that a more complex mechanism is at play, one that relies on the time-varying behaviors of the participants in swarms. Finally, we publish a set of data that enables other researchers to analyze human behaviors in real-time swarms.
Willcox, G., Rosenberg, L., Burgman, M., Marcoci, A..  2020.  Prioritizing Policy Objectives in Polarized Groups using Artificial Swarm Intelligence. 2020 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). :1–9.
Groups often struggle to reach decisions, especially when populations are strongly divided by conflicting views. Traditional methods for collective decision-making involve polling individuals and aggregating results. In recent years, a new method called Artificial Swarm Intelligence (ASI) has been developed that enables networked human groups to deliberate in real-time systems moderated by artificial intelligence algorithms. While traditional voting methods aggregate input provided by isolated participants, swarm-based methods enable participants to influence each other and converge on solutions together. In this study we compare the output of traditional methods such as Majority vote and Borda count to the Swarm method on a set of divisive policy issues. We find that the rankings generated using the ASI and Borda count methods are often rated as significantly more satisfactory than those generated by the Majority vote system (p < 0.05). This result held both for the population that generated the rankings (the “in-group”) and for the population that did not (the “out-group”): the in-group ranked the Swarm prioritizations as 9.6% more satisfactory than the Majority prioritizations, while the out-group ranked the Swarm prioritizations as 6.5% more satisfactory than the Majority prioritizations. This effect held even when the out-group was subject to a demographic sampling bias of 10% (i.e., the out-group was composed of 10% more Labour voters than the in-group). The Swarm method was the only method perceived as more satisfactory by the “out-group” than by the voting group.
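
For readers unfamiliar with the aggregation baselines, a minimal Borda count is sketched below against a first-choice plurality (one common reading of "Majority vote"; whether the paper's Majority system ranks this way is an assumption).

```python
from collections import Counter

def borda(rankings):
    # With n options, first place earns n-1 points, second n-2, ...
    scores = Counter()
    for ranking in rankings:
        n = len(ranking)
        for place, option in enumerate(ranking):
            scores[option] += n - 1 - place
    return [opt for opt, _ in scores.most_common()]

def plurality(rankings):
    # Rank options by first-choice counts only.
    return [opt for opt, _ in Counter(r[0] for r in rankings).most_common()]

votes = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"],
         ["B", "C", "A"], ["C", "B", "A"]]
print(borda(votes))      # ['B', 'C', 'A']: rewards broad second-place support
print(plurality(votes))  # ['A', 'B', 'C']: A merely ties B on first choices
```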