Biblio

Found 2387 results

Filters: Keyword is human factors
2021-12-02
Gupta, Praveen Kumar, Singh, Neeraj Kumar, Mahajan, Vasundhara.  2020.  Monitoring of Cyber Intrusion in Wireless Smart Grid Network Using Weight Reduction Technique. 2020 International Conference on Electrical and Electronics Engineering (ICE3). :136–139.
Smart grids depend heavily on Wireless Sensors (WS) for flexible monitoring and control. Sensor nodes are required to sense, collect, and process real-time data and transfer it to the monitoring stations. They are mostly deployed in extremely rural areas where human access is limited, making them vulnerable to cyber intrusion. In this paper, a simple, efficient, low-memory program is proposed to detect a False Data Injection Cyber Attack (FDICA) in very little time and thereby protect the smart grid network. Each bus of the IEEE test system is represented by a node of a connected graph with a weight equal to 1. During an FDICA the weight of the node changes, and an alarm is triggered when the changed weight drops below the predefined threshold value. MATLAB software is used to evaluate the performance of the proposed method under different conditions. Simulation results indicate that the proposed method detects the FDICA in minimal time, increasing the resilience of the smart grid.
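The abstract gives only the high-level rule (unit node weights, an alarm when a weight drops below a threshold); a minimal Python sketch of that rule, in which the graph topology, threshold value, and function names are our own illustrative choices rather than the paper's, might look like this:

```python
import networkx as nx

THRESHOLD = 0.8  # hypothetical; the paper's predefined value is not given

def build_bus_graph(edges):
    """Each bus of the test system is a node of a connected graph, weight 1."""
    g = nx.Graph(edges)
    nx.set_node_attributes(g, 1.0, "weight")
    return g

def detect_fdica(weights):
    """Alarm for every bus whose weight has dropped below the threshold."""
    return [bus for bus, w in weights.items() if w < THRESHOLD]

# Toy 4-bus ring; a false data injection at bus 2 pulls its weight down.
grid = build_bus_graph([(1, 2), (2, 3), (3, 4), (4, 1)])
weights = dict(nx.get_node_attributes(grid, "weight"))
weights[2] = 0.6  # simulated effect of the injection
print(detect_fdica(weights))  # -> [2]
```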
Rao, Poojith U., Sodhi, Balwinder, Sodhi, Ranjana.  2020.  Cyber Security Enhancement of Smart Grids Via Machine Learning - A Review. 2020 21st National Power Systems Conference (NPSC). :1–6.
The evolution of the power system into a smart grid (SG) has not only enhanced the monitoring and control capabilities of the power grid, but also raised security concerns and vulnerabilities. With the boom in the Internet of Things (IoT), a lot of sensors are being deployed across the grid, resulting in a huge amount of data available for processing and analysis. Machine learning (ML) and deep learning (DL) algorithms are widely used to extract useful information from this data. In this context, this paper presents a comprehensive literature survey of the ML and DL techniques that have been used in smart grid cyber security. The survey summarizes the types of cyber threats to which today's SGs are prone, followed by various ML- and DL-assisted defense strategies. The effectiveness of the ML-based methods in enhancing the cyber security of SGs is also demonstrated with the help of a case study.
Ravikumar, Gelli, Nicklaus, Alex, Govindarasu, Manimaran.  2020.  Cyber-Physical Smart Light Control System Integration with Smart Grid Using Zigbee. 2020 IEEE Power Energy Society Innovative Smart Grid Technologies Conference (ISGT). :1–5.
This paper presents a hardware-in-the-loop cyber-physical system architecture designed to monitor and control smart lights connected to the active distribution grid. The architecture uses Zigbee-based (IEEE 802.15.4) wireless sensor networks and a publish-subscribe architecture to exchange monitoring and control signals between smart-light actuators (SLAs) and a smart-light central controller (SLCC). Each SLA integrated into a smart light consists of a Zigbee-based endpoint module that sends and receives signals to and from the SLCC. The SLCC consists of a Zigbee-based coordinator module, which further exchanges the monitoring and control signals with the active distribution management system over the TCP/IP communication network. The monitoring signals from the SLAs include light status, brightness level, voltage, current, and power data, whereas the control signals to the SLAs include light intensity, turn ON, turn OFF, standby, and default settings. We use our existing hardware-in-the-loop (HIL) cyber-physical system (CPS) security SCADA testbed to process signals received from the SLCC and to respond with suitable control signals based on the smart-light schedule requirements, system operation, and the dynamic characteristics of the active distribution grid. We have integrated the proposed cyber-physical smart light control system (CPSLCS) testbed with our existing HIL CPS SCADA testbed, and we use the integrated testbed to demonstrate the efficacy of the proposed algorithm through the real-time performance and latency between the SLCC and SLAs. The experiments demonstrated 100% real-time performance and low latency while exchanging data between the SLCC and SLAs.
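The abstract names the publish-subscribe pattern but not the middleware, so the following in-process Python sketch only shows the shape of the SLA/SLCC exchange; the topic layout, field names, and Broker class are invented for illustration, and a real deployment would ride on Zigbee plus a message broker:

```python
from collections import defaultdict
import json

class Broker:
    """In-process stand-in for the publish-subscribe layer between SLAs and the SLCC."""
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)
    def publish(self, topic, payload):
        for handler in self.subs[topic]:
            handler(json.loads(json.dumps(payload)))  # simulate serialization

broker = Broker()

# SLCC subscribes to monitoring data and answers with a control signal.
def slcc_on_monitoring(msg):
    print("SLCC received:", msg)
    broker.publish("slcc/control/sla42", {"light": "on", "intensity": 60})

broker.subscribe("sla42/monitoring", slcc_on_monitoring)
broker.subscribe("slcc/control/sla42", lambda cmd: print("SLA applies:", cmd))

# The SLA publishes the monitoring fields listed in the abstract.
broker.publish("sla42/monitoring", {"light": "on", "brightness": 80,
                                    "voltage": 229.8, "current": 0.41,
                                    "power": 94.2})
```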
2021-11-29
Albó, Laia, Beardsley, Marc, Amarasinghe, Ishari, Hernández-Leo, Davinia.  2020.  Individual versus Computer-Supported Collaborative Self-Explanations: How Do Their Writing Analytics Differ? 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT). :132–134.
Researchers have demonstrated the effectiveness of self-explanations (SE) as an instructional practice and study strategy. However, there is a lack of work studying the characteristics of SE responses prompted by collaborative activities. In this paper, we use writing analytics to investigate differences between SE text responses resulting from individual versus collaborative learning activities. A Coh-Metrix analysis suggests that students in the collaborative SE activity demonstrated a higher level of comprehension. Future research should explore how writing analytics can be incorporated into CSCL systems to support student performance of SE activities.
Yin, Yifei, Zulkernine, Farhana, Dahan, Samuel.  2020.  Determining Worker Type from Legal Text Data Using Machine Learning. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :444–450.
This project addresses a classic employment-law question in Canada and elsewhere using a machine learning approach: how do we know whether a worker is an employee or an independent contractor? This is a central issue for self-represented litigants insofar as these two legal categories entail very different rights and employment protections. In this interdisciplinary research study, we collaborated with the Conflict Analytics Lab to develop machine learning models aimed at determining whether a worker is an employee or an independent contractor. We present a number of supervised learning models, including a neural network model, implemented using data labeled by law researchers, and we compare the accuracy of the models. Our neural network model achieved an accuracy rate of 91.5%. A critical discussion follows to identify the key features in the data that influence the accuracy of our models and to provide insights about the case outcomes.
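The abstract does not specify the text features or network architecture; as one plausible reading, a binary text classifier over labeled case excerpts can be sketched with scikit-learn (the example texts, labels, and pipeline choices below are all hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical labeled case excerpts: 1 = employee, 0 = independent contractor.
texts = ["worked fixed hours under the firm's direct supervision",
         "was paid a salary with benefits and used company equipment",
         "supplied own tools and invoiced the client per project",
         "was free to take on other clients and set own schedule"]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a small neural network classifier.
model = make_pipeline(TfidfVectorizer(),
                      MLPClassifier(max_iter=800, random_state=0))
model.fit(texts, labels)
print(model.predict(["hired own subcontractors and billed per deliverable"]))
```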
Hu, Shengze, He, Chunhui, Ge, Bin, Liu, Fang.  2020.  Enhanced Word Embedding Method in Text Classification. 2020 6th International Conference on Big Data and Information Analytics (BigDIA). :18–22.
For natural language processing (NLP) tasks, word embedding technology has a considerable impact on the accuracy of deep neural network algorithms. Since current word embedding methods cannot realize the coexistence of words and phrases in the same vector space, we propose an enhanced word embedding (EWE) method. Before the word embedding is computed, this method introduces a unique sentence reorganization technique to rewrite all the sentences in the original training corpus. Then the original corpus and the reorganized corpus are merged together as the training corpus of the distributed word embedding model, so as to realize the coexistence of words and phrases in the same vector space. We carried out experiments to demonstrate the effectiveness of the EWE algorithm on three classic benchmark datasets. The results show that the EWE method can significantly improve the classification performance of the CNN model.
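The abstract leaves the reorganization step abstract; one plausible reading (and the assumption behind this gensim sketch, not the paper's confirmed procedure) is that detected phrases are joined into single tokens and the rewritten sentences are appended to the original corpus before training a single embedding space:

```python
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

original = [["new", "york", "is", "large"],
            ["machine", "learning", "is", "fun"]] * 50  # toy corpus

# "Reorganize": rewrite sentences with detected phrases joined ("new_york").
phraser = Phraser(Phrases(original, min_count=1, threshold=0.5, scoring="npmi"))
reorganized = [phraser[s] for s in original]

# Train one model over the merged corpus so words and phrases coexist
# in the same vector space.
model = Word2Vec(original + reorganized, vector_size=50, min_count=1, epochs=20)
print(model.wv.most_similar("new_york"))
```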
Jamieson, Laura, Moreno-Garcia, Carlos Francisco, Elyan, Eyad.  2020.  Deep Learning for Text Detection and Recognition in Complex Engineering Diagrams. 2020 International Joint Conference on Neural Networks (IJCNN). :1–7.
Engineering drawings such as Piping and Instrumentation Diagrams contain a vast amount of text data which is essential to identify shapes, pipeline activities, tags, amongst others. These diagrams are often stored in an undigitised format, such as paper copy, meaning the information contained within them is not readily accessible for inspection and further data analytics. In this paper, we make use of recent deep learning advances by selecting models for both text detection and text recognition, and apply them to the digitisation of text within real-world complex engineering diagrams. Results show that 90% of text strings were detected, including vertical text strings; however, certain non-text diagram elements were also detected as text. Text strings were obtained by the text recognition method for 86% of detected text instances. The findings show that whilst the chosen deep learning methods were able to detect and recognise text in simple scenarios, more complex representations of text, including text strings located in close proximity to other drawing elements, remain a challenge.
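The abstract does not name the detection and recognition networks; purely as a stand-in, a joint detection-plus-recognition pass over a scanned diagram can be sketched with pytesseract (an off-the-shelf OCR wrapper, not the paper's models), with a confidence filter playing the role of rejecting the non-text elements the abstract mentions:

```python
import pytesseract
from PIL import Image

# Run joint text detection + recognition over a scanned diagram page.
data = pytesseract.image_to_data(Image.open("pid_diagram.png"),
                                 output_type=pytesseract.Output.DICT)

# Keep confident detections only; low-confidence boxes are often
# non-text diagram elements, the failure mode the abstract describes.
for text, conf, x, y in zip(data["text"], data["conf"],
                            data["left"], data["top"]):
    if text.strip() and float(conf) > 60:
        print(f"({x},{y}): {text}")
```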
Piazza, Nancirose.  2020.  Classification Between Machine Translated Text and Original Text By Part Of Speech Tagging Representation. 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA). :739–740.
Classifiers distinguishing machine-translated text from original text are often tokenized over the vocabulary of the corpora. With N-grams larger than a uni-gram, one can create a model that estimates a decision boundary based on the word-frequency probability distribution; however, this approach is exponentially expensive because of high dimensionality and sparsity. Instead, we let samples of the corpora be represented by part-of-speech tags, a significantly smaller vocabulary. With fewer trigram permutations, we can create a model from the tri-gram frequency probability distribution. In this paper, we explore less conventional ways of approaching techniques for handling documents, dictionaries, and the like.
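A small sketch of the core idea, mapping each token to its POS tag and classifying on tag trigram frequencies; the NLTK tagger, toy documents, and Naive Bayes classifier are our stand-ins, not the paper's exact pipeline:

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# One-time downloads (resource names vary across NLTK versions).
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

def to_pos_string(text):
    """Replace each token with its POS tag, shrinking the vocabulary."""
    return " ".join(t for _, t in nltk.pos_tag(nltk.word_tokenize(text)))

docs = ["The cat sat on the mat.",          # toy "original" sample
        "On mat the sitting cat is."]       # toy "machine-translated" sample
labels = ["original", "translated"]

# Classify on the tri-gram frequency distribution of POS tags.
model = make_pipeline(CountVectorizer(ngram_range=(3, 3)), MultinomialNB())
model.fit([to_pos_string(d) for d in docs], labels)
print(model.predict([to_pos_string("The dog lay on the rug.")]))
```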
Zhang, Qiang, Chai, Bo, Song, Bochuan, Zhao, Jingpeng.  2020.  A Hierarchical Fine-Tuning Based Approach for Multi-Label Text Classification. 2020 IEEE 5th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA). :51–54.
Hierarchical text classification has recently become increasingly challenging with the growing number of classification labels. In this paper, we propose a hierarchical fine-tuning based approach to hierarchical text classification. We use the ordered neurons LSTM (ONLSTM) model, combining the embeddings of the text and its parent category, for hierarchical text classification with a large number of categories; this makes full use of the connection between upper-level and lower-level labels. Extensive experiments show that our model outperforms the state-of-the-art hierarchical model at a lower computational cost.
Baker, Oras, Thien, Chuong Nguyen.  2020.  A New Approach to Use Big Data Tools to Substitute Unstructured Data Warehouse. 2020 IEEE Conference on Big Data and Analytics (ICBDA). :26–31.
Data warehouses and big data have become the trend for organising data effectively. Business data originate from various kinds of sources and in different forms, from conventional structured data to unstructured data, and are the input for producing information essential to business sustainability. This research navigates the complicated designs of common big data and data warehousing technologies to propose an effective approach to using them for designing and building an unstructured textual data warehouse, a crucial tool for most enterprises nowadays for decision making and gaining competitive advantage. In this research we utilised the IBM BigInsights Text Analytics, PostgreSQL, and Pentaho tools; an unstructured data warehouse was implemented and worked excellently with the unstructured text from Amazon review datasets. The proposed approach creates a practical solution for building an unstructured data warehouse.
Gupta, Hritvik, Patel, Mayank.  2020.  Study of Extractive Text Summarizer Using The Elmo Embedding. 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :829–834.
In recent times, data excessiveness has become a major problem in the fields of education, news, blogs, social media, etc. With such a vast amount of text data, it is challenging for a human to extract only the valuable parts in a concise form. Summarizing the text enables humans to retrieve the relevant and useful portions: text summarization extracts data from a document and generates a short or concise version of it. One widely used approach is the automatic text summarizer, which analyzes large textual data and condenses it into short summaries containing the valuable information of the data. Automatic text summarizers are divided into two types: 1) extractive text summarizers and 2) abstractive text summarizers. This article considers the extractive approach, in which the model generates a concise summary of the text by picking the most relevant sentences from the text document. The paper focuses on retrieving the valuable content using ELMo embeddings in extractive text summarization. ELMo is a contextual embedding that has previously been used by many researchers in abstractive text summarization techniques, but this paper focuses on using it in an extractive text summarizer.
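The abstract does not give the sentence-scoring rule, so the sketch below uses one common extractive heuristic (cosine similarity to the document centroid) under the assumption that ELMo supplies the sentence vectors; the `embed` function here is an explicit placeholder, not ELMo:

```python
import hashlib
import numpy as np

def embed(sentence):
    """Placeholder sentence encoder: the paper plugs in ELMo here
    (e.g., mean-pooled contextual token vectors)."""
    seed = int(hashlib.sha256(sentence.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(128)

def extractive_summary(sentences, k=2):
    """Score sentences by cosine similarity to the document centroid
    and keep the top-k, in their original order."""
    vecs = np.stack([embed(s) for s in sentences])
    centroid = vecs.mean(axis=0)
    scores = vecs @ centroid / (np.linalg.norm(vecs, axis=1)
                                * np.linalg.norm(centroid))
    top = sorted(np.argsort(scores)[-k:])
    return [sentences[i] for i in top]

doc = ["Sentence one.", "Sentence two.", "Sentence three.", "Sentence four."]
print(extractive_summary(doc))
```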
Nazemi, Kawa, Klepsch, Maike J., Burkhardt, Dirk, Kaupp, Lukas.  2020.  Comparison of Full-Text Articles and Abstracts for Visual Trend Analytics through Natural Language Processing. 2020 24th International Conference Information Visualisation (IV). :360–367.
Scientific publications are an essential resource for detecting emerging trends and innovations at a very early stage, far earlier than patents allow. Visual Analytics systems enable a deep analysis by applying commonly unsupervised machine learning methods to a mass of data. A main question from the Visual Analytics viewpoint in this context is: do abstracts of scientific publications provide analysis capability similar to their corresponding full texts? If so, a mass of text documents could be processed much faster. In this paper we compare the topic extraction methods LSI and LDA on full-text articles and their corresponding abstracts to determine which method and which data are better suited for a Visual Analytics system for Technology and Corporate Foresight. Based on an easily replicable natural language processing approach, we further investigate the impact of lemmatization on LDA and LSI. The comparison is performed both qualitatively and quantitatively, to gather both the human perception in visual systems and coherence values. A visual trend analytics system illustrates the outcomes in an application scenario.
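A minimal gensim sketch of the quantitative side of such a comparison, fitting LSI and LDA on the same bag-of-words corpus and scoring both with c_v coherence; the toy documents and topic count are placeholders for the paper's abstracts/full texts:

```python
from gensim import corpora
from gensim.models import LdaModel, LsiModel, CoherenceModel

# Toy tokenized documents standing in for abstracts or full texts.
texts = [["graph", "trend", "analysis"], ["topic", "model", "trend"],
         ["visual", "analytics", "system"], ["topic", "graph", "model"]]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)
lsi = LsiModel(bow, num_topics=2, id2word=dictionary)

# c_v coherence: one quantitative criterion mentioned in the abstract.
for name, model in [("LDA", lda), ("LSI", lsi)]:
    c = CoherenceModel(model=model, texts=texts, dictionary=dictionary,
                       coherence="c_v").get_coherence()
    print(name, round(c, 3))
```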
Somsakul, Supawit, Prom-on, Santitham.  2020.  On the Network and Topological Analyses of Legal Documents Using Text Mining Approach. 2020 1st International Conference on Big Data Analytics and Practices (IBDAP). :1–6.
This paper presents a computational study of Thai legal documents using text mining and a network analytic approach. The Thai legal system relies heavily on existing judicial rulings; legal documents therefore contain complex relationships and require careful examination. The objective of this study is to use text mining to model the relationships between these legal documents and draw useful insights. As a result of the study, a structure of document relationships was found in the form of a network reflecting the meaningful relations between legal documents. This can potentially be developed further into a document retrieval system based on how documents are related in the network.
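Once relations are mined from the texts, the network analysis itself is straightforward; a hedged networkx sketch (the citation edges and the PageRank ranking are our illustrative choices, not the paper's reported pipeline) shows how central rulings could be surfaced for retrieval:

```python
import networkx as nx

# Hypothetical relation edges mined from the texts: ruling A cites ruling B.
citations = [("case_101", "case_042"), ("case_102", "case_042"),
             ("case_103", "case_101"), ("case_103", "case_042")]

g = nx.DiGraph(citations)

# Rank documents by how often they are relied upon; a retrieval system
# could surface highly central rulings first.
rank = nx.pagerank(g)
for doc, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(doc, round(score, 3))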
Yau, Stephen S., Patel, Jinal S..  2020.  A Blockchain-Based Testing Approach for Collaborative Software Development. 2020 IEEE International Conference on Blockchain (Blockchain). :98–105.
Development of large-scale and complex software systems requires multiple teams, including software development teams, domain experts, user representatives, and other project stakeholders, to work collaboratively to achieve the software development goals. These teams rely on agreed software development processes, knowledge management tools, and communication channels throughout the project. Software testing is an important and complicated process, owing to the difficulty of achieving testing goals within the given time constraints, the absence of efficient data-sharing policies, vague testing acceptance criteria at the various levels of testing, and a lack of trusted coordination among the teams involved. The efficiency of software testing relies on efficient, reliable, and trusted information sharing among these teams. Existing approaches to software testing for collaborative software development use centralized or decentralized tools for testing, knowledge management, and communication. They suffer from centralized authority, a single point of failure or compromise, a lack of automatic requirement-compliance checking and of transparency in information sharing, and the absence of a unified data-sharing policy and of reliable knowledge management repositories for sharing and storing past software testing artifacts and data. In this paper, a software testing approach for collaborative software development using a private blockchain is presented; the desirable properties of a private blockchain, such as distributed data management, tamper-resistance, auditability, and automatic requirement-compliance checking, are incorporated to greatly improve the quality of software testing for collaborative software development.
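A private blockchain brings far more than this (consensus, smart-contract compliance checks, access policy); the toy hash chain below only illustrates the tamper-evidence and auditability properties the abstract leans on, with the artifact fields invented for illustration:

```python
import hashlib, json, time

def recompute(block):
    """Hash of a block's contents, excluding its stored hash."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(artifact, prev_hash):
    """Append a test artifact (report, logs, sign-offs) to a tamper-evident chain."""
    block = {"timestamp": time.time(), "artifact": artifact, "prev": prev_hash}
    block["hash"] = recompute(block)
    return block

chain = [make_block({"suite": "genesis"}, "0" * 64)]
chain.append(make_block({"suite": "login-tests", "passed": 41, "failed": 1},
                        chain[-1]["hash"]))

# Auditability: editing any earlier artifact invalidates the chain.
def verify(chain):
    return (all(b["hash"] == recompute(b) for b in chain)
            and all(b["prev"] == p["hash"] for p, b in zip(chain, chain[1:])))

print(verify(chain))  # True until any block is altered
```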
Fujita, Kentaro, Zhang, Yuanyu, Sasabe, Masahiro, Kasahara, Shoji.  2020.  Mining Pool Selection Problem in the Presence of Block Withholding Attack. 2020 IEEE International Conference on Blockchain (Blockchain). :321–326.
Mining, the process where multiple miners compete to add blocks to Proof-of-Work (PoW) blockchains, is of great importance to maintain the tamper-resistance feature of blockchains. In current blockchain networks, miners usually form groups, called mining pools, to improve their revenues. When multiple pools exist, a fundamental mining pool selection problem arises: which pool should each miner join to maximize its revenue? In addition, the existence of mining pools also leads to another critical issue, i.e., Block WithHolding (BWH) attack, where a pool sends some of its miners as spies to another pool to gain extra revenues without contributing to the mining of the infiltrated pool. This paper therefore aims to investigate the mining pool selection issue (i.e., the stable population distribution of miners in the pools) in the presence of BWH attack from the perspective of evolutionary game theory. We first derive the expected revenue density of each pool to determine the expected payoff of miners in that pool. Based on the expected payoffs, we formulate replicator dynamics to represent the growth rates of the populations in all pools. Using the replicator dynamics, we obtain the rest points of the growth rates and discuss their stability to identify the Evolutionarily Stable States (ESSs) (i.e., stable population distributions) of the game. Simulation and numerical results are also provided to corroborate our analysis and to illustrate the theoretical findings.
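The replicator dynamics the abstract formulates grow each pool's population share in proportion to how its payoff compares with the average; a numpy sketch of that update (with toy crowding-based payoffs in place of the paper's closed-form revenue densities under BWH attack) is:

```python
import numpy as np

REWARD = np.array([1.0, 1.2, 0.9])  # hypothetical per-pool reward rates

def payoffs(x):
    """Toy expected payoff per miner: a pool's reward is shared among its
    members, so crowding lowers the payoff. The paper derives these revenue
    densities in closed form, including the effect of BWH spies."""
    return REWARD / (0.2 + x)

x = np.array([1 / 3] * 3)                # initial population distribution
for _ in range(20000):
    f = payoffs(x)
    x = x + 0.001 * x * (f - x @ f)      # replicator dynamics step
print(x.round(3))                        # rest point ~ the stable distribution (ESS)
```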
N, Sivaselvan, Bhat K, Vivekananda, Rajarajan, Muttukrishnan.  2020.  Blockchain-Based Scheme for Authentication and Capability-Based Access Control in IoT Environment. 2020 11th IEEE Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :0323–0330.
Authentication and access control techniques are fundamental security elements for restricting access to critical resources in an IoT environment. In the current state-of-the-art approaches in the literature, the architectures do not address the security features of authentication and access control together. Moreover, they do not completely fulfill the key Internet-of-Things (IoT) requirements of usability, scalability, interoperability, and security. In this paper, we introduce a novel blockchain-based architecture for authentication and capability-based access control in an IoT environment. A capability is a token which contains the access rights authorized to the device holding it. The architecture uses blockchain technology to carry out all the operations in the scheme. It does not embed blockchain technology into the resource-constrained IoT devices themselves for the purpose of authentication and access control; instead, the IoT devices and the blockchain are connected by means of interfaces through which the essential communications are established. The authenticity of such interfaces is verified before any communication is made. Consequently, the architecture satisfies the usability, scalability, interoperability, and security requirements. We carried out a security evaluation of the scheme. It exhibits strong resistance to threats such as spoofing, tampering, repudiation, information disclosure, and Denial-of-Service (DoS). We also developed a proof-of-concept implementation in which the cost and storage overhead of blockchain transactions are studied.
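The abstract defines a capability as a token carrying the holder's access rights; a minimal sketch of issuing and checking such a token with an HMAC (our simplification — the paper anchors issuance and verification on the blockchain, and the key, fields, and rights strings below are hypothetical) could look like:

```python
import hmac, hashlib, json, time

SECRET = b"issuer-signing-key"  # hypothetical; the paper's trust anchor is on-chain

def issue_capability(device_id, rights, ttl=3600):
    """A capability token carries the access rights granted to its holder."""
    token = {"sub": device_id, "rights": rights, "exp": time.time() + ttl}
    body = json.dumps(token, sort_keys=True).encode()
    return token, hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def check(token, mac, action):
    """Verify integrity, expiry, and that the requested action is granted."""
    body = json.dumps(token, sort_keys=True).encode()
    ok = hmac.compare_digest(mac, hmac.new(SECRET, body,
                                           hashlib.sha256).hexdigest())
    return ok and time.time() < token["exp"] and action in token["rights"]

tok, mac = issue_capability("sensor-17", ["read:temperature"])
print(check(tok, mac, "read:temperature"))  # True
print(check(tok, mac, "write:config"))      # False
```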
Takemoto, Shu, Shibagaki, Kazuya, Nozaki, Yusuke, Yoshikawa, Masaya.  2020.  Deep Learning Based Attack for AI Oriented Authentication Module. 2020 35th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC). :5–8.
A Neural Network Physical Unclonable Function (NN-PUF) has been proposed for the secure implementation of edge AI. This study evaluates the tamper resistance of the NN-PUF against machine learning attacks. The machine learning attack in this study learns CRPs (challenge-response pairs) using deep learning. In the evaluation experiment, the machine learning attack predicted about 82% of the CRPs. This study therefore reveals that the NN-PUF is vulnerable to machine learning attacks.
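The general shape of such a modeling attack — train on observed CRPs, predict unseen responses — can be sketched in a few lines; the simulated PUF below is a classic linear arbiter-PUF approximation, not the NN-PUF, and logistic regression stands in for the paper's deep learning attacker:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated PUF: the response is the sign of a hidden linear function of the
# challenge features (an arbiter-PUF approximation, not the NN-PUF itself).
n_bits, n_crps = 32, 4000
w = rng.standard_normal(n_bits)
challenges = rng.choice([-1.0, 1.0], size=(n_crps, n_bits))
responses = (challenges @ w > 0).astype(int)

# The attacker trains on observed CRPs and predicts unseen responses.
train, test = slice(0, 3000), slice(3000, None)
clf = LogisticRegression(max_iter=1000).fit(challenges[train], responses[train])
print(clf.score(challenges[test], responses[test]))  # high accuracy = weak PUF
```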
Gwee, Bah-Hwee.  2020.  Hardware Attack and Assurance with Machine Learning: A Security Threat to Circuits and Systems. 2020 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS). :i–i.
Summary form only given, as follows. The complete presentation was not made available for publication as part of the conference proceedings. Banking, defence applications, and cryptosystems often demand security features, including cryptography, tamper resistance, stealth, etc., by means of hardware and/or software approaches to prevent data leakage. Hardware physical attacks, commonly known as side-channel attacks, have been employed to extract the secret keys of encryption algorithms implemented in hardware devices by analyzing physical parameters such as power dissipation, electromagnetic interference, and timing information. Altered functions or unauthorized modules may be added to a circuit design during the shipping and manufacturing process, introducing security threats to the deployed systems. In this presentation, we discuss hardware assurance at both the device level and the circuit level, and present how machine learning techniques can be utilized. At the device level, we first provide an overview of the different cryptography algorithms and present the side-channel attacks, particularly the powerful Correlation Power Analysis (CPA) and Correlation Electromagnetic Analysis (CEMA), which use a leakage model to reveal the secret keys of cryptosystems. We then discuss several countermeasure techniques and present how highly secure microchips can be designed based on them. At the circuit level, we provide an overview of manufactured IC circuit analysis through invasive IC delayering and imaging. We then present several machine learning techniques that can be efficiently applied to the retrieval of circuit contact points and connections for further netlist/functional analysis.
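The CPA the speaker highlights correlates a hypothetical leakage model against measured traces for every key guess; a self-contained numpy simulation of that core loop (using the 4-bit PRESENT S-box and a Hamming-weight leakage model for compactness — both our choices, not the talk's target cipher) is:

```python
import numpy as np

rng = np.random.default_rng(1)
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])  # PRESENT 4-bit S-box
HW = np.array([bin(v).count("1") for v in range(16)])       # Hamming weights

# Simulated device: the power trace leaks HW(sbox(pt ^ key)) plus noise.
true_key, n = 0x9, 500
pt = rng.integers(0, 16, n)
traces = HW[SBOX[pt ^ true_key]] + rng.standard_normal(n) * 0.5

# CPA: correlate the traces with the leakage model under every key guess;
# the correct guess produces the strongest correlation.
corr = [abs(np.corrcoef(HW[SBOX[pt ^ g]], traces)[0, 1]) for g in range(16)]
print("recovered key:", int(np.argmax(corr)))  # -> 9
```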
Zhang, Song, Li, Yang, Li, Gaoyang, Yu, Han, Hao, Baozhong, Song, Jinwei, Fan, Jingang.  2020.  An Improved Data Provenance Framework Integrating Blockchain and PROV Model. 2020 International Conference on Computer Science and Management Technology (ICCSMT). :323–327.
Data tracing is an important topic in the era of the digital economy, in which data are considered one of the core factors in economic activities. However, current data traceability systems often fail to earn public trust because of their centralization and opaqueness. Blockchain possesses inherent technical features such as tamper resistance, anonymity, and encryption security, and shows great potential for improving the credibility of data tracing. In this paper, we propose a blockchain-PROV-based multi-center data provenance solution in which the PROV model standardizes the record storage and provenance of data on the blockchain automatically and intelligently. The solution improves the transparency and credibility of the provenance data, helping the efficient control and open sharing of data assets.
Gao, Yang, Wu, Weniun, Dong, Junyu, Yin, Yufeng, Si, Pengbo.  2020.  Deep Reinforcement Learning Based Node Pairing Scheme in Edge-Chain for IoT Applications. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–6.
Nowadays, the Internet of Things (IoT) plays an important role in our lives. It inevitably generates massive data and requires more secure transmission. As blockchain technology can build trust in a distributed environment and ensure data traceability and tamper resistance, it is a promising way to support IoT data transmission and sharing. In this paper, edge computing is considered to provide adequate resources for end users to offload computing tasks in the blockchain-enabled IoT system, and the node pairing problem between end users and edge computing servers is studied with consideration of the wireless channel quality and the quality of service. From the perspective of the end users, the objective is to maximize the profits and minimize the payments for completing the tasks while respecting the resource limits of the edge servers. A deep reinforcement learning (DRL) method is utilized to train an intelligent strategy, and a policy-gradient-based node pairing (PG-NP) algorithm is proposed. Through a deep neural network, the well-trained policy maps the system states to the optimal actions. The REINFORCE algorithm with a baseline is applied to train the policy network. In the experiments, against the comparison strategies max-credit, max-SINR, random, and max-resource, the PG-NP algorithm performs about 57% better than the second-best method. Testing results also show that PG-NP has a good generalization ability, which is negatively correlated with the training performance to a certain extent.
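The abstract names the training rule (REINFORCE with a baseline) but not the network details; a minimal PyTorch sketch of that update, with the state and action dimensions and the constant baseline chosen purely for illustration, might read:

```python
import torch
import torch.nn as nn

# Minimal REINFORCE-with-baseline update for a pairing policy: the state is a
# feature vector (e.g., channel quality, server load) and the action picks
# one of four hypothetical edge servers.
policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def update(states, actions, returns):
    """states: [T, 8], actions: [T], returns: [T] episode returns."""
    logits = policy(states)
    logp = torch.distributions.Categorical(logits=logits).log_prob(actions)
    baseline = returns.mean()                 # simple constant baseline
    loss = -(logp * (returns - baseline)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# One dummy batch: 16 steps with random environment stand-ins.
update(torch.randn(16, 8), torch.randint(0, 4, (16,)), torch.randn(16))
```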
Hou, Xiaolu, Breier, Jakub, Jap, Dirmanto, Ma, Lei, Bhasin, Shivam, Liu, Yang.  2020.  Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1–6.
Deep learning is becoming a basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented in an embedded device by using laser fault injection, a physical attack technique mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs using several different attack strategies to show which are the most efficient in terms of misclassification success rates. The outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with the environmental parameters and bring the device into unstable conditions, resulting in faults.
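The effect being measured — a fault in a hidden-layer activation flipping the predicted class — can be simulated in software; the tiny random network and stuck-at-high fault model below are our illustrative stand-ins for the paper's laser experiments on real hardware:

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(x, W1, W2, fault_neuron=None):
    """Tiny 2-layer net; optionally force one hidden ReLU output to a faulty
    value, mimicking a fault injected into the hidden layer."""
    h = np.maximum(0, W1 @ x)          # ReLU hidden layer
    if fault_neuron is not None:
        h[fault_neuron] = 100.0        # stuck-at-high fault (illustrative)
    return np.argmax(W2 @ h)

W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((3, 16))
x = rng.standard_normal(8)

clean = forward(x, W1, W2)
flips = sum(forward(x, W1, W2, fault_neuron=i) != clean for i in range(16))
print(f"{flips}/16 single-neuron faults change the prediction")
```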
Ferdous Khan, M. Fahim, Sakamura, Ken.  2020.  A Context-Policy-Based Approach to Access Control for Healthcare Data Protection. 2020 International Computer Symposium (ICS). :420–425.
Fueled by the emergence of IoT-enabled medical sensors and big data analytics, nations all over the world are widely adopting digitalization of healthcare systems. This is certainly a positive trend for improving the entire spectrum of quality of care, but this convenience also poses a huge challenge to the security of healthcare data. For ensuring the privacy and protection of healthcare data, access control is regarded as one of the first-line-of-defense mechanisms. As none of the traditional enterprise access control models can completely cater to the needs of the healthcare domain, which involves a myriad of contexts, we present a context-policy-based access control scheme in this paper. Our scheme relies on the eTRON cybersecurity architecture for tamper resistance and cryptographic functions, and leverages a context-specific blend of the classical discretionary and role-based access models for incorporation into legacy systems. Moreover, our scheme adheres to key recommendations of prominent statutory and technical guidelines, including HIPAA and HL7. The protocols involved in the proposed access control system are delineated, and a proof-of-concept implementation has been carried out, along with a comparison with other systems, which clearly suggests that our approach is more responsive to different contexts for protecting healthcare data.
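To make the "context-specific blend" concrete, a toy policy check can gate a role-based rule on the current context; the roles, actions, and emergency-override rule below are hypothetical examples, not the paper's eTRON-backed protocol:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """One rule: which roles may perform which actions, in which contexts."""
    roles: set
    actions: set
    contexts: set = field(default_factory=lambda: {"normal"})

# Hypothetical rules: nurses and doctors read vitals in normal care; in an
# emergency context the same record also opens to paramedics.
policies = [Policy({"nurse", "doctor"}, {"read:vitals"}),
            Policy({"doctor"}, {"write:prescription"}),
            Policy({"nurse", "doctor", "paramedic"}, {"read:vitals"},
                   {"emergency"})]

def allowed(role, action, context):
    return any(role in p.roles and action in p.actions and context in p.contexts
               for p in policies)

print(allowed("nurse", "read:vitals", "normal"))         # True
print(allowed("paramedic", "read:vitals", "normal"))     # False
print(allowed("paramedic", "read:vitals", "emergency"))  # True
```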
Egorova, Anna, Fedoseev, Victor.  2020.  An ROI-Based Watermarking Technique for Image Content Recovery Robust Against JPEG. 2020 International Conference on Information Technology and Nanotechnology (ITNT). :1–6.
The paper proposes a method for image content recovery based on digital watermarking. Existing image watermarking systems detect tampering and can identify the exact positions of tampered regions, but only a few systems can recover the original image content. In this paper, we suggest a method for recovering the regions of interest (ROIs). It embeds a semi-fragile watermark that is resistant to JPEG compression (for quality parameter values greater than or equal to a predefined threshold) and to such local tamperings as splicing, copy-move, and retouching, but is destroyed by any other image modification. In the experimental part, the performance of the method is shown on road-traffic JPEG images where the ROIs correspond to car license plates. The method is proven to be an efficient tool for recovering the original ROIs and can be integrated into any JPEG semi-fragile watermarking system.
Sagar, Subhash, Mahmood, Adnan, Sheng, Quan Z., Zhang, Wei Emma.  2020.  Trust Computational Heuristic for Social Internet of Things: A Machine Learning-Based Approach. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
The Internet of Things (IoT) is an evolving network of billions of interconnected physical objects, such as sensors, smartphones, wearables, and embedded devices. These physical objects, generally referred to as smart objects, when deployed in the real world aggregate useful information from their surrounding environment. Of late, this notion of the IoT has been extended to incorporate social networking facets, which has led to the promising paradigm of the `Social Internet of Things' (SIoT). In the SIoT, devices operate as autonomous agents and provide intelligent information exchange and service discovery by establishing social relationships among themselves with respect to their owners. Trust plays an important role in establishing trustworthy relationships among the physical objects and reduces probable risks in the decision-making process. In this paper, a trust computational model is proposed to extract individual trust features in a SIoT environment. Furthermore, a machine learning-based heuristic is used to aggregate all the trust features in order to ascertain an aggregate trust score. Simulation results illustrate that the proposed trust-based model isolates the trustworthy and untrustworthy nodes within the network in an efficient manner.
Gao, Hongjun, Liu, Youbo, Liu, Zhenyu, Xu, Song, Wang, Renjun, Xiang, Enmin, Yang, Jie, Qi, Mohan, Zhao, Yinbo, Pan, Hongjin, et al.  2020.  Optimal Planning of Distribution Network Based on K-Means Clustering. 2020 IEEE 4th Conference on Energy Internet and Energy System Integration (EI2). :2135–2139.
The reform of electricity marketization has bred multiple market agents. To maximize the total social benefit while ensuring the security of the system and taking into account the interests of multiple market agents, a bi-level optimal allocation model of the distribution network with multiple participating agents is proposed. The upper-level model considers the economic benefits of the energy and service providers, mainly distributed-generation investors, energy storage operators, and distribution companies. The lower-level model considers the end-user-side economy and active response to demand management, to ensure the highest user satisfaction. A K-means multi-scenario analysis method is used to describe the time-series characteristics of wind power, photovoltaic power, and load. The particle swarm optimization (PSO) algorithm is used to solve the bi-level model, and the IEEE 33-node system is used to verify that the model can effectively consider the interests of multiple agents while ensuring the security of the system.
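The K-means step the abstract describes reduces a year of wind/PV/load time series to a few representative scenarios with probabilities, which the bi-level model can then optimize over; a scikit-learn sketch of that reduction (with synthetic daily profiles standing in for the real data) is:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# 365 daily profiles x 24 hours for one source (e.g., wind output), toy data.
profiles = np.clip(rng.normal(0.5, 0.2, (365, 24)), 0, 1)

# K-means reduces the year to k representative scenarios with weights.
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
scenarios = km.cluster_centers_                         # representative days
probs = np.bincount(km.labels_, minlength=k) / len(profiles)
print(probs.round(3))  # scenario probabilities fed to the bi-level model
```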