Biblio

Found 958 results

Filters: First Letter Of Last Name is X
2020-12-11
Huang, N., Xu, M., Zheng, N., Qiao, T., Choo, K. R..  2019.  Deep Android Malware Classification with API-Based Feature Graph. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :296—303.

The rapid growth of Android malware poses a serious security threat to users, making effective detection both important and urgent. Moreover, the steady appearance of unknown malware and evasion techniques calls for novel detection methods. In this paper, we focus on API features and develop a novel method to detect Android malware. First, we propose a selection method for API features related to the malware class. However, such APIs also have legitimate uses in benign apps, which causes false positives (benign apps misclassified as malware). Second, we further explore the structural relationships between these APIs and map them to a matrix interpreted as a hand-refined API-based feature graph. Third, a CNN-based classifier is developed for classifying the API-based feature graph. Evaluation on a real-world dataset containing 3,697 malware apps and 3,312 benign apps demonstrates that the selected API features are effective for Android malware classification: the top 20 APIs alone achieve a high F1 of 94.3% under a Random Forest classifier. When few API features are available, the classification performance, including the false-positive rate, can be effectively improved by the further steps of our method.
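
A minimal sketch of the general idea in this abstract, not the paper's exact pipeline: rank API features by their relevance to the malware label and build a small co-occurrence matrix over the selected APIs that a CNN (or, as in the reported baseline, a Random Forest) could consume. The function name, the mutual-information criterion and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def api_feature_graph(X, y, k=20):
    """X: (n_apps, n_apis) binary API-usage matrix; y: 0 = benign, 1 = malware.
    Returns indices of the top-k APIs and a k x k co-occurrence matrix."""
    mi = mutual_info_classif(X, y, discrete_features=True)
    top = np.argsort(mi)[::-1][:k]           # k most label-relevant APIs
    Xk = X[:, top]
    graph = Xk.T @ Xk                         # co-occurrence counts across apps
    graph = graph / max(graph.max(), 1)       # normalize to [0, 1] for a CNN input
    return top, graph

# toy usage with random data
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 100))
y = rng.integers(0, 2, size=200)
apis, g = api_feature_graph(X, y, k=20)
print(apis.shape, g.shape)    # (20,) (20, 20)
```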

2020-12-07
Xia, H., Xiao, F., Zhang, S., Hu, C., Cheng, X..  2019.  Trustworthiness Inference Framework in the Social Internet of Things: A Context-Aware Approach. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. :838–846.
The concept of social networking is integrated into the Internet of Things (IoT) to socialize smart objects by mimicking human behaviors, leading to a new paradigm, the Social Internet of Things (SIoT). A crucial problem that needs to be solved is how to establish reliable relationships autonomously among objects, i.e., building trust. This paper focuses on exploring an efficient context-aware trustworthiness inference framework to address this issue. Based on the sociological and psychological principles of trust generation between human beings, the proposed framework divides trust into two types: familiarity trust and similarity trust. The familiarity trust can be calculated from direct trust and recommendation trust, while the similarity trust can be calculated from external similarity trust and internal similarity trust. We subsequently present concrete methods for calculating the different trust elements. In particular, we design a kernel-based nonlinear multivariate grey prediction model to predict the direct trust of a specific object, which acts as the core module of the entire framework. In addition, considering the fuzziness and uncertainty inherent in the concept of trust, we introduce a fuzzy logic method to synthesize these trust elements. The experimental results verify the validity of the core module and the framework's resistance to attacks.
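
A very rough sketch of how the described trust elements might be combined, under the assumption of simple weighted blending; the paper's kernel-based grey prediction model and proper fuzzy-logic synthesis are not reproduced here, and all weights and function names are illustrative.

```python
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def familiarity_trust(direct, recommendation, alpha=0.7):
    """Weighted blend of direct and recommendation trust, both assumed in [0, 1]."""
    return clamp(alpha * direct + (1 - alpha) * recommendation)

def similarity_trust(external, internal, beta=0.5):
    """Weighted blend of external and internal similarity trust."""
    return clamp(beta * external + (1 - beta) * internal)

def overall_trust(fam, sim, gamma=0.6):
    """Aggregate the two trust types; the paper uses fuzzy rules for this step."""
    return clamp(gamma * fam + (1 - gamma) * sim)

print(overall_trust(familiarity_trust(0.8, 0.6), similarity_trust(0.5, 0.7)))
```
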
Xu, M., Huber, M., Sun, Z., England, P., Peinado, M., Lee, S., Marochko, A., Mattoon, D., Spiger, R., Thom, S..  2019.  Dominance as a New Trusted Computing Primitive for the Internet of Things. 2019 IEEE Symposium on Security and Privacy (SP). :1415–1430.
The Internet of Things (IoT) is rapidly emerging as one of the dominant computing paradigms of this decade. Applications range from in-home entertainment to large-scale industrial deployments such as controlling assembly lines and monitoring traffic. While IoT devices are in many respects similar to traditional computers, user expectations and deployment scenarios as well as cost and hardware constraints are sufficiently different to create new security challenges as well as new opportunities. This is especially true for large-scale IoT deployments in which a central entity deploys and controls a large number of IoT devices with minimal human interaction. Like traditional computers, IoT devices are subject to attack and compromise. Large IoT deployments consisting of many nearly identical devices are especially attractive targets. At the same time, recovery from root compromise by conventional means becomes costly and slow, even more so if the devices are dispersed over a large geographical area. In the worst case, technicians have to travel to all devices and manually recover them. Data center solutions such as the Intelligent Platform Management Interface (IPMI) which rely on separate service processors and network connections are not only not supported by existing IoT hardware, but are unlikely to be in the foreseeable future due to the cost constraints of mainstream IoT devices. This paper presents CIDER, a system that can recover IoT devices within a short amount of time, even if attackers have taken root control of every device in a large deployment. The recovery requires minimal manual intervention. After the administrator has identified the compromise and produced an updated firmware image, he/she can instruct CIDER to force the devices to reset and to install the patched firmware on the devices. We demonstrate the universality and practicality of CIDER by implementing it on three popular IoT platforms (HummingBoard Edge, Raspberry Pi Compute Module 3 and Nucleo-L476RG) spanning the range from high to low end. Our evaluation shows that the performance overhead of CIDER is generally negligible.
2020-12-02
Wang, Q., Zhao, W., Yang, J., Wu, J., Hu, W., Xing, Q..  2019.  DeepTrust: A Deep User Model of Homophily Effect for Trust Prediction. 2019 IEEE International Conference on Data Mining (ICDM). :618—627.

Trust prediction in online social networks is crucial for information dissemination, product promotion, and decision making. Existing work on trust prediction mainly utilizes the network structure or a low-rank approximation of the trust network; these approaches suffer from data sparsity and limited prediction accuracy. Inspired by homophily theory, which observes a pervasive feature of social and economic networks that trust relations tend to develop among similar people, we propose a novel deep user model for trust prediction based on user similarity measurement. It is a comprehensive, data-sparsity-insensitive model that combines a user's review behavior with the characteristics of the items that user is interested in. With this user model, we first generate a user's latent features mined from review behavior and the properties of the items the user cares about. Then we develop a pair-wise deep neural network to further learn and represent these user features. Finally, we measure the trust relation between a pair of users by computing the cosine similarity of their feature vectors. Extensive experiments on two real-world datasets demonstrate the superior performance of the proposed approach over representative baselines.
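
The final step described above, scoring trust as the cosine similarity of two users' learned feature vectors, can be illustrated directly; the latent vectors below are made-up placeholders rather than outputs of the paper's pair-wise deep network.

```python
import numpy as np

def trust_score(u, v):
    """Cosine similarity between two users' latent feature vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

alice = [0.9, 0.1, 0.4, 0.7]   # placeholder latent features (review behavior + item properties)
bob   = [0.8, 0.2, 0.5, 0.6]
print(round(trust_score(alice, bob), 3))
```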

Wang, W., Xuan, S., Yang, W., Chen, Y..  2019.  User Credibility Assessment Based on Trust Propagation in Microblog. 2019 Computing, Communications and IoT Applications (ComComAp). :270—275.

Nowadays, Microblog has become an important online social networking platform through which a large number of users share information. Many malicious users publish false news driven by various interests, which seriously affects the usability of the platform, so evaluating the credibility of Microblog users has become an important research issue. This paper proposes a Microblog user credibility evaluation algorithm based on trust propagation. To address the high cost and low precision caused by malicious users attacking the algorithm through fabricated social relationships, and by manual selection of seed sets, this paper proposes two optimization strategies: a pruning algorithm based on social activity and similarity, and a seed node selection algorithm based on clustering. The pruning algorithm trims off the attack edges established between malicious users and normal users. The seed node selection algorithm efficiently selects a highly available seed node set. Finally, two-way trust propagation scoring is performed over the user social relationship graph, so that less trustworthy users receive lower trust scores and malicious users are thereby identified. Experiments verify the effectiveness of the trust-propagation-based algorithm for evaluating Microblog user credibility.
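
For illustration only, a one-directional, TrustRank-style propagation from a seed set over a follow graph; the paper's algorithm uses two-way propagation plus the pruning and clustering-based seed selection described above, which this sketch omits. Graph contents and parameters are invented.

```python
def propagate_trust(graph, seeds, iters=20, damping=0.85):
    """graph: {user: [followed users]}; seeds: manually vetted trustworthy users.
    Returns a trust score per user, propagated along social edges."""
    users = list(graph)
    base = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in users}
    score = dict(base)
    for _ in range(iters):
        nxt = {u: (1 - damping) * base[u] for u in users}
        for u, followees in graph.items():
            if followees:
                share = damping * score[u] / len(followees)
                for v in followees:
                    nxt[v] = nxt.get(v, 0.0) + share
        score = nxt
    return score

g = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "spam": ["spam2"], "spam2": ["spam"]}
print(propagate_trust(g, seeds={"a"}))   # isolated spam accounts keep near-zero trust
```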

Ye, J., Liu, R., Xie, Z., Feng, L., Liu, S..  2019.  EMPTCP: An ECN Based Approach to Detect Shared Bottleneck in MPTCP. 2019 28th International Conference on Computer Communication and Networks (ICCCN). :1—10.

The major challenge of real-time transport is to balance efficiency and fairness over limited bandwidth. MPTCP has proved to be effective for multimedia and real-time networks. Ideally, an MPTCP sender should couple the subflows sharing a bottleneck link to remain TCP-friendly. However, existing shared-bottleneck detection schemes either rely on end-to-end delay without considering multiple-bottleneck scenarios, or identify subflows at the switch at the expense of operational overhead. In this paper, we propose a lightweight yet accurate approach, EMPTCP, to detect shared bottlenecks. EMPTCP uses the widely deployed ECN mechanism to capture the real congestion state of a shared bottleneck, and can be transparently used by various enhanced MPTCP protocols. Through theoretical analysis, simulation tests and real-network experiments, we show that EMPTCP achieves higher than 90% accuracy in shared bottleneck detection, thus improving network efficiency and fairness.
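
A plausible sketch of the core intuition, assuming per-subflow ECN marking statistics are available at the sender: subflows crossing the same bottleneck should show correlated marking rates. The windowing, threshold and synthetic traces below are assumptions, not the paper's actual detector.

```python
import numpy as np

def shares_bottleneck(ecn_marks_a, ecn_marks_b, window=50, threshold=0.8):
    """ecn_marks_*: 0/1 arrays, one entry per ACKed packet (1 = ECN-Echo seen).
    Compares per-window marking rates of two subflows via correlation."""
    a = np.asarray(ecn_marks_a, float)
    b = np.asarray(ecn_marks_b, float)
    n = min(len(a), len(b)) // window * window
    ra = a[:n].reshape(-1, window).mean(axis=1)   # per-window mark rate, subflow A
    rb = b[:n].reshape(-1, window).mean(axis=1)   # per-window mark rate, subflow B
    corr = np.corrcoef(ra, rb)[0, 1]
    return corr >= threshold, corr

# toy traces driven by a shared congestion level
rng = np.random.default_rng(1)
level = np.repeat(rng.random(20) * 0.5, 50)       # shared bottleneck congestion per window
a = (rng.random(1000) < level).astype(int)
b = (rng.random(1000) < level).astype(int)
print(shares_bottleneck(a, b))
```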

Mukaidani, H., Saravanakumar, R., Xu, H., Zhuang, W..  2019.  Robust Nash Static Output Feedback Strategy for Uncertain Markov Jump Delay Stochastic Systems. 2019 IEEE 58th Conference on Decision and Control (CDC). :5826—5831.

In this paper, we propose a robust Nash strategy for a class of uncertain Markov jump delay stochastic systems (UMJDSSs) via static output feedback (SOF). After establishing the extended bounded real lemma for UMJDSSs, the conditions for the existence of a robust Nash strategy set are determined by means of cross-coupled stochastic matrix inequalities (CCSMIs). To solve the SOF problem, a heuristic algorithm is developed based on algebraic equations and linear matrix inequalities (LMIs). In particular, it is shown that robust convergence is guaranteed under a new convergence condition. Finally, a practical numerical example based on congestion control for active queue management is provided to demonstrate the reliability and usefulness of the proposed design scheme.

Jie, Y., Zhou, L., Ming, N., Yusheng, X., Xinli, S., Yongqiang, Z..  2018.  Integrated Reliability Analysis of Control and Information Flow in Energy Internet. 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2). :1—9.
In this paper, based on the electricity business process of collecting and transmitting power information and sending control instructions, a coupled control-communication flow model is built from three main incidence matrices: control-communication, communication-communication, and communication-control. Furthermore, the change of effective paths between two communication nodes is analyzed, and a method for calculating the connectivity probability of the information network under communication link failures is proposed. Then, based on Bayesian conditional probability theory, the effect of communication interruption on the energy Internet is analyzed and a controllability metric matrix under communication congestion is given. Several cases covering different link interruption scenarios are presented at the end of the paper to verify the effectiveness of the proposed method for calculating the controllability matrix. This probability index can be regarded as a quantitative measure of the controllability of power services that depend on communication-delivered instructions, and can support power business decision making to improve the control reliability of the energy Internet.
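
The connectivity probability mentioned above can be made concrete with a toy estimate: given independent link-failure probabilities, estimate the probability that a control node can still reach a target node. The paper derives this analytically from incidence matrices and Bayesian conditional probability; the Monte Carlo sketch below only illustrates the quantity, with invented node names.

```python
import random

def connectivity_probability(links, src, dst, p_fail, trials=20000, seed=0):
    """links: list of (u, v) undirected communication links; p_fail: per-link failure probability.
    Monte Carlo estimate of the probability that src can still reach dst."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        alive = [e for e in links if rng.random() > p_fail]
        adj = {}
        for u, v in alive:                      # adjacency over surviving links
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        seen, stack = {src}, [src]
        while stack:                            # BFS/DFS reachability check
            for w in adj.get(stack.pop(), ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        ok += dst in seen
    return ok / trials

links = [("ctrl", "r1"), ("r1", "meter"), ("ctrl", "r2"), ("r2", "meter")]
print(connectivity_probability(links, "ctrl", "meter", p_fail=0.1))
```
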
2020-12-01
Xu, W., Peng, Y..  2018.  SharaBLE: A Software Framework for Shared Usage of BLE Devices over the Internet. 2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). :381—385.

With the development of the Internet of Things, numerous IoT devices have been brought into our daily lives. Bluetooth Low Energy (BLE), owing to its low energy consumption and generic service stack, has become one of the most popular wireless communication technologies for IoT. However, because of the short communication range and exclusive connection pattern, a BLE-equipped device can only be used by a single user near the device. To fully explore the benefits of BLE and make BLE-equipped devices truly accessible over the Internet as IoT devices, in this paper we propose a cloud-based software framework that enables multiple users to interact with various BLE IoT devices over the Internet. This framework includes an agent program, a suite of services hosted in the cloud, and a set of RESTful APIs exposed to Internet users. Given this framework, access to BLE devices can be extended from the local to the Internet scale without any software or hardware changes to the BLE devices, and, more importantly, shared usage of remote BLE devices over the Internet is also made available.
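
A hypothetical client-side view of such a framework: since the paper's concrete REST API is not spelled out here, the base URL, device identifier and endpoint paths below are made up purely to show how a remote user might enumerate services and read a GATT characteristic through the cloud relay.

```python
import requests  # pip install requests

BASE = "https://example-sharable-cloud.local/api"   # hypothetical gateway URL
DEVICE = "kitchen-thermometer-01"                    # hypothetical device id

# Hypothetical endpoint: list the GATT services the agent discovered for this device.
resp = requests.get(f"{BASE}/devices/{DEVICE}/services", timeout=5)
resp.raise_for_status()
for svc in resp.json():
    print(svc["uuid"], svc.get("characteristics", []))

# Hypothetical endpoint: read a characteristic (e.g., temperature, 0x2A6E) via the cloud
# relay instead of a local BLE connection.
value = requests.get(
    f"{BASE}/devices/{DEVICE}/characteristics/2a6e/value", timeout=5).json()
print("temperature:", value)
```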

Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132—141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workloads and server characteristics in Cloud-scale or Internet-scale systems adds further complexity and new challenges. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML can be used to understand both workloads and environments automatically, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tail stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study on performance-centric node classification and straggler mitigation. We believe an ML-based method will help achieve architectural optimization and efficiency improvement.
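
One small, self-contained example of the "performance-centric node classification" idea mentioned in the case study: train a classifier on per-node metrics to flag straggler-prone nodes. The features, labels and model choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Columns: CPU utilization, memory pressure, disk I/O wait, network saturation (all 0-1).
rng = np.random.default_rng(0)
X = rng.random((500, 4))
# Synthetic label: a node is "straggler-prone" when CPU pressure and I/O wait are both high.
y = ((X[:, 0] > 0.7) & (X[:, 2] > 0.6)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("holdout accuracy:", clf.score(Xte, yte))

# A scheduler could then avoid placing latency-critical tasks on predicted stragglers.
node = [[0.85, 0.3, 0.75, 0.2]]
print("straggler-prone?", bool(clf.predict(node)[0]))
```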

Xu, J., Bryant, D. G., Howard, A..  2018.  Would You Trust a Robot Therapist? Validating the Equivalency of Trust in Human-Robot Healthcare Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :442—447.

With the recent advances in computing, artificial intelligence (AI) is quickly becoming a key component in the future of advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor in interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a healthcare study in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend that the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot therapist condition were 3.5 times more likely to have trust involved in their decision than participants in the human therapist condition. These results indicate that deploying interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.

Xu, J., Howard, A..  2018.  The Impact of First Impressions on Human-Robot Trust During Problem-Solving Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :435—441.

With recent advances in robotics, robots are expected to become increasingly common in human environments, such as homes and workplaces, where they will assist and collaborate with humans on a variety of tasks. During these collaborations, disagreements between human and robot decisions are inevitable. Among the factors that determine which decision a human ultimately follows, their own or the robot's, trust is critical. This study investigates individuals' behaviors and aspects of trust in a problem-solving situation in which a decision must be made in a bounded amount of time. A between-subjects experiment was conducted with 100 participants. With the assistance of a humanoid robot, participants were asked to tackle a cognitive task within a given time frame. Each participant was randomly assigned to one of the following initial conditions: 1) a working robot, in which the robot provided a correct answer, or 2) a faulty robot, in which the robot provided an incorrect answer. The impact of the faulty robot's behavior on participants' decisions to follow the robot's suggested answer was analyzed, and survey responses about trust were collected after interacting with the robot. Results indicated that the first impression has a significant impact on a participant's willingness to trust a robot's advice during a disagreement. In addition, this study found evidence that individuals still place trust in a malfunctioning robot even after observing its faulty behavior.

Xie, Y., Bodala, I. P., Ong, D. C., Hsu, D., Soh, H..  2019.  Robot Capability and Intention in Trust-Based Decisions Across Tasks. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :39—47.

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots - inferred capability and intention - and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.

2020-11-30
Guan, L., Lin, J., Ma, Z., Luo, B., Xia, L., Jing, J..  2018.  Copker: A Cryptographic Engine Against Cold-Boot Attacks. IEEE Transactions on Dependable and Secure Computing. 15:742–754.
Cryptosystems are essential for computer and communication security, e.g., RSA or ECDSA in PGP email clients and AES in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain text, and are therefore vulnerable to cold-boot attacks that exploit the remanence effect of RAM chips to read memory data directly. To tackle this problem, we propose Copker, a cryptographic engine that implements asymmetric cryptosystems entirely within the CPU, without storing any plain-text sensitive data in RAM. Copker supports the popular asymmetric cryptosystems (i.e., RSA and ECDSA) and the deterministic random bit generators (DRBGs) used in ECDSA signing. In its active mode, Copker stores kilobytes of sensitive data, including the private key, the DRBG seed and intermediate states, only in on-chip CPU caches (and registers); decryption/signing operations are performed without storing any sensitive information in RAM. In the suspend mode, Copker stores symmetrically encrypted private keys and DRBG seeds in memory, while employing existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. We implement Copker with support for multiple private keys. With security analyses and intensive experiments, we demonstrate that Copker provides cryptographic services that are secure against cold-boot attacks and introduces reasonable overhead.
Peng, Y., Yue, M., Li, H., Li, Y., Li, C., Xu, H., Wu, Q., Xi, W..  2018.  The Effect of Easy Axis Deviations on the Magnetization Reversal of Co Nanowire. IEEE Transactions on Magnetics. 54:1–5.
Macroscopic hysteresis loops and microscopic magnetic moment distributions have been determined with a 3-D model for a Co nanowire with various easy-axis deviations from the applied field. It is found that both the coercivity and the remanence, as well as the maximum magnetic product, decrease monotonically with increasing easy-axis deviation, indicating the large impact of the easy-axis orientation on the magnetic performance. Moreover, the calculated angular distributions and the evolution of the magnetic moments are shown to explain the magnetic reversal process. It is demonstrated that the large demagnetization field at the two ends of the nanowire makes reversal domain nucleation, and hence magnetic reversal, easier. In addition, the magnetic reversal is further illustrated through an analysis of the energy evolution.
Pan, T., Xu, C., Lv, J., Shi, Q., Li, Q., Jia, C., Huang, T., Lin, X..  2019.  LD-ICN: Towards Latency Deterministic Information-Centric Networking. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :973–980.
Deterministic latency is the key challenge that must be addressed in numerous 5G applications such as AR/VR. However, it is difficult to make customized end-to-end resource reservations across multiple ISPs using IP-based QoS mechanisms. Information-Centric Networking (ICN) provides scalable and efficient content distribution at the Internet scale thanks to its in-network caching and native multicast capabilities, and deterministic latency can promisingly be guaranteed by caching the relevant content objects in appropriate locations. Existing proposals formulate the ICN cache placement problem into numerous theoretical models, but the underlying mechanisms needed to support such cache coordination are not discussed in detail: how to efficiently make cache reservations, how to avoid route oscillation when cached content is updated, and how to conduct real-time latency measurement. In this work, we propose Latency Deterministic Information-Centric Networking (LD-ICN). LD-ICN relies on source-routing-based latency telemetry and leverages an on-path caching technique to avoid frequent route oscillation while still achieving the optimal cache placement under the SDN architecture. Extensive evaluation shows that under LD-ICN, 90.04% of content requests are satisfied within the hard latency requirements.
Xu, Y., Chen, H., Zhao, Y., Zhang, W., Shen, Q., Zhang, X., Ma, Z..  2019.  Neural Adaptive Transport Framework for Internet-scale Interactive Media Streaming Services. 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). :1–6.
Network dynamics, such as bandwidth fluctuation and unexpected latency, greatly hurt users' quality of experience (QoE) for media services over the Internet. In this work, we propose a neural adaptive transport (NAT) framework to tackle network dynamics for Internet-scale interactive media services. The NAT system has three major components: a learning-based cloud overlay routing (COR) scheme that finds the best delivery path to bypass network bottlenecks while offering minimal end-to-end latency; a residual neural network based collaborative video processing (CVP) system that trades client-side computational capability for QoE improvement via learned resolution scaling; and a deep reinforcement learning (DRL) based adaptive real-time streaming (ARS) strategy that selects the appropriate video bitrate for maximal QoE. We demonstrate that COR can improve user satisfaction from 5% to 43%, CVP can reduce bandwidth consumption by more than 30% at the same quality, and DRL-based ARS can maintain smooth streaming with up to 50% QoE improvement.
2020-11-23
Li, W., Zhu, H., Zhou, X., Shimizu, S., Xin, M., Jin, Q..  2018.  A Novel Personalized Recommendation Algorithm Based on Trust Relevancy Degree. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :418–422.
The rapid development of the Internet and e-commerce has brought great convenience to people's lives. Personalized recommendation technology provides users with services they may be interested in, based on information such as personal characteristics and historical behaviors. Personalized recommendation has been a hot topic in data mining and social network research. In this paper, we focus on resolving the problem of data sparsity based on users' rating data and social network information, introduce a set of new measures for social trust, and propose a novel personalized recommendation algorithm based on matrix factorization combined with trust relevancy. Our experiments were performed on the Dianping datasets. The results show that our algorithm outperforms traditional approaches in terms of accuracy and stability.
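
A minimal sketch of matrix factorization with a social-trust regularization term, in the spirit of "matrix factorization combined with trust relevancy"; the paper's exact trust-relevancy measure, loss and update rules are not reproduced, and all hyperparameters and toy matrices are invented.

```python
import numpy as np

def train_mf_with_trust(R, T, k=8, lr=0.01, reg=0.05, trust_w=0.1, epochs=50, seed=0):
    """R: user x item ratings (0 = unrated); T: user x user trust-relevancy weights in [0, 1].
    SGD matrix factorization with a social term pulling trusted users' factors together."""
    rng = np.random.default_rng(seed)
    n_u, n_i = R.shape
    U = rng.normal(scale=0.1, size=(n_u, k))
    V = rng.normal(scale=0.1, size=(n_i, k))
    users, items = np.nonzero(R)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - U[u] @ V[i]
            u_old = U[u].copy()
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_old - reg * V[i])
        # social regularization: nudge each user toward the trust-weighted mean of neighbors
        W = T / np.maximum(T.sum(axis=1, keepdims=True), 1e-9)
        U += lr * trust_w * (W @ U - U)
    return U, V

R = np.array([[5, 0, 3], [4, 0, 0], [0, 2, 5]], dtype=float)
T = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
U, V = train_mf_with_trust(R, T)
print(np.round(U @ V.T, 2))   # predicted ratings, including the unrated cells
```
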
2020-11-20
Zhu, S., Chen, H., Xi, W., Chen, M., Fan, L., Feng, D..  2019.  A Worst-Case Entropy Estimation of Oscillator-Based Entropy Sources: When the Adversaries Have Access to the History Outputs. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :152—159.
Entropy sources are designed to provide unpredictable random numbers for cryptographic systems. As an assessment of such sources, Shannon entropy is usually adopted to quantitatively measure the unpredictability of the outputs. Several related works on the entropy evaluation of ring-oscillator-based (RO-based) entropy sources evaluate the unpredictability with the average conditional Shannon entropy (ACE) of the source and provide a lower bound on the ACE (LBoACE). However, in this paper we demonstrate that when the adversaries have access to the history outputs of the entropy source, for example through intrusive attacks, the LBoACE may overestimate the actual unpredictability of the next output. In this situation, we suggest adopting the specific conditional Shannon entropy (SCE), which exactly measures the unpredictability of the future output given knowledge of the previous output sequences and is thus more consistent with reality than the ACE. In particular, to be conservative, we propose taking the lower bound of the SCE (LBoSCE) as an estimate of the worst-case entropy of the source. We put forward a detailed method to estimate this worst-case entropy of RO-based entropy sources, which we have also verified by experiment on an FPGA device. We recommend adopting this method to provide a conservative assessment of the unpredictability when the entropy source works in a vulnerable environment and the adversaries might obtain the previous outputs.
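
To make the entropy notions concrete, here is a small estimator of the empirical conditional Shannon entropy of the next output bit given the previous k bits; this corresponds to the average-case (ACE-like) view, whereas the paper argues for the per-context, worst-case SCE bound. The context length and test sequence are assumptions.

```python
from collections import Counter
from math import log2
import random

def conditional_entropy(bits, k=3):
    """Empirical H(next bit | previous k bits), averaged over contexts, for a 0/1 sequence."""
    ctx_counts, joint_counts = Counter(), Counter()
    for i in range(len(bits) - k):
        ctx = tuple(bits[i:i + k])
        ctx_counts[ctx] += 1
        joint_counts[(ctx, bits[i + k])] += 1
    n = sum(ctx_counts.values())
    h = 0.0
    for (ctx, _nxt), c in joint_counts.items():
        p_joint = c / n                 # P(context, next bit)
        p_cond = c / ctx_counts[ctx]    # P(next bit | context)
        h -= p_joint * log2(p_cond)
    return h

random.seed(0)
biased = [1 if random.random() < 0.6 else 0 for _ in range(100000)]
print(round(conditional_entropy(biased, k=3), 4))   # close to H(0.6) ≈ 0.971 bits
```
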
2020-11-17
Zhou, Z., Qian, L., Xu, H..  2019.  Intelligent Decentralized Dynamic Power Allocation in MANET at Tactical Edge based on Mean-Field Game Theory. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :604—609.

In this paper, the decentralized dynamic power allocation problem is investigated for mobile ad hoc networks (MANETs) at the tactical edge. Due to the mobility and self-organizing features of MANETs and the environmental uncertainties of the battlefield, many existing optimal power allocation algorithms are neither efficient nor practical. Furthermore, the continuously growing scale of the wireless connection population in the emerging Internet of Battlefield Things (IoBT) introduces additional challenges for optimal power allocation due to the “curse of dimensionality”. To address these challenges, a novel Actor-Critic-Mass algorithm is proposed by integrating the emerging mean-field game theory with online reinforcement learning. The proposed approach is able not only to learn the optimal power allocation for IoBT in a decentralized manner, but also to effectively handle uncertainties from the harsh environment at the tactical edge. In the developed scheme, each agent in the IoBT has three neural networks (NNs): 1) a Critic NN learns the optimal cost function that minimizes the signal-to-interference-plus-noise ratio (SINR), 2) an Actor NN estimates the optimal transmitter power adjustment rate, and 3) a Mass NN learns the probability density function of all agents' transmitting power in the IoBT. The three NNs are tuned based on the Fokker-Planck-Kolmogorov (FPK) and Hamilton-Jacobi-Bellman (HJB) equations given by mean-field game theory. An IoBT wireless network has been simulated to evaluate the effectiveness of the proposed algorithm. The results demonstrate that the Actor-Critic-Mass algorithm can effectively approximate the probability distribution of all agents' transmission power and converge to the target SINR. Moreover, the optimal decentralized power allocation is obtained by integrating mean-field game theory with reinforcement learning.

2020-11-16
Zhang, C., Xu, C., Xu, J., Tang, Y., Choi, B..  2019.  GEM^2-Tree: A Gas-Efficient Structure for Authenticated Range Queries in Blockchain. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :842–853.
Blockchain technology has attracted much attention due to the great success of cryptocurrencies. Owing to its immutability property and consensus protocol, blockchain offers a new solution for trusted storage and computation services. To scale up these services, prior research has suggested a hybrid storage architecture, where only small metadata are stored on-chain and the raw data are outsourced to off-chain storage. To protect data integrity, a cryptographic proof can be constructed online for queries over the data stored in the system. However, previous schemes only support simple key-value queries. In this paper, we take the first step toward studying authenticated range queries in the hybrid-storage blockchain. The key challenge lies in how to design an authenticated data structure (ADS) that can be efficiently maintained by the blockchain, in which a unique gas cost model is employed. By analyzing the performance of existing techniques, we propose a novel ADS, called GEM^2-tree, which is not only gas-efficient but also effective in supporting authenticated queries. To further reduce the ADS maintenance cost without sacrificing much query performance, we also propose an optimized structure, GEM^2*-tree, based on a two-level index design. Theoretical analysis and empirical evaluation validate the performance of the proposed ADSs.
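
Not the GEM^2-tree itself, but a plain Merkle tree makes the underlying hybrid-storage idea concrete: only a small digest lives on-chain, and a query result returned by off-chain storage is verified against it with a short proof. The hash layout and toy data are illustrative.

```python
import hashlib

def h(*parts):
    """Hash one or more byte strings into a hex digest (as bytes)."""
    return hashlib.sha256(b"|".join(parts)).hexdigest().encode()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate last node on odd levels
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    """Sibling hashes from the leaf at position idx up to the root."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))   # (sibling hash, sibling-is-left flag)
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib, node) if sib_is_left else h(node, sib)
    return node == root

data = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(data)              # only this small digest lives on-chain
proof = merkle_proof(data, 2)         # off-chain storage returns the data plus this proof
print(verify(b"tx3", proof, root))    # True -> the off-chain result is untampered
```
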
2020-11-09
Zhu, L., Zhang, Z., Xia, G., Jiang, C..  2019.  Research on Vulnerability Ontology Model. 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). :657–661.
In order to standardize and describe vulnerability information in as much detail as possible and to realize knowledge sharing, reuse, and extension at the semantic level, a vulnerability ontology is constructed based on public information security databases such as CVE, CWE and CAPEC and public industry standards such as CVSS. By analyzing the relationship between the vulnerability class and the weakness class, inference rules are defined to realize knowledge inference from a vulnerability instance to its consequence and from one vulnerability instance to another. The experimental results show that this model can analyze the causal and congeneric relationships between vulnerability instances, which helps with repairing vulnerabilities and predicting attacks.
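
A toy, dictionary-based sketch of the kind of inference rules described above: infer a consequence from the weakness class, and infer a congeneric or co-located relation between two vulnerability instances. The identifiers, rules and consequence labels are invented and far simpler than an OWL/SWRL ontology.

```python
# Hypothetical ontology fragment: each vulnerability instance points at a weakness class
# and at the asset it affects; simple rules then infer relations between instances.
vulns = {
    "vuln-1": {"cwe": "CWE-79", "affects": "webapp-A", "consequence": None},
    "vuln-2": {"cwe": "CWE-79", "affects": "webapp-B", "consequence": None},
    "vuln-3": {"cwe": "CWE-89", "affects": "webapp-A", "consequence": None},
}
cwe_consequence = {"CWE-79": "script injection", "CWE-89": "data exfiltration"}

# Rule 1: a vulnerability instance inherits the typical consequence of its weakness class.
for v in vulns.values():
    v["consequence"] = cwe_consequence.get(v["cwe"], "unknown")

# Rule 2: two instances are "congeneric" if they share a weakness class,
#         and "co-located" if they affect the same asset.
def related(a, b):
    rels = []
    if vulns[a]["cwe"] == vulns[b]["cwe"]:
        rels.append("congeneric")
    if vulns[a]["affects"] == vulns[b]["affects"]:
        rels.append("co-located")
    return rels

print(related("vuln-1", "vuln-2"))  # ['congeneric']
print(related("vuln-1", "vuln-3"))  # ['co-located']
```
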
2020-11-04
Yuan, X., Zhang, T., Shama, A. A., Xu, J., Yang, L., Ellis, J., He, W., Waters, C..  2019.  Teaching Cybersecurity Using Guided Inquiry Collaborative Learning. 2019 IEEE Frontiers in Education Conference (FIE). :1—6.

This Innovate Practice Full Paper describes our experience with teaching cybersecurity topics using guided inquiry collaborative learning. The goal is not only to develop the students' in-depth technical knowledge, but also “soft skills” such as communication, attitude, teamwork, networking, problem solving and critical thinking. This paper reports our experience with developing and using Guided Inquiry Collaborative Learning materials on the topics of firewalls and IPsec. Pre- and post-surveys were conducted to assess the effectiveness of the developed materials and teaching methods in terms of learning outcomes, attitudes, learning experience and motivation. Analysis of the survey data shows that students had improved learning outcomes, class participation, and interest with Guided Inquiry Collaborative Learning.

Thomas, L. J., Balders, M., Countney, Z., Zhong, C., Yao, J., Xu, C..  2019.  Cybersecurity Education: From Beginners to Advanced Players in Cybersecurity Competitions. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :149—151.

Cybersecurity competitions have been shown to be an effective approach for promoting student engagement through active learning in cybersecurity. Players can gain hands-on experience in puzzle-based or capture-the-flag tasks that promote learning. However, novice players with limited prior knowledge of cybersecurity often find it difficult to get a foothold on a problem and become frustrated at an early stage. To enhance student engagement, it is important to study the experiences of novices to better understand their learning needs. To achieve this goal, we conducted a 4-month longitudinal case study involving 11 undergraduate students participating in a college-level cybersecurity competition, the National Cyber League (NCL). The competition includes two individual games and one team game. Questionnaires and in-person interviews were conducted before and after each game to collect the players' feedback on their experience, learning challenges and needs, and information about their motivation, interests and confidence level. The collected data demonstrate that the primary concern going into these competitions stemmed from a lack of knowledge of cybersecurity concepts and tools, and that players' interest and confidence can be increased through systematic training.

Zong, P., Wang, Y., Xie, F..  2018.  Embedded Software Fault Prediction Based on Back Propagation Neural Network. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :553—558.

Predicting software faults before software testing activities can help allocate time and resources rationally. Software metrics are used for software fault prediction due to their close relationship with software faults. Thanks to their non-linear fitting ability, neural networks are increasingly used in prediction models. We first filter the metric set of the embedded software with statistical methods to reduce the dimensionality of the model input. Then we build a back-propagation neural network with a simple structure but good performance and apply it to two practical embedded software projects. The verification results show that the model predicts software faults well.
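
A small stand-in for the described pipeline, assuming synthetic module metrics: scale the inputs and fit a back-propagation network (a one-hidden-layer MLP) to predict fault-prone modules. The metrics, labels and network size are illustrative, not the paper's filtered metric set or projects.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Toy stand-in for module-level metrics: lines of code, cyclomatic complexity,
# fan-out, comment ratio. Label 1 = fault-prone module (synthetic rule).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(20, 2000, 400),     # LOC
    rng.integers(1, 40, 400),        # cyclomatic complexity
    rng.integers(0, 15, 400),        # fan-out
    rng.random(400),                 # comment ratio
]).astype(float)
y = ((X[:, 1] > 20) & (X[:, 0] > 800)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),  # simple BP net
)
model.fit(Xtr, ytr)
print("holdout accuracy:", round(model.score(Xte, yte), 3))
```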