Biblio

Found 1918 results

Filters: First Letter Of Last Name is T
2021-06-30
Wang, Chenguang, Pan, Kaikai, Tindemans, Simon, Palensky, Peter.  2020.  Training Strategies for Autoencoder-based Detection of False Data Injection Attacks. 2020 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe). :1—5.
The security of energy supply in a power grid critically depends on the ability to accurately estimate the state of the system. However, manipulated power flow measurements can potentially hide overloads and bypass the bad data detection scheme, interfering with the validity of the estimated states. In this paper, we use an autoencoder neural network to detect anomalous system states and investigate the impact of hyperparameters on the detection performance for false data injection attacks that target power flows. Experimental results on the IEEE 118-bus system indicate that the proposed mechanism achieves satisfactory learning efficiency and detection accuracy.
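The reconstruction-error principle behind autoencoder-based detection can be illustrated with a linear stand-in (PCA via SVD) in place of a trained neural network. The synthetic data, latent dimension, and threshold below are illustrative assumptions only, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" operating data: measurements lying near a low-dimensional subspace,
# mimicking the correlations among power-flow measurements.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
normal_data = latent @ mixing + 0.01 * rng.normal(size=(500, 20))

# Fit a linear "autoencoder": encoder/decoder given by the top-k singular vectors.
mean = normal_data.mean(axis=0)
_, _, vt = np.linalg.svd(normal_data - mean, full_matrices=False)
components = vt[:3]                  # k = 3 latent dimensions

def reconstruction_error(x):
    """Project onto the learned subspace and measure the residual norm."""
    centered = x - mean
    reconstructed = (centered @ components.T) @ components
    return np.linalg.norm(centered - reconstructed, axis=-1)

# Threshold taken from the empirical error distribution on normal data.
threshold = np.percentile(reconstruction_error(normal_data), 99)

# A "false data injection": a perturbation that leaves the learned subspace.
clean = latent[0] @ mixing
attacked = clean + 2.0 * rng.normal(size=20)

is_anomalous = reconstruction_error(attacked) > threshold
```

A real autoencoder replaces the linear projection with nonlinear encoder/decoder networks, but the decision rule (flag inputs the model reconstructs poorly) is the same.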
Wang, Chenguang, Tindemans, Simon, Pan, Kaikai, Palensky, Peter.  2020.  Detection of False Data Injection Attacks Using the Autoencoder Approach. 2020 International Conference on Probabilistic Methods Applied to Power Systems (PMAPS). :1—6.
State estimation is of considerable significance for the power system operation and control. However, well-designed false data injection attacks can utilize blind spots in conventional residual-based bad data detection methods to manipulate measurements in a coordinated manner and thus affect the secure operation and economic dispatch of grids. In this paper, we propose a detection approach based on an autoencoder neural network. By training the network on the dependencies intrinsic in `normal' operation data, it effectively overcomes the challenge of unbalanced training data that is inherent in power system attack detection. To evaluate the detection performance of the proposed mechanism, we conduct a series of experiments on the IEEE 118-bus power system. The experiments demonstrate that the proposed autoencoder detector displays robust detection performance under a variety of attack scenarios.
2021-06-28
Roshan, Rishu, Matam, Rakesh, Mukherjee, Mithun, Lloret, Jaime, Tripathy, Somanath.  2020.  A secure task-offloading framework for cooperative fog computing environment. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–6.
Fog computing architecture allows the end-user devices of an Internet of Things (IoT) application to meet their latency and computation requirements by offloading tasks to a fog node in proximity. This fog node may in turn offload the task to a neighboring fog node or to the cloud, based on an optimal node selection policy. Several such node selection policies have been proposed that facilitate the selection of an optimal node, minimizing delay and energy consumption. However, one crucial assumption of these schemes is that all the networked fog nodes are an authorized part of the fog network. This assumption is not valid, especially in a cooperative fog computing environment like a smart city, where fog nodes of multiple applications cooperate to meet their latency and computation requirements. In this paper, we propose a secure task-offloading framework for a distributed fog computing environment based on smart contracts on the blockchain. The proposed framework allows a fog node to securely offload tasks to a neighboring fog node, even if no prior trust relation exists. The security analysis of the proposed framework shows how non-authenticated fog nodes are prevented from taking up offloading tasks.
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effect, and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on the adversarial input. We show that the strategic input transformation teaming defense can achieve high defense success rates and is more robust, with high attack prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
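A minimal sketch of the input-transformation-teaming idea: run the same model on several transformed copies of an input and vote, flagging disagreement as possibly adversarial. The stand-in linear "model" and the particular transformations are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "model": classifies by the sign of a fixed linear score.
w = rng.normal(size=16)

def predict(x):
    return int(x @ w > 0)

# A bank of simple input transformations (the diversity the defense relies on).
transforms = [
    lambda x: x,
    lambda x: np.clip(x, -1, 1),
    lambda x: np.round(x * 4) / 4,                       # quantization
    lambda x: x + rng.normal(scale=0.05, size=x.shape),  # small noise
]

def teamed_predict(x):
    """Majority vote across transformed copies of the input; disagreement
    among team members flags a possibly adversarial input."""
    votes = [predict(t(x)) for t in transforms]
    majority = max(set(votes), key=votes.count)
    flagged = len(set(votes)) > 1
    return majority, flagged
```

On a benign input far from the decision boundary, all transformed copies agree; adversarial perturbations sitting close to the boundary tend to flip under at least one transformation.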
2021-06-24
Wesemeyer, Stephan, Boureanu, Ioana, Smith, Zach, Treharne, Helen.  2020.  Extensive Security Verification of the LoRaWAN Key-Establishment: Insecurities & Patches. 2020 IEEE European Symposium on Security and Privacy (EuroS P). :425–444.
LoRaWAN (Low-power Wide-Area Networks) is the main specification for application-level IoT (Internet of Things). The current version, published in October 2017, is LoRaWAN 1.1, with its 1.0 precursor still being the main specification supported by commercial devices such as PyCom LoRa transceivers. Prior (semi)-formal investigations into the security of the LoRaWAN protocols are scarce, especially for LoRaWAN 1.1. Moreover, amongst these few, the current encodings [4], [9] of LoRaWAN into verification tools unfortunately rely on much-simplified versions of the LoRaWAN protocols, undermining the relevance of the results in practice. In this paper, we fill in some of these gaps. Whilst we briefly discuss the most recent cryptography-oriented works [5] that looked at LoRaWAN 1.1, our true focus is on producing formal analyses of the security and correctness of LoRaWAN, mechanised inside automated tools. To this end, we use the state-of-the-art prover, Tamarin. Importantly, our Tamarin models are a faithful and precise rendering of the LoRaWAN specifications. For example, we model the bespoke nonce-generation mechanisms newly introduced in LoRaWAN 1.1, as well as the “classical” but short-domain nonces in LoRaWAN 1.0, and make recommendations regarding these. Whilst we include small parts on device-commissioning and application-level traffic, we primarily scrutinise the Join Procedure of LoRaWAN, and focus on version 1.1 of the specification, but also include an analysis of LoRaWAN 1.0. To this end, we consider three increasingly strong threat models, resting on a Dolev-Yao attacker acting modulo different requirements made on various channels (e.g., secure/insecure) and the level of trust placed on entities (e.g., honest/corruptible network servers). Importantly, one of these threat models is exactly in line with the LoRaWAN specification, yet it unfortunately still leads to attacks.
In response to the exhibited attacks, we propose a minimal patch of the LoRaWAN 1.1 Join Procedure, which is as backwards-compatible as possible with the current version. We analyse and prove this patch secure in the strongest threat model mentioned above. This work has been responsibly disclosed to the LoRa Alliance, and we are liaising with the Security Working Group of the LoRa Alliance, in order to improve the clarity of the LoRaWAN 1.1 specifications in light of our findings, but also by using formal analysis as part of a feedback-loop of future and current specification writing.
Teplyuk, P.A., Yakunin, A.G., Sharlaev, E.V..  2020.  Study of Security Flaws in the Linux Kernel by Fuzzing. 2020 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon). :1–5.
A distinctive feature of modern operating systems based on the Linux kernel is their leading use in cloud technologies, mobile devices, and the Internet of Things, which is accompanied by the emergence of more and more security threats at the kernel level. In order to improve the security of existing and future Linux distributions, it is necessary to analyze the existing approaches and tools for automated vulnerability detection and to conduct experimental security testing of some current versions of the kernel. The research is based on fuzzing, a software testing technique that automatically detects implementation errors by feeding deliberately malformed data to the program under test and analyzing its response. Using the Syzkaller software tool, which implements a code-coverage-guided approach, kernel-level vulnerabilities were identified in stable versions of the Linux kernel used in modern distributions. This direction of research is relevant and requires further development in order to detect zero-day vulnerabilities in new versions of the kernel, an important and necessary step in increasing the security of the Linux operating system family.
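A toy sketch of the mutation-based fuzzing loop the abstract describes: mutate a valid seed input and treat any undocumented exception as a finding. The parser, its planted bug, and the oracle are invented for illustration; Syzkaller's coverage-guided kernel fuzzing is far more sophisticated:

```python
import random

random.seed(5)

def parse_header(data: bytes) -> bytes:
    """Toy parser under test: b'MAGIC' + length byte + payload + checksum byte.
    Planted bug: the checksum byte is read without a bounds check."""
    if not data.startswith(b"MAGIC"):
        raise ValueError("bad magic")
    length = data[5]
    checksum = data[6 + length]          # bug: may raise IndexError
    payload = data[6:6 + length]
    if checksum != sum(payload) % 256:
        raise ValueError("bad checksum")
    return payload

SEED_INPUT = b"MAGIC\x03abc&"            # '&' == 38 == (97 + 98 + 99) % 256

def fuzz(trials=2000):
    """Mutation-based fuzzing: flip one random byte of a valid seed input and
    treat any exception other than the documented ValueError as a finding."""
    findings = []
    for _ in range(trials):
        data = bytearray(SEED_INPUT)
        data[random.randrange(len(data))] = random.randrange(256)
        try:
            parse_header(bytes(data))
        except ValueError:
            pass                         # documented, expected failure mode
        except Exception as exc:         # anything else is a potential bug
            findings.append((bytes(data), repr(exc)))
    return findings

bugs = fuzz()
```

Mutating the length byte to a value larger than the remaining buffer triggers the unchecked read, which the oracle reports as a crash-style finding.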
Dang, Tran Khanh, Truong, Phat T. Tran, Tran, Pi To.  2020.  Data Poisoning Attack on Deep Neural Network and Some Defense Methods. 2020 International Conference on Advanced Computing and Applications (ACOMP). :15–22.
In recent years, Artificial Intelligence has disruptively changed information technology and software engineering, with a proliferation of technologies and applications based on it. However, recent research shows that AI models in general, and Deep Learning models in particular, are vulnerable to being hacked and can be misused for bad purposes. In this paper, we carry out a brief review of the data poisoning attack, one of two dangerous recently emerging attack classes, and the state-of-the-art defense methods for this problem. Finally, we discuss current challenges and future developments.
Tsaknakis, Ioannis, Hong, Mingyi, Liu, Sijia.  2020.  Decentralized Min-Max Optimization: Formulations, Algorithms and Applications in Network Poisoning Attack. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :5755–5759.
This paper discusses formulations and algorithms which allow a number of agents to collectively solve problems involving both (non-convex) minimization and (concave) maximization operations. These problems have a number of interesting applications in information processing and machine learning, and in particular can be used to model an adversarial learning problem called network data poisoning. We develop a number of algorithms to efficiently solve these non-convex min-max optimization problems, by combining techniques such as gradient tracking from the decentralized optimization literature and gradient descent-ascent schemes from the min-max optimization literature. We also establish convergence to a first-order stationary point under certain conditions. Finally, we perform experiments to demonstrate that the proposed algorithms are effective for the network data poisoning attack problem.
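The basic simultaneous gradient descent-ascent step used by such methods can be sketched on a toy strongly convex-concave objective. The objective and step size below are illustrative, and the decentralized gradient-tracking machinery is omitted:

```python
# Toy min-max problem: min_x max_y f(x, y) = 0.5*x**2 - 0.5*y**2 + x*y,
# whose unique saddle point is (0, 0).
def grad_x(x, y):
    return x + y          # partial derivative of f with respect to x

def grad_y(x, y):
    return x - y          # partial derivative of f with respect to y

def gradient_descent_ascent(x, y, step=0.1, iters=300):
    """Simultaneous descent on the min variable x and ascent on the max variable y."""
    for _ in range(iters):
        x, y = x - step * grad_x(x, y), y + step * grad_y(x, y)
    return x, y

x_star, y_star = gradient_descent_ascent(2.0, -1.5)
```

For this objective the iteration matrix has spectral radius below one, so the iterates spiral into the saddle point; on bilinear or non-convex problems plain GDA can cycle or diverge, which is why the paper combines it with more careful schemes.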
Wu, Chongke, Shao, Sicong, Tunc, Cihan, Hariri, Salim.  2020.  Video Anomaly Detection using Pre-Trained Deep Convolutional Neural Nets and Context Mining. 2020 IEEE/ACS 17th International Conference on Computer Systems and Applications (AICCSA). :1—8.
Anomaly detection is critically important for intelligent surveillance systems to detect in a timely manner any malicious activities. Many video anomaly detection approaches using deep learning methods focus on a single camera video stream with a fixed scenario. These deep learning methods use large-scale training data with large complexity. As a solution, in this paper, we show how to use pre-trained convolutional neural net models to perform feature extraction and context mining, and then use denoising autoencoder with relatively low model complexity to provide efficient and accurate surveillance anomaly detection, which can be useful for the resource-constrained devices such as edge devices of the Internet of Things (IoT). Our anomaly detection model makes decisions based on the high-level features derived from the selected embedded computer vision models such as object classification and object detection. Additionally, we derive contextual properties from the high-level features to further improve the performance of our video anomaly detection method. We use two UCSD datasets to demonstrate that our approach with relatively low model complexity can achieve comparable performance compared to the state-of-the-art approaches.
2021-06-02
Gohari, Parham, Hale, Matthew, Topcu, Ufuk.  2020.  Privacy-Preserving Policy Synthesis in Markov Decision Processes. 2020 59th IEEE Conference on Decision and Control (CDC). :6266—6271.
In decision-making problems, the actions of an agent may reveal sensitive information that drives its decisions. For instance, a corporation's investment decisions may reveal its sensitive knowledge about market dynamics. To prevent this type of information leakage, we introduce a policy synthesis algorithm that protects the privacy of the transition probabilities in a Markov decision process. We use differential privacy as the mathematical definition of privacy. The algorithm first perturbs the transition probabilities using a mechanism that provides differential privacy. Then, based on the privatized transition probabilities, we synthesize a policy using dynamic programming. Our main contribution is to bound the "cost of privacy," i.e., the difference between the expected total rewards with privacy and the expected total rewards without privacy. We also show that computing the cost of privacy has time complexity that is polynomial in the parameters of the problem. Moreover, we establish that the cost of privacy increases with the strength of differential privacy protections, and we quantify this increase. Finally, numerical experiments on two example environments validate the established relationship between the cost of privacy and the strength of data privacy protections.
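The two-step recipe (privatize the transition probabilities, then plan on the privatized model) can be sketched as follows. The Laplace-plus-renormalization step here merely stands in for the paper's differentially private mechanism, and the two-state MDP and noise scale are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A small MDP: 2 states, 2 actions. P[a][s] is a distribution over next states.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0], [0.0, 0.5]])   # R[a][s]: reward for action a in state s
gamma = 0.9

def privatize(P, scale=0.05):
    """Add Laplace noise to the transition probabilities, then renormalize
    (a crude projection back onto the probability simplex)."""
    noisy = P + rng.laplace(scale=scale, size=P.shape)
    noisy = np.clip(noisy, 1e-6, None)
    return noisy / noisy.sum(axis=-1, keepdims=True)

def value_iteration(P, R, gamma, iters=500):
    """Dynamic programming on the (possibly privatized) model."""
    V = np.zeros(P.shape[-1])
    for _ in range(iters):
        Q = R + gamma * P @ V            # Q[a][s]
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)           # values and greedy policy

V_true, _ = value_iteration(P, R, gamma)
V_priv, policy = value_iteration(privatize(P), R, gamma)
cost_of_privacy = np.abs(V_true - V_priv).max()
```

The quantity `cost_of_privacy` is the empirical analogue of the bound studied in the paper: stronger noise (a larger `scale`) yields stronger privacy but a larger gap between the true and privatized value functions.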
Wang, Lei, Manchester, Ian R., Trumpf, Jochen, Shi, Guodong.  2020.  Initial-Value Privacy of Linear Dynamical Systems. 2020 59th IEEE Conference on Decision and Control (CDC). :3108—3113.
This paper studies initial-value privacy problems of linear dynamical systems. We consider a standard linear time-invariant system with random process and measurement noises. For such a system, eavesdroppers having access to system output trajectories may infer the system initial states, leading to initial-value privacy risks. When a finite number of output trajectories are eavesdropped, we consider a requirement that any guess about the initial values can be plausibly denied. When an infinite number of output trajectories are eavesdropped, we consider a requirement that the initial values should not be uniquely recoverable. In view of these two privacy requirements, we define differential initial-value privacy and intrinsic initial-value privacy, respectively, for the system as metrics of privacy risks. First of all, we prove that the intrinsic initial-value privacy is equivalent to unobservability, while the differential initial-value privacy can be achieved for a privacy budget depending on an extended observability matrix of the system and the covariance of the noises. Next, the inherent network nature of the considered linear system is explored, where each individual state corresponds to a node and the state and output matrices induce interaction and sensing graphs, leading to a network system. Under this network system perspective, we allow the initial states at some nodes to be public, and investigate the resulting intrinsic initial-value privacy of each individual node. We establish necessary and sufficient conditions for such individual node initial-value privacy, and also prove that the intrinsic initial-value privacy of individual nodes is generically determined by the network structure.
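The stated equivalence between intrinsic initial-value privacy and unobservability can be checked numerically from the rank of the observability matrix O = [C; CA; ...; CA^(n-1)]. The system matrices below are an illustrative example, not taken from the paper:

```python
import numpy as np

# A 3-state linear system in which only the first two states feed the sensor;
# the third state is decoupled from the output and hence unobservable.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.5]])
C = np.array([[1.0, 0.0, 0.0]])      # sensing matrix: only state 1 is measured

n = A.shape[0]
blocks = [C]
for _ in range(n - 1):
    blocks.append(blocks[-1] @ A)    # C, CA, CA^2, ...
O = np.vstack(blocks)

observable = np.linalg.matrix_rank(O) == n
```

Here the rank of O is 2 < 3, so the pair (A, C) is unobservable: the initial value of the third state cannot be recovered from output trajectories, which is exactly the intrinsic-privacy situation the paper characterizes.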
2021-06-01
Junchao, CHEN, Baorong, ZHAI, Yibing, DONG, Tao, WU, Kai, YOU.  2020.  Design Of TT&C Resource Automatic Scheduling Interface Middleware With High Concurrency and Security. 2020 International Conference on Information Science, Parallel and Distributed Systems (ISPDS). :171—176.
In order to significantly improve reliable interaction and fast processing when the TT&C (Tracking, Telemetry and Command) Resource Scheduling and Management System (TRSMS) communicates with external systems, which are diverse, multi-directional, and highly concurrent, this paper designs and implements a highly concurrent and secure middleware for the TT&C Resource Automatic Scheduling Interface (TRASI). The middleware uses a memory pool, data pool, thread pool, and task pool to improve the efficiency of concurrent processing, and uses a rule dictionary, communication handshake, and wait-retransmission mechanism to ensure data interaction security and reliability. This middleware can effectively meet the requirements of TRASI for data exchange with external users and systems, significantly improve data processing speed and efficiency, and promote the information technology and automation level of the Aerospace TT&C Network Management Center (TNMC).
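The thread-pool/task-pool pattern such middleware builds on can be sketched in a few lines; the memory pool, data pool, and protocol-specific rule dictionary are omitted, and the squaring "task" is a placeholder:

```python
import queue
import threading

# Minimal task-pool / thread-pool pattern: worker threads drain a shared
# task queue, the basic structure behind high-concurrency middleware.
tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:              # sentinel: shut this worker down
            tasks.task_done()
            return
        with results_lock:            # protect the shared result list
            results.append(item * item)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(10):
    tasks.put(i)                      # enqueue work
for _ in threads:
    tasks.put(None)                   # one sentinel per worker
tasks.join()                          # wait until every item is processed
for t in threads:
    t.join()
```

The queue decouples producers from a fixed pool of consumers, so bursts of concurrent requests are absorbed by the task pool rather than by spawning a thread per request.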
Naderi, Pooria Taghizadeh, Taghiyareh, Fattaneh.  2020.  LookLike: Similarity-based Trust Prediction in Weighted Sign Networks. 2020 6th International Conference on Web Research (ICWR). :294–298.
Trust networks are widely considered to be one of the most important aspects of social networks, with many applications in recommender systems and opinion formation. Few researchers have addressed the problem of trust/distrust prediction, and it has not yet been established whether similarity measures can support trust prediction. The present paper aims to validate that similar users have related trust relationships. To predict trust relations between two users, the LookLike algorithm is introduced. We then use the LookLike algorithm's results as new features for supervised classifiers to predict the trust/distrust label. We chose a list of similarity measures to examine our claim on four real-world trust network datasets. The results demonstrate a strong correlation between users' similarity and their opinions on trust networks. Due to the tight relation between trust prediction and truth discovery, we believe that our similarity-based algorithm could be a promising solution in that challenging domain.
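The intuition that similar users hold related trust relationships can be illustrated with a plain Jaccard overlap of trust circles. The LookLike algorithm itself is more involved, and the users and threshold below are invented for illustration:

```python
# Each user's "trust circle": the set of users they already trust.
trusted = {
    "alice":   {"carol", "dave", "erin"},
    "bob":     {"carol", "dave", "frank"},
    "mallory": {"grace"},
}

def jaccard(a, b):
    """Jaccard similarity of two users' trust circles."""
    union = trusted[a] | trusted[b]
    return len(trusted[a] & trusted[b]) / len(union) if union else 0.0

def predict_trust(a, b, threshold=0.3):
    """Predict 'trust' when the two users' trust circles overlap enough."""
    return jaccard(a, b) >= threshold
```

In the paper's setting, such similarity scores are not used as the final answer but are fed as features into a supervised classifier that outputs the trust/distrust label.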
Thakare, Vaishali Ravindra, Singh, K. John, Prabhu, C S R, Priya, M..  2020.  Trust Evaluation Model for Cloud Security Using Fuzzy Theory. 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). :1–4.
Cloud computing is a computing model which allows users to rent virtualized computing resources on a pay-as-you-go basis. It offers many advantages over traditional models in the IT industry and in healthcare. However, a lack of trust between cloud service users (CSUs) and cloud service providers (CSPs) hinders the widespread adoption of cloud technologies across industries. Different models have been developed to overcome the uncertainty and complexity between CSPs and CSUs regarding suitability. Several researchers have applied fuzzy logic to resource optimization, scheduling, and service dependability in cloud computing, but data storage and security using fuzzy logic have been overlooked. In this paper, a trust evaluation model is proposed for cloud computing security using fuzzy theory. The authors evaluate how fuzzy logic increases efficiency in trust evaluation. To validate the effectiveness of the proposed fuzzy trust evaluation model (FTEM), the authors present a case study of a healthcare organization.
2021-05-25
Murguia, Carlos, Tabuada, Paulo.  2020.  Privacy Against Adversarial Classification in Cyber-Physical Systems. 2020 59th IEEE Conference on Decision and Control (CDC). :5483–5488.
For a class of Cyber-Physical Systems (CPSs), we address the problem of performing computations over the cloud without revealing private information about the structure and operation of the system. We model CPSs as a collection of input-output dynamical systems (the system operation modes). Depending on the mode the system is operating in, the output trajectory is generated by one of these systems in response to driving inputs. Output measurements and driving inputs are sent to the cloud for processing purposes. We capture this "processing" through some function (of the input-output trajectory) that we require the cloud to compute accurately, referred to here as the trajectory utility. However, for privacy reasons, we would like to keep the mode private, i.e., we do not want the cloud to correctly identify what mode of the CPS produced a given trajectory. To this end, we distort trajectories before transmission and send the corrupted data to the cloud. We provide mathematical tools (based on output-regulation techniques) to properly design distorting mechanisms so that: 1) the original and distorted trajectories lead to the same utility; and 2) the distorted data leads the cloud to misclassify the mode.
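The idea of distorting a trajectory so that a fixed utility is still computed exactly while pointwise data is masked can be illustrated with a toy additive distortion. The mean-as-utility choice and the zero-sum noise are illustrative stand-ins for the paper's output-regulation-based construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def distort(trajectory):
    """Add zero-sum noise: the cloud's utility (here, the trajectory mean)
    is preserved exactly, but pointwise values are masked."""
    noise = rng.normal(scale=1.0, size=trajectory.shape)
    noise -= noise.mean()            # project the noise onto the zero-mean subspace
    return trajectory + noise

trajectory = np.sin(np.linspace(0, 2 * np.pi, 50))
masked = distort(trajectory)

utility_original = trajectory.mean()
utility_masked = masked.mean()
```

The noise lives in the null space of the utility functional, so utility is invariant by construction, while a classifier looking at the waveform itself sees heavily corrupted samples.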
Tian, Nianfeng, Guo, Qinglai, Sun, Hongbin, Huang, Jianye.  2020.  A Synchronous Iterative Method of Power Flow in Inter-Connected Power Grids Considering Privacy Preservation: A CPS Perspective. 2020 IEEE 4th Conference on Energy Internet and Energy System Integration (EI2). :782–787.
The increasing development of the smart grid allows modern power grids to inter-connect with each other and form a large power system, making it possible and advantageous to conduct coordinated power flow among several grids. The communication burden and privacy issues are the prominent challenges in the application of synchronous iterative power flow methods. In this paper, a synchronous iterative method of power flow in inter-connected power grids considering privacy preservation is proposed. By establishing a masked model of power flow for each sub-grid, the synchronous iteration is conducted by gathering the masked models of the sub-grids in the coordination center and solving the masked correction equation in a centralized manner at each step. In general, the proposed method concentrates the major calculation of power flow in the coordination center, reduces the communication burden, and guarantees the privacy preservation of the sub-grids. A case study on the IEEE 118-bus test system demonstrates the feasibility and effectiveness of the proposed methodology.
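The privacy-preserving flavor of a masked-model exchange can be illustrated with pairwise cancelling masks: the coordination center learns an aggregate exactly while no single sub-grid's raw value is revealed. This toy stands in for the paper's masked power-flow correction equations, and the values and mask range are invented:

```python
import random

random.seed(4)

# Private quantities held by three sub-grids.
values = [3.2, -1.5, 0.7]

n = len(values)
pairwise = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        m = random.uniform(-10, 10)
        pairwise[i][j] = m           # sub-grid i adds the shared mask
        pairwise[j][i] = -m          # sub-grid j subtracts the same mask

# Each sub-grid sends only its masked value to the coordination center.
masked = [values[i] + sum(pairwise[i]) for i in range(n)]
aggregate = sum(masked)              # pairwise masks cancel: equals sum(values)
```

Each transmitted value is offset by random masks, yet the masks cancel in the sum, so the center can run its centralized computation on the aggregate without seeing any individual contribution.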
Tashev, Komil, Rustamova, Sanobar.  2020.  Analysis of Subject Recognition Algorithms based on Neural Networks. 2020 International Conference on Information Science and Communications Technologies (ICISCT). :1—4.
This article describes the principles of construction, training, and use of neural networks. The features of the neural network approach are indicated, as well as the range of tasks for which it is most suitable. The operating algorithms, software implementation, and results of an artificial neural network are presented.
Karimov, Madjit, Tashev, Komil, Rustamova, Sanobar.  2020.  Application of the Aho-Corasick algorithm to create a network intrusion detection system. 2020 International Conference on Information Science and Communications Technologies (ICISCT). :1—5.
One of the main motivations for studying pattern matching techniques is their significant role in real-world applications, such as intrusion detection systems. The purpose of network intrusion detection systems (NIDS) is to protect the information and communication network from unauthorized access. This article provides an analysis of exact-match and fuzzy matching methods, and discusses a new implementation of the classic Aho-Corasick pattern matching algorithm at the hardware level. The proposed approach to implementing the Aho-Corasick algorithm can ensure the efficient use of resources such as memory and energy.
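A minimal software sketch of the classic Aho-Corasick automaton (goto, fail, and output functions); the paper's hardware-level implementation is beyond a short example, but the matching logic is the same:

```python
from collections import deque

def build_automaton(patterns):
    """Build the goto, fail and output functions of the Aho-Corasick automaton."""
    goto = [{}]                      # state -> {character: next state}
    output = [set()]                 # state -> patterns ending at this state
    for pattern in patterns:
        state = 0
        for ch in pattern:
            if ch not in goto[state]:
                goto.append({})
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(pattern)

    fail = [0] * len(goto)
    queue = deque(goto[0].values())  # depth-1 states fail to the root
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            output[nxt] |= output[fail[nxt]]   # inherit suffix matches
    return goto, fail, output

def search(text, patterns):
    """Return (start_index, pattern) for every occurrence of any pattern."""
    goto, fail, output = build_automaton(patterns)
    state, matches = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pattern in output[state]:
            matches.append((i - len(pattern) + 1, pattern))
    return matches
```

A single left-to-right pass over the text reports all occurrences of all patterns, which is what makes the algorithm attractive for NIDS signature matching.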
Ahmedova, Oydin, Mardiyev, Ulugbek, Tursunov, Otabek.  2020.  Generation and Distribution Secret Encryption Keys with Parameter. 2020 International Conference on Information Science and Communications Technologies (ICISCT). :1—4.
This article describes a new way to generate and distribute secret encryption keys, in which the processes of generating a public key and forming a secret encryption key are performed in an algebra with a parameter; keeping this parameter secret provides increased strength of the key.
Taha, Mohammad Bany, Chowdhury, Rasel.  2020.  GALB: Load Balancing Algorithm for CP-ABE Encryption Tasks in E-Health Environment. 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN). :165–170.
Security of personal data in e-healthcare has always been a challenging issue. The embedded and wearable devices used to collect these personal and critical data of patients and users are sensitive in nature. Attribute-Based Encryption is believed to provide access control along with data security for data distributed among multiple parties. These resource-limited devices do have the capability to secure the data before sending it to the cloud, but doing so increases the overhead and latency of running the encryption algorithm; if confidentiality is required on top of that, latency grows further. In order to reduce latency and overhead, we propose a new load balancing algorithm that distributes the data to nearby devices with available resources, which encrypt the data and send it to the cloud. In this article, we propose a load balancing algorithm for e-health systems called GALB. Our algorithm is based on a Genetic Algorithm (GA). GALB distributes the tasks received at the main gateway among the devices in the e-health environment. The distribution strategy is based on the available resources of the devices, the distance between the gateway and those devices, the complexity (size) of the task, and the CP-ABE encryption policy length. In order to evaluate the performance of our algorithm, we compare the near-optimal solution proposed by GALB with the optimal solution obtained by linear programming (LP).
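The flavor of a GA-based task distribution can be sketched as follows. The cost model (device capacity, gateway distance, task size) and the GA parameters are simplified assumptions for illustration, not GALB itself:

```python
import random

random.seed(0)

# Toy setting: assign each task to one of several devices. The cost of an
# assignment mixes per-device load (task size / capacity) and gateway distance.
task_sizes = [4, 2, 7, 1, 5, 3]
capacity = [2.0, 1.0, 1.5]           # relative compute capacity per device
distance = [1.0, 3.0, 2.0]           # gateway-to-device distance

def cost(assignment):
    load = [0.0] * len(capacity)
    for task, dev in zip(task_sizes, assignment):
        load[dev] += task / capacity[dev] + 0.1 * distance[dev]
    return max(load)                 # makespan-style objective to minimize

def evolve(pop_size=40, generations=60, mutation=0.1):
    """Plain genetic algorithm: selection, one-point crossover, mutation."""
    pop = [[random.randrange(len(capacity)) for _ in task_sizes]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(task_sizes))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mutation:
                child[random.randrange(len(child))] = random.randrange(len(capacity))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
```

As in the paper's evaluation, the GA result is a near-optimal assignment; it can be compared against an exact LP/ILP solution of the same objective to gauge the optimality gap.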
Laato, Samuli, Farooq, Ali, Tenhunen, Henri, Pitkamaki, Tinja, Hakkala, Antti, Airola, Antti.  2020.  AI in Cybersecurity Education - A Systematic Literature Review of Studies on Cybersecurity MOOCs. 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT). :6—10.
Machine learning (ML) techniques are changing both the offensive and defensive aspects of cybersecurity. The implications are especially strong for privacy, as ML approaches provide unprecedented opportunities to make use of collected data. Thus, education on cybersecurity and AI is needed. To investigate how AI and cybersecurity should be taught together, we look at previous studies on cybersecurity MOOCs by conducting a systematic literature review. The initial search resulted in 72 items, and after screening for only peer-reviewed publications on cybersecurity online courses, 15 studies remained. Three of the studies concerned multiple cybersecurity MOOCs, whereas 12 focused on individual courses. The number of published works evaluating specific cybersecurity MOOCs was found to be small compared to the number of available cybersecurity MOOCs. Analysis of the studies revealed that cybersecurity education is, in almost all cases, organised by topic rather than by the tools used, making it difficult for learners to find focused information on AI applications in cybersecurity. Furthermore, there is a gap in the academic literature on how AI applications in cybersecurity should be taught in online courses.
Addae, Joyce, Radenkovic, Milena, Sun, Xu, Towey, Dave.  2016.  An extended perspective on cybersecurity education. 2016 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). :367—369.
The current trend of ubiquitous device use, whereby computing is becoming increasingly context-aware and personal, has created a growing concern for the protection of personal privacy. Privacy is an essential component of security, and there is a need to be able to secure personal computers and networks to minimize privacy depreciation within cyberspace. Human error has been recognized as playing a major role in security breaches; hence technological solutions alone cannot adequately address the emerging security and privacy threats. Home users are particularly vulnerable to cybersecurity threats for a number of reasons, including a particularly important one that our research seeks to address: the lack of cybersecurity education. We argue that research seeking to address the human element of cybersecurity should not be limited to the design of more usable technical security mechanisms, but should be extended and applied to offering appropriate training to all stakeholders within cyberspace.
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Cross-Layer Strategic Ensemble Defense Against Adversarial Examples. 2020 International Conference on Computing, Networking and Communications (ICNC). :456—460.
Deep neural networks (DNNs) have demonstrated success in multiple domains. However, DNN models are inherently vulnerable to adversarial examples, which are generated by adding adversarial perturbations to benign inputs to fool the DNN model into misclassifying. In this paper, we present a cross-layer strategic ensemble framework and a suite of robust defense algorithms, which are attack-independent and capable of auto-repairing and auto-verifying the target model being attacked. Our strategic ensemble approach makes three original contributions. First, we employ input-transformation diversity to design the input-layer strategic transformation ensemble algorithms. Second, we utilize model-disagreement diversity to develop the output-layer strategic model ensemble algorithms. Finally, we create an input-output cross-layer strategic ensemble defense that strengthens the defensibility by combining diverse input-transformation-based model ensembles with diverse output verification model ensembles. Evaluated over 10 attacks on the ImageNet dataset, we show that our strategic ensemble defense algorithms can achieve high defense success rates and are more robust, with high attack prevention success rates and low benign false negative rates, compared to existing representative defenses.
Baccari, Sihem, Touati, Haifa, Hadded, Mohamed, Muhlethaler, Paul.  2020.  Performance Impact Analysis of Security Attacks on Cross-Layer Routing Protocols in Vehicular Ad hoc Networks. 2020 International Conference on Software, Telecommunications and Computer Networks (SoftCOM). :1—6.
Recently, several cross-layer protocols have been designed for vehicular networks to optimize data dissemination by ensuring internal communications between the routing and MAC layers. In this context, a cross-layer protocol called TDMA-aware Routing Protocol for Multi-hop communications (TRPM) was proposed in order to efficiently select a relay node based on time-slot scheduling information obtained from the MAC layer. However, due to the constant evolution of cyber-attacks on the routing and MAC layers, data dissemination in vehicular networks is vulnerable to several types of attack. In this paper, we identify the different attack models that can disrupt the cross-layer operation of the TRPM protocol and assess their impact on performance through simulation. Several new vulnerabilities related to the MAC slot scheduling process are identified. Exploiting these vulnerabilities would lead to severe channel capacity wastage, where up to half of the free slots could not be reserved.
Zhao, Zhao, Hou, Yanzhao, Tang, Xiaosheng, Tao, Xiaofeng.  2020.  Demo Abstract: Cross-layer Authentication Based on Physical Channel Information using OpenAirInterface. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1334—1335.
The time-varying properties of the wireless channel are a powerful source of information that can complement and enhance traditional security mechanisms. Therefore, we propose a cross-layer authentication mechanism that combines physical-layer channel information with the traditional authentication mechanism in LTE. To verify the feasibility of the proposed mechanism, we build a cross-layer authentication system that extracts the phase-shift information of a typical UE and uses an ensemble learning method to train the fingerprint map, based on OAI LTE. Experimental results show that our cross-layer authentication mechanism can effectively improve the security of the LTE system.