Biblio

Filters: Keyword is telecommunication computing
2020-07-16
Ayub, Md. Ahsan, Smith, Steven, Siraj, Ambareen.  2019.  A Protocol Independent Approach in Network Covert Channel Detection. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :165–170.

Network covert channels are used in various cyberattacks, including disclosure of sensitive information and enabling stealth tunnels for botnet commands. With time and technology, covert channels are becoming more prevalent, complex, and difficult to detect. Current detection methods are protocol- and pattern-specific, which requires investing significant time and resources in applying various techniques to catch the different types of covert channels. This paper reviews several patterns of network storage covert channels, describes the generation of a network traffic dataset with covert channels, and proposes a generic, protocol-independent approach for the detection of network storage covert channels using a supervised machine learning technique. The implementation of the proposed generic detection model can reduce the number of techniques needed to prevent covert channel communication in network traffic. The datasets we have generated for experimentation represent storage covert channels in the IP, TCP, and DNS protocols and are available upon request for future research in this area.
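
A minimal sketch of the kind of pipeline the abstract describes: per-packet header fields are used as features for a supervised classifier that flags storage covert channels. The dataset file, feature names, and classifier choice below are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch (not the authors' code): train a supervised classifier on
    # per-packet header-field features to flag storage covert channels.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Each row: header fields extracted from one packet, plus a 'covert' label (0/1).
    df = pd.read_csv("covert_channel_dataset.csv")   # hypothetical dataset file
    X = df.drop(columns=["covert"])                  # e.g. ip_id, ip_ttl, tcp_seq, dns_qname_len, ...
    y = df["covert"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))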

2020-07-10
Muñoz, Jordi Zayuelas i, Suárez-Varela, José, Barlet-Ros, Pere.  2019.  Detecting cryptocurrency miners with NetFlow/IPFIX network measurements. 2019 IEEE International Symposium on Measurements & Networking (M&N). :1–6.

In the last few years, cryptocurrency mining has become an increasingly significant part of Internet activity and nowadays even has a noticeable impact on the global economy. This has motivated the emergence of a new malicious activity called cryptojacking, which consists of compromising other machines connected to the Internet and leveraging their resources to mine cryptocurrencies. In this context, it is of particular interest for network administrators to detect possible cryptocurrency miners using network resources without permission. Currently, it is possible to detect them using IP address lists from known mining pools, processing information from DNS traffic, or directly performing Deep Packet Inspection (DPI) over all the traffic. However, all these methods are still ineffective at detecting miners that use unknown mining servers, or are too expensive to deploy in real-world networks with large traffic volumes. In this paper, we present a machine learning-based method able to detect cryptocurrency miners using NetFlow/IPFIX network measurements. Our method does not require inspecting the packets' payload; as a result, it achieves cost-efficient miner detection with accuracy similar to DPI-based techniques.
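
A rough sketch of flow-level detection in the spirit of this abstract, assuming NetFlow/IPFIX records have been exported to a labeled CSV. The file name, feature list, and model are placeholders, not the paper's exact design; the point is that only flow-level statistics are used, never payload.

    # Minimal sketch: classify flows as mining-related using only flow-level features.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    flows = pd.read_csv("netflow_labeled.csv")       # hypothetical export: one row per flow
    features = ["duration", "packets", "bytes", "bytes_per_packet",
                "mean_interarrival", "dst_port"]     # assumed flow-level features
    X, y = flows[features], flows["is_miner"]        # label: flow belongs to a mining connection

    clf = GradientBoostingClassifier()
    print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())  # rough performance check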

2020-05-26
Hamamreh, Rushdi A., Ayyad, Mohammad, Jamoos, Mohammad.  2019.  RAD: Reinforcement Authentication DYMO Protocol for MANET. 2019 International Conference on Promising Electronic Technologies (ICPET). :136–141.
A mobile ad hoc network (MANET) has no fixed infrastructure or centralized server to manage the connections between nodes. Rather, the nodes in a MANET move randomly. Thus, it is risky to exchange data between nodes because there is a high possibility of a malicious node being in the path. In this paper, we describe a new authentication technique using Message Digest 5 (MD5) hashing for the Dynamic MANET On-demand (DYMO) protocol based on reinforcement learning. In addition, we describe an encryption technique that can be used without the need for a third party to distribute a secret key. After implementing the suggested model, results showed a remarkable enhancement in securing the path by increasing the packet delivery ratio and average throughput. On the other hand, there was an increase in end-to-end delay due to the time spent in cryptographic operations.
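
For illustration only, one way an MD5-based authentication tag could be attached to a DYMO route request: a keyed digest over the message fields is recomputed and checked at the receiver. The field layout and key handling are assumptions, not the paper's exact protocol.

    # Illustrative sketch: keyed MD5 digest over route-request fields.
    import hashlib

    def rreq_digest(orig_addr: str, target_addr: str, seq_num: int, secret: bytes) -> str:
        msg = f"{orig_addr}|{target_addr}|{seq_num}".encode()
        return hashlib.md5(msg + secret).hexdigest()   # keyed MD5 over the RREQ fields

    secret = b"shared-network-secret"                  # hypothetical pre-established secret
    tag = rreq_digest("10.0.0.1", "10.0.0.9", 42, secret)

    # Receiver recomputes the digest and drops the RREQ if it does not match.
    assert tag == rreq_digest("10.0.0.1", "10.0.0.9", 42, secret)
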
2020-05-11
Vashist, Abhishek, Keats, Andrew, Pudukotai Dinakarrao, Sai Manoj, Ganguly, Amlan.  2019.  Securing a Wireless Network-on-Chip Against Jamming Based Denial-of-Service Attacks. 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :320–325.
Wireless Networks-on-Chips (NoCs) have emerged as a panacea to the non-scalable multi-hop data transmission paths in traditional wired NoC architectures. Using low-power transceivers in NoC switches, novel Wireless NoC (WiNoC) architectures have been shown to achieve higher energy efficiency with improved peak bandwidth and reduced on-chip data transfer latency. However, using wireless interconnects for data transfer within a chip makes the on-chip communications vulnerable to various security threats from either external attackers or internal hardware Trojans (HTs). In this work, we propose a mechanism to make the wireless communication in a WiNoC secure against persistent jamming based Denial-of-Service attacks from both external and internal attackers. Persistent jamming attacks on the on-chip wireless medium will cause interference in data transfer over the duration of the attack, resulting in errors in contiguous bits, known as burst errors. Therefore, we use a burst error correction code to monitor the rate of burst errors received over the wireless medium and deploy a Machine Learning (ML) classifier to detect the persistent jamming attack and distinguish it from random burst errors. In the event of a jamming attack, alternate routing strategies are proposed to avoid the DoS attack over the wireless medium, so that a secure data transfer can be sustained even in the presence of jamming. We evaluate the proposed technique on a secure WiNoC in the presence of DoS attacks. It has been observed that with the proposed defense mechanisms, WiNoC can outperform a wired NoC in terms of performance and security even in the presence of attacks. On average, 99.87% attack detection was achieved with the chosen ML classifiers. A bandwidth degradation of less than 3% is experienced in the event of an internal attack, while the wireless interconnects are disabled in the presence of an external attacker.
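
A toy sketch of the detection idea described above (not the paper's implementation): burst-error statistics from the error-correction code are summarized per monitoring window and a small classifier separates persistent jamming from sporadic random bursts. The window features and training data below are synthetic assumptions.

    # Sketch: window-level burst-error features + a small classifier for jamming detection.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def window_features(burst_lengths, window_slots):
        bursts = np.asarray(burst_lengths, dtype=float)
        rate = len(bursts) / window_slots                  # corrected bursts per slot in the window
        mean_len = bursts.mean() if len(bursts) else 0.0   # average corrected burst length
        return [rate, mean_len]

    # Hypothetical training windows: persistent jamming shows sustained, frequent bursts.
    X = [window_features([3, 4, 5, 4, 5, 6], 100),   # jammed
         window_features([2], 100),                  # random burst error
         window_features([4, 5, 4, 6, 5], 100),      # jammed
         window_features([], 100)]                   # clean
    y = [1, 0, 1, 0]

    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([window_features([5, 4, 6, 5, 4], 100)]))  # -> [1] (jamming suspected)
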
2020-05-04
Steinke, Michael, Adam, Iris, Hommel, Wolfgang.  2018.  Multi-Tenancy-Capable Correlation of Security Events in 5G Networks. 2018 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN). :1–6.
The concept of network slicing in 5G mobile networks introduces new challenges for security management: Given the combination of Infrastructure-as-a-Service cloud providers, mobile network operators as Software-as-a-Service providers, and the various verticals as customers, multi-layer and multi-tenancy-capable management architectures are required. This paper addresses the challenges for correlation of security events in such 5G scenarios with a focus on event processing at telecommunication service providers. After an analysis of the specific demand for network-slice-centric security event correlation in 5G networks, ongoing standardization efforts, and related research, we propose a multi-tenancy-capable event correlation architecture along with a scalable information model. The event processing, alerting, and correlation workflow is discussed and has been implemented in a network and security management system prototype, leading to a demonstration of first results acquired in a lab setup.
2020-04-13
Wang, Shaoyang, Lv, Tiejun, Zhang, Xuewei.  2019.  Multi-Agent Reinforcement Learning-Based User Pairing in Multi-Carrier NOMA Systems. 2019 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
This paper investigates the problem of user pairing in multi-carrier non-orthogonal multiple access (MC-NOMA) systems. Firstly, the hard channel capacity and soft channel capacity are presented. The former depicts the transmission capability of the system, which depends on the channel conditions, and the latter refers to the effective throughput of the system, which is determined by the actual user demands. Then, two optimization problems to maximize the hard and soft channel capacities are established, respectively. Inspired by multi-agent deep reinforcement learning (MADRL) and convolutional neural networks, the user pairing network (UP-Net), based on the cooperative game and deep deterministic policy gradient, is designed for solving the optimization problems. Simulation results demonstrate that the performance of the designed UP-Net is comparable to that obtained from the exhaustive search method while using an end-to-end, low-complexity method that is superior to the common method, and corroborate that the UP-Net focuses more on the actual user demands to improve the soft channel capacity. Additionally and more importantly, the paper makes a useful exploration of the use of MADRL to solve resource allocation problems in communication systems. Meanwhile, the design method has strong universality and can be easily extended to other issues.
2020-04-03
Bello-Ogunu, Emmanuel, Shehab, Mohamed, Miazi, Nazmus Sakib.  2019.  Privacy Is The Best Policy: A Framework for BLE Beacon Privacy Management. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:823–832.
Bluetooth Low Energy (BLE) beacons are an emerging type of technology in the Internet-of-Things (IoT) realm, which use BLE signals to broadcast a unique identifier that is detected by a compatible device to determine the location of nearby users. Beacons can be used to provide a tailored user experience with each encounter, yet can also constitute an invasion of privacy, due to their covertness and ability to track user behavior. Therefore, we hypothesize that user-driven privacy policy configuration is key to enabling effective and trustworthy privacy management during beacon encounters. We developed a framework for beacon privacy management that provides a policy configuration platform. Through an empirical analysis with 90 users, we evaluated this framework through a proof-of-concept app called Beacon Privacy Manager (BPM), which focused on the user experience of such a tool. Using BPM, we provided users with the ability to create privacy policies for beacons, testing different configuration schemes to refine the framework and then offer recommendations for future research.
2020-03-02
Ranaweera, Pasika, Jurcut, Anca Delia, Liyanage, Madhusanka.  2019.  Realizing Multi-Access Edge Computing Feasibility: Security Perspective. 2019 IEEE Conference on Standards for Communications and Networking (CSCN). :1–7.
Internet of Things (IoT) and 5G are emerging technologies that prompt a mobile service platform capable of provisioning billions of communication devices, enabling ubiquitous computing and ambient intelligence. These novel approaches guarantee gigabit-level bandwidth, ultra-low latency, and ultra-high storage capacity for their subscribers. To meet these requirements, ETSI has introduced the paradigm of Multi-Access Edge Computing (MEC) for creating an efficient data processing architecture that extends cloud computing capabilities into the Radio Access Network (RAN). Despite the enhancements it brings to the mobile network, MEC is subject to security challenges arising from the heterogeneity of IoT services, the intricacies of integrating virtualization technologies, and the need to maintain the performance guarantees of the mobile networks (i.e., 5G). In this paper, we identify the probable threat vectors in a typical MEC deployment scenario that complies with the ETSI standards. We analyse the identified threat vectors and propose solutions to mitigate them.
2020-02-17
Murudkar, Chetana V., Gitlin, Richard D..  2019.  QoE-Driven Anomaly Detection in Self-Organizing Mobile Networks Using Machine Learning. 2019 Wireless Telecommunications Symposium (WTS). :1–5.
Current procedures for anomaly detection in self-organizing mobile communication networks use network-centric approaches to identify dysfunctional serving nodes. In this paper, a user-centric approach and a novel methodology for anomaly detection are proposed, where the Quality of Experience (QoE) metric is used to evaluate the end-user experience. The system model demonstrates how dysfunctional serving eNodeBs are successfully detected by implementing a parametric QoE model using machine learning for prediction of user QoE in a network scenario created by the ns-3 network simulator. This approach can play a vital role in the future ultra-dense and green mobile communication networks that are expected to be both self-organizing and self-healing.
2019-12-09
Alemán, Concepción Sánchez, Pissinou, Niki, Alemany, Sheila, Boroojeni, Kianoosh, Miller, Jerry, Ding, Ziqian.  2018.  Context-Aware Data Cleaning for Mobile Wireless Sensor Networks: A Diversified Trust Approach. 2018 International Conference on Computing, Networking and Communications (ICNC). :226–230.

In mobile wireless sensor networks (MWSN), data imprecision is a common problem. Decision making in real-time applications may be greatly affected by a minor error. Even though there are many existing techniques that take advantage of the spatio-temporal characteristics exhibited in mobile environments, few measure the trustworthiness of sensor data accuracy. We propose a unique online context-aware data cleaning method that measures trustworthiness by employing an initial candidate reduction through the analysis of trust parameters used in financial market theory. Sensors with similar trajectory behaviors are assigned trust scores estimated through the calculation of “betas” for finding the most accurate data to trust. Instead of devoting all the trust to a single candidate sensor's data to perform the cleaning, a Diversified Trust Portfolio (DTP) is generated based on the selected set of spatially autocorrelated candidate sensors. Our results show that samples cleaned by the proposed method exhibit lower percent error when compared to two well-known and effective data cleaning algorithms in tested outdoor and indoor scenarios.
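
A minimal sketch of the "beta" idea borrowed from financial market theory: each candidate sensor's readings are regressed against the sensor being cleaned, and candidates whose beta is closest to 1 (most similar behavior) receive more weight in the diversified portfolio. The weighting scheme and data here are assumptions, not the paper's exact formulation.

    # Sketch: beta-weighted combination of spatially correlated candidate sensors.
    import numpy as np

    def beta(candidate, reference):
        cov_matrix = np.cov(candidate, reference)          # consistent normalization for cov/var
        return cov_matrix[0, 1] / cov_matrix[1, 1]

    def diversified_estimate(reference, candidates):
        betas = np.array([beta(c, reference) for c in candidates])
        closeness = 1.0 / (np.abs(betas - 1.0) + 1e-6)     # beta closer to 1 -> more trust
        weights = closeness / closeness.sum()              # diversified trust portfolio
        latest = np.array([c[-1] for c in candidates])
        return float(np.dot(weights, latest))              # cleaned value for the latest sample

    reference = np.array([20.1, 20.3, 20.2, 25.0])          # last reading looks suspicious
    candidates = [np.array([20.0, 20.2, 20.1, 20.3]),
                  np.array([19.9, 20.4, 20.2, 20.2])]
    print(diversified_estimate(reference, candidates))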

2019-12-05
Yu, Yiding, Wang, Taotao, Liew, Soung Chang.  2018.  Deep-Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks. 2018 IEEE International Conference on Communications (ICC). :1-7.

This paper investigates the use of deep reinforcement learning (DRL) in the design of a "universal" MAC protocol referred to as Deep-reinforcement Learning Multiple Access (DLMA). The design framework is partially inspired by the vision of DARPA SC2, a 3-year competition whereby competitors are to come up with a clean-slate design that "best share spectrum with any network(s), in any environment, without prior knowledge, leveraging on machine-learning technique". While the scope of DARPA SC2 is broad and involves the redesign of PHY, MAC, and Network layers, this paper's focus is narrower and only involves the MAC design. In particular, we consider the problem of sharing time slots among multiple time-slotted networks that adopt different MAC protocols. One of the MAC protocols is DLMA. The other two are TDMA and ALOHA. The DRL agents of DLMA do not know that the other two MAC protocols are TDMA and ALOHA. Yet, by a series of observations of the environment, its own actions, and the rewards - in accordance with the DRL algorithmic framework - a DRL agent can learn the optimal MAC strategy for harmonious co-existence with TDMA and ALOHA nodes. In particular, the use of neural networks in DRL (as opposed to traditional reinforcement learning) allows for fast convergence to optimal solutions and robustness against perturbation in hyperparameter settings, two essential properties for practical deployment of DLMA in real wireless networks.

2019-06-10
Eziama, E., Jaimes, L. M. S., James, A., Nwizege, K. S., Balador, A., Tepe, K..  2018.  Machine Learning-Based Recommendation Trust Model for Machine-to-Machine Communication. 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1-6.

The Machine Type Communication Devices (MTCDs) are usually based on the Internet Protocol (IP), which allows billions of connected objects to be part of the Internet. The enormous amount of data coming from these devices is quite heterogeneous in nature, which can lead to security issues such as injection attacks, ballot stuffing, and bad mouthing. Consequently, this work considers machine learning trust evaluation as an effective and accurate option for solving the issues associated with security threats. In this paper, a comparative analysis is carried out with five different machine learning approaches: Naive Bayes (NB), Decision Tree (DT), Linear and Radial Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF). As a critical element of the research, the recommendations consider different Machine-to-Machine (M2M) communication nodes with regard to their ability to identify malicious and honest information. To validate the performance of these models, two trust computation measures were used: Receiver Operating Characteristics (ROCs), and Precision and Recall. The malicious data was formulated in Matlab. A scenario was created where 50% of the information was modified to be malicious. The malicious nodes were varied in the ranges of 10%, 20%, 30%, and 40%, and the results were carefully analyzed.
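
An illustrative comparison in the spirit of this abstract: the five classifier families are evaluated on a labeled recommendation dataset with ROC AUC, precision, and recall. The dataset file and label name are placeholders; the paper's own data was generated in Matlab.

    # Sketch: cross-validated comparison of NB, DT, SVM (linear/RBF), KNN, and RF.
    import pandas as pd
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_validate

    data = pd.read_csv("m2m_recommendations.csv")        # hypothetical: features + 'malicious' label
    X, y = data.drop(columns=["malicious"]), data["malicious"]

    models = {"NB": GaussianNB(),
              "DT": DecisionTreeClassifier(),
              "SVM-linear": SVC(kernel="linear"),
              "SVM-rbf": SVC(kernel="rbf"),
              "KNN": KNeighborsClassifier(),
              "RF": RandomForestClassifier()}

    for name, model in models.items():
        scores = cross_validate(model, X, y, cv=5, scoring=["roc_auc", "precision", "recall"])
        print(name, {k: round(float(v.mean()), 3) for k, v in scores.items() if k.startswith("test_")})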

2019-05-01
Lu, X., Wan, X., Xiao, L., Tang, Y., Zhuang, W..  2018.  Learning-Based Rogue Edge Detection in VANETs with Ambient Radio Signals. 2018 IEEE International Conference on Communications (ICC). :1-6.
Edge computing for mobile devices in vehicular ad hoc networks (VANETs) has to address rogue edge attacks, in which a rogue edge node claims to be the serving edge in the vehicle to steal user secrets and help launch other attacks such as man-in-the-middle attacks. Rogue edge detection in VANETs is more challenging than spoofing detection in indoor wireless networks due to the high mobility of onboard units (OBUs) and the large-scale network infrastructure with roadside units (RSUs). In this paper, we propose a physical (PHY)-layer rogue edge detection scheme for VANETs according to the shared ambient radio signals observed during the same moving trace of the mobile device and the serving edge in the same vehicle. In this scheme, the edge node under test has to send the physical properties of the ambient radio signals, including the received signal strength indicator (RSSI) of the ambient signals with the corresponding source media access control (MAC) address during a given time slot. The mobile device can choose to compare the received ambient signal properties with its own record or apply the RSSI of the received signals to detect rogue edge attacks, and it determines the test threshold for detection. We adopt a reinforcement learning technique to enable the mobile device to achieve the optimal detection policy in the dynamic VANET without being aware of the VANET model and the attack model. Simulation results show that the Q-learning based detection scheme can significantly reduce the detection error rate and increase the utility compared with existing schemes.
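
A highly simplified sketch of the learning component: the mobile device learns which detection threshold to apply to the ambient-signal comparison, with a reward that models detection utility. The environment below is a synthetic toy, and the update is a stateless (bandit-style) simplification of the paper's reinforcement learning scheme.

    # Sketch: learning a test threshold for rogue edge detection via reward feedback.
    import random

    thresholds = [1.0, 2.0, 3.0, 4.0, 5.0]    # candidate test thresholds (dB), assumed
    Q = [0.0] * len(thresholds)
    alpha, epsilon = 0.1, 0.1

    def run_test(threshold):
        # Toy environment: RSSI difference is small for the legitimate edge, large for a rogue.
        rogue = random.random() < 0.3
        diff = random.gauss(6.0, 1.0) if rogue else random.gauss(1.5, 1.0)
        flagged = diff > threshold
        return 1.0 if flagged == rogue else -1.0   # reward: correct decision vs miss/false alarm

    for _ in range(5000):
        a = random.randrange(len(thresholds)) if random.random() < epsilon else Q.index(max(Q))
        r = run_test(thresholds[a])
        Q[a] += alpha * (r - Q[a])                  # stateless value update

    print("learned threshold:", thresholds[Q.index(max(Q))])
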
2019-03-25
Ali-Tolppa, J., Kocsis, S., Schultz, B., Bodrog, L., Kajo, M..  2018.  Self-Healing and Resilience in Future 5G Cognitive Autonomous Networks. 2018 ITU Kaleidoscope: Machine Learning for a 5G Future (ITU K). :1–8.
In the Self-Organizing Networks (SON) concept, self-healing functions are used to detect, diagnose and correct degraded states in the managed network functions or other resources. Such methods are increasingly important in future network deployments, since ultra-high reliability is one of the key requirements for the future 5G mobile networks, e.g. in critical machine-type communication. In this paper, we discuss the considerations for improving the resiliency of future cognitive autonomous mobile networks. In particular, we present an automated anomaly detection and diagnosis function for SON self-healing based on multi-dimensional statistical methods, case-based reasoning and active learning techniques. Insights from both the human expert and sophisticated machine learning methods are combined in an iterative way. Additionally, we present how a more holistic view on mobile network self-healing can improve its performance.
2019-01-21
Leal, A. G., Teixeira, Í C..  2018.  Development of a suite of IPv6 vulnerability scanning tests using the TTCN-3 language. 2018 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.

With the transition from the IPv4 to the IPv6 protocol to improve network communications, there are concerns about the security of devices and applications that must be dealt with at the beginning of implementation or during the lifecycle. Automating the vulnerability assessment process reduces management overhead, enabling better management of risks and control of vulnerabilities. Consequently, it reduces the effort needed for each test and allows tests to be applied more frequently, freeing time for all the other complicated tasks necessary to support a secure network. Several researchers are involved in testing vulnerabilities in IPv6 networks, exploiting addressing mechanisms, extension headers, fragmentation, tunnelling, or dual-stack networks (using both IPv4 and IPv6 at the same time). Most existing tools use the programming languages C, Java, and Python instead of a language designed specifically to create a suite of tests, which reduces the maintainability and extensibility of the tests. This paper presents a solution for IPv6 vulnerability scan tests, based on attack simulations, combining passive analysis (observing the manifestation of behaviours of the system under test) and active analysis (stimulating the system to become symptomatic). It also describes a prototype that simulates and detects denial-of-service attacks on the ICMPv6 protocol of IPv6. In addition, a detailed report is created with the identified vulnerability and the possible existing solutions to mitigate such a gap, thus assisting the vulnerability management process.

2018-09-05
King, Z., Yu, Shucheng.  2017.  Investigating and securing communications in the Controller Area Network (CAN). 2017 International Conference on Computing, Networking and Communications (ICNC). :814–818.
The Controller Area Network (CAN) is a broadcast communications network invented by Robert Bosch GmbH in 1986. CAN is the standard communication network found in automobiles, industrial equipment, and many space applications. To be used in these environments, CAN is designed for efficiency and reliability, rather than security. This research paper closely examines the security risks within the CAN protocol and proposes a feasible solution. In this research, we investigate the problems with implementing certain security features in the CAN protocol, such as message authentication and protections against replay and denial-of-service (DoS) attacks. We identify the restrictions of the CAN bus, and we demonstrate how our proposed implementation meets these restrictions. Many previously proposed solutions lack security, feasibility, and/or efficiency; however, a solution must not drastically hinder the real-time operation speed of the network. The solution proposed in this research is tested in a simulated CAN environment. This paper proposes an alteration to the standard CAN bus nodes and the CAN protocol to better protect automobiles and other CAN-related systems from attacks.
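
One common direction for CAN message authentication, sketched here for illustration (not necessarily the paper's exact scheme): a truncated HMAC over the message ID, payload, and a monotonic counter, where the counter defeats replay. Key distribution and frame layout are outside this snippet.

    # Sketch: truncated HMAC tag for a CAN frame, with a counter against replay.
    import hmac, hashlib, struct

    def can_auth_tag(key: bytes, can_id: int, payload: bytes, counter: int, tag_len: int = 4) -> bytes:
        msg = struct.pack(">I", can_id) + payload + struct.pack(">Q", counter)
        return hmac.new(key, msg, hashlib.sha256).digest()[:tag_len]   # truncated to fit the frame

    key = b"\x01" * 16                       # hypothetical pre-shared key per ECU pair
    tag = can_auth_tag(key, 0x1A0, b"\x10\x22\x00\x05", counter=7)

    # Receiver recomputes and compares in constant time; a stale counter indicates replay.
    assert hmac.compare_digest(tag, can_auth_tag(key, 0x1A0, b"\x10\x22\x00\x05", counter=7))
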
2018-07-06
Du, Xiaojiang.  2004.  Using k-nearest neighbor method to identify poison message failure. IEEE Global Telecommunications Conference, 2004. GLOBECOM '04. 4:2113–2117.

Poison message failure is a mechanism that has been responsible for large scale failures in both telecommunications and IP networks. The poison message failure can propagate in the network and cause an unstable network. We apply a machine learning, data mining technique to the network fault management area. We use the k-nearest neighbor method to identify the poison message failure. We also propose a "probabilistic" k-nearest neighbor method which outputs a probability distribution about the poison message. Through extensive simulations, we show that the k-nearest neighbor method is very effective in identifying the responsible message type.
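
A small sketch of the "probabilistic" k-NN idea: instead of a single label, output the fraction of the k nearest neighbours belonging to each message type. The feature vectors (per-failure statistics) and labels are placeholders for illustration.

    # Sketch: probabilistic k-nearest neighbor over candidate poison message types.
    import numpy as np
    from collections import Counter

    def knn_probabilities(train_X, train_y, query, k=5):
        dists = np.linalg.norm(train_X - query, axis=1)
        nearest = np.argsort(dists)[:k]
        votes = Counter(train_y[i] for i in nearest)
        return {label: count / k for label, count in votes.items()}

    # Hypothetical training data: rows are feature vectors for past failures,
    # labels are the message types identified as the poison message.
    train_X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8], [0.85, 0.15]])
    train_y = np.array(["TypeA", "TypeA", "TypeB", "TypeB", "TypeA"])

    print(knn_probabilities(train_X, train_y, np.array([0.7, 0.3]), k=3))
    # -> {'TypeA': 1.0}: message type A is the most likely poison message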

2018-03-05
Mfula, H., Nurminen, J. K..  2017.  Adaptive Root Cause Analysis for Self-Healing in 5G Networks. 2017 International Conference on High Performance Computing Simulation (HPCS). :136–143.

Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing and, where possible, eliminating the root causes of faults in cellular networks. Currently, the actual detection and diagnosis of faults or potential faults is still a manual and slow process often carried out by network experts who manually analyze and correlate various pieces of network data such as alarms, call traces, configuration management (CM) and key performance indicator (KPI) data in order to come up with the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated evidence-based RCA. Compared to the current common practice, our solution is faster due to automation of the entire RCA process. The solution is also cheaper because it needs fewer or no personnel in order to operate and it improves efficiency through domain knowledge reuse during adaptive learning. As it uses a probabilistic Bayesian classifier, it can work with incomplete data and it can handle large datasets with complex probability combinations. Experimental results from stratified synthesized data affirmatively validate the feasibility of using such a solution as a key part of self-healing (SH) especially in emerging self-organizing network (SON) based solutions in LTE Advanced (LTE-A) and 5G.
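
A toy sketch of evidence-based RCA with a (naive) Bayesian classifier: observed symptoms derived from alarms and KPIs are combined with priors to produce a posterior over candidate root causes. The causes, symptoms, and probability values are illustrative placeholders, not values from the paper.

    # Sketch: posterior over root causes given boolean symptom observations.
    priors = {"antenna_fault": 0.2, "transport_link": 0.3, "misconfiguration": 0.5}

    # P(symptom present | root cause); an absent symptom contributes (1 - p).
    likelihood = {
        "antenna_fault":    {"rsrp_drop": 0.9, "ho_failures": 0.6, "cm_change": 0.05},
        "transport_link":   {"rsrp_drop": 0.1, "ho_failures": 0.8, "cm_change": 0.05},
        "misconfiguration": {"rsrp_drop": 0.3, "ho_failures": 0.4, "cm_change": 0.9},
    }

    def posterior(observed):                         # observed: dict symptom -> True/False
        scores = {}
        for cause, prior in priors.items():
            p = prior
            for symptom, present in observed.items():
                p_s = likelihood[cause][symptom]
                p *= p_s if present else (1.0 - p_s)
            scores[cause] = p
        total = sum(scores.values())
        return {c: round(s / total, 3) for c, s in scores.items()}

    print(posterior({"rsrp_drop": True, "ho_failures": True, "cm_change": False}))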

2018-02-21
Pak, W., Choi, Y. J..  2017.  High Performance and High Scalable Packet Classification Algorithm for Network Security Systems. IEEE Transactions on Dependable and Secure Computing. 14:37–49.

Packet classification is a core function in network and security systems; hence, hardware-based solutions, such as packet classification accelerator chips or Ternary Content Addressable Memory (T-CAM), have been widely adopted for high-performance systems. With the rapid improvement of general hardware architectures and the growing popularity of multi-core multi-threaded processors, software-based packet classification algorithms are attracting considerable attention, owing to their high flexibility in satisfying various industrial requirements for security and network systems. For high classification speed, these algorithms internally use large tables, whose size increases exponentially with the ruleset size; consequently, they cannot be used with large rulesets. To overcome this problem, we propose a new software-based packet classification algorithm that simultaneously supports high scalability and fast classification performance by merging partition decision trees in a search table. While most partitioning-based packet classification algorithms show good scalability at the cost of low classification speed, our algorithm shows very high classification speed, irrespective of the number of rules, with small tables and short table building time. Our test results confirm that the proposed algorithm enables network and security systems to support heavy traffic in the most effective manner.

2018-02-15
Hufstetler, W. A., Ramos, M. J. H., Wang, S..  2017.  NFC Unlock: Secure Two-Factor Computer Authentication Using NFC. 2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). :507–510.

Our project, NFC Unlock, implements a secure multifactor authentication system for computers using Near Field Communication technology. The application is written in C# with pGina. It implements an NFC authentication which replaces the standard Windows credentials to allow the use of an NFC tag and a passcode to authenticate the user. Unlike the most prevalent multifactor authentication methods, NFC authentication does not require a user to wait for an SMS code to type into the computer. A user enters a passcode and scans the NFC tag to log in. In order to prevent the data from being hacked, the system encrypts the NFC tag ID and the passcode with the Advanced Encryption Standard. Users can easily register an NFC tag and link it to their computer account. The program also has several extra features including text alerts, record keeping of all logins and login attempts, and a user-friendly configuration menu. Initial tests show that the NFC-based multifactor authentication system has the advantage of improved security with a simplified login process.
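
A sketch of the AES protection step described above (the project itself is in C# with pGina). The abstract specifies AES but not the mode or key management, so the AES-GCM mode and key handling below are assumptions for illustration.

    # Sketch: protecting the stored NFC tag ID and passcode with AES-GCM.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)        # in practice, derived and stored securely
    aesgcm = AESGCM(key)

    def protect(tag_id: str, passcode: str) -> bytes:
        nonce = os.urandom(12)
        ciphertext = aesgcm.encrypt(nonce, f"{tag_id}:{passcode}".encode(), None)
        return nonce + ciphertext                    # store nonce alongside ciphertext

    def verify(blob: bytes, tag_id: str, passcode: str) -> bool:
        nonce, ciphertext = blob[:12], blob[12:]
        return aesgcm.decrypt(nonce, ciphertext, None) == f"{tag_id}:{passcode}".encode()

    record = protect("04A224E2C33280", "1234")       # hypothetical tag ID and passcode
    print(verify(record, "04A224E2C33280", "1234"))  # True -> unlock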

2018-01-16
He, Z., Zhang, T., Lee, R. B..  2017.  Machine Learning Based DDoS Attack Detection from Source Side in Cloud. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :114–120.

Denial of service (DoS) attacks are a serious threat to network security. These attacks are often sourced from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side with predefined thresholds. These approaches have significant disadvantages: they are only passive defenses after the attack, they cannot use the outbound statistical features of attacks, and it is hard to trace back to the attacker with these approaches. In this paper, we propose a DoS attack detection system on the source side in the cloud, based on machine learning techniques. This system leverages statistical information from both the cloud server's hypervisor and the virtual machines to prevent attack packets from being sent out to the outside network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DoS attacks are successfully detected. Our approach does not degrade performance and can be easily extended to broader DoS attacks.

2017-12-20
Wang, M., Li, Z., Lin, Y..  2017.  A Distributed Intrusion Detection System for Cognitive Radio Networks Based on Evidence Theory. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :226–232.

Reliable detection of intrusions is the basis of safety in cognitive radio networks (CRNs). So far, few scholars have applied intrusion detection systems (IDSs) to combat intrusions against CRNs. In order to improve the performance of intrusion detection in CRNs, a distributed intrusion detection scheme has been proposed. In this paper, a method based on Dempster-Shafer (D-S) evidence theory to detect intrusions in CRNs is put forward, in which the detection data and credibility of different local IDS agents are combined by D-S at the cooperative detection center, so that different local detection decisions are taken into consideration in the final decision. The effectiveness of the proposed scheme is verified by simulation, and the results reflect a noticeable performance improvement of the proposed scheme over the traditional method.
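
A compact sketch of Dempster's rule of combination for two local IDS agents reporting basic probability assignments over {intrusion (I), normal (N), uncertain (I or N)}. The example mass values are illustrative, not taken from the paper.

    # Sketch: fusing two agents' evidence at the cooperative detection center via D-S.
    def combine(m1, m2):
        frames = {"I": {"I"}, "N": {"N"}, "U": {"I", "N"}}       # U = full frame (uncertainty)
        combined, conflict = {"I": 0.0, "N": 0.0, "U": 0.0}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = frames[a] & frames[b]
                if not inter:
                    conflict += ma * mb                          # mass falling on the empty set
                else:
                    key = "I" if inter == {"I"} else "N" if inter == {"N"} else "U"
                    combined[key] += ma * mb
        return {k: v / (1.0 - conflict) for k, v in combined.items()}   # normalise out conflict

    agent1 = {"I": 0.6, "N": 0.1, "U": 0.3}       # more credible local detector
    agent2 = {"I": 0.4, "N": 0.2, "U": 0.4}       # less reliable detector
    print(combine(agent1, agent2))                # fused belief, e.g. intrusion mass ~0.71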

2017-11-13
Nakamura, Y., Louvel, M., Nishi, H..  2016.  Coordination middleware for secure wireless sensor networks. IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society. :6931–6936.

Wireless sensor networks (WSNs) are implemented in various Internet-of-Things applications such as energy management systems. As the applications may involve personal information, they must be protected from attackers attempting to read information or control network devices. Research on WSN security is essential to protect WSNs from attacks. Studies in this research domain propose solutions against such attacks; however, they focus mainly on the security measures rather than on their ease of implementation in WSNs. In this paper, we propose a coordination middleware that provides an environment for constructing updatable WSNs for security. The middleware is based on LINC, a rule-based coordination middleware. The proposed approach allows WSNs to be developed and security modules to be attached or detached when required. We implemented three security modules on LINC and on a real network as case studies. Moreover, we evaluated the implementation costs while comparing the case studies.

2017-03-08
Kannouf, N., Douzi, Y., Benabdellah, M., Azizi, A..  2015.  Security on RFID technology. 2015 International Conference on Cloud Technologies and Applications (CloudTech). :1–5.

RFID (Radio Frequency Identification) systems are emerging as one of the most pervasive computing technologies in history due to their low cost and their broad applicability. The latest technologies have brought costs down, and standards are being developed. Currently, RFID is mostly used as a medium for numerous tasks including managing supply chains, tracking livestock, preventing counterfeiting, controlling building access, and supporting automated checkout. The use of RFID is limited by security concerns and delays in standardization. This paper presents some research done on RFID, RFID applications, and RFID data security.

2017-02-14
Cappers, B. C. M., van Wijk, J. J..  2015.  SNAPS: Semantic Network Traffic Analysis through Projection and Selection. 2015 IEEE Symposium on Visualization for Cyber Security (VizSec). :1–8.

Most network traffic analysis applications are designed to discover malicious activity by relying only on high-level flow-based message properties. However, to detect security breaches that are specifically designed to target one network (e.g., Advanced Persistent Threats), deep packet inspection and anomaly detection are indispensable. In this paper, we focus on how we can support experts in discovering whether anomalies at the message level imply a security risk at the network level. In SNAPS (Semantic Network traffic Analysis through Projection and Selection), we provide a bottom-up, pixel-oriented approach for network traffic analysis where the expert starts with low-level anomalies and iteratively gains insight into higher-level events through the creation of multiple selections of interest in parallel. The tight integration between visualization and machine learning enables the expert to iteratively refine anomaly scores, making the approach suitable for both post-traffic analysis and online monitoring tasks. To illustrate the effectiveness of this approach, we present example explorations on two real-world data sets for the detection and understanding of potential Advanced Persistent Threats in progress.