Biblio

Found 1474 results

Filters: First Letter Of Title is D
2019-05-01
Ramdani, Mohamed, Benmohammed, Mohamed, Benblidia, Nadjia.  2018.  Distributed Solution of Scalar Multiplication on Elliptic Curves over Fp for Resource-constrained Networks. Proceedings of the 2nd International Conference on Future Networks and Distributed Systems. :63:1–63:6.
Elliptic curve cryptography (ECC) is an approach to public-key cryptography used to protect data so that it is unintelligible to any unauthorized device or entity. The encryption/decryption algorithm is publicly known and its security relies on the discrete logarithm problem. ECC is ideal for weak devices with limited resources, such as phones, smart cards, embedded systems and wireless sensor networks (WSN), which are widely deployed in different applications. The advantage of ECC is its shorter key length for the same level of security as other cryptosystems such as RSA. However, cryptographic computations such as the multiplication of an elliptic curve point by a scalar value are computationally expensive and involve point additions and doublings on elliptic curves over finite fields. Much work has been done to optimize their cost. Building on the results of this work, including parallel processing, we propose two new efficient distributed algorithms that reduce the computations in resource-constrained networks whose defining feature is the cooperative processing of data. Our results are conclusive and show a reduction of up to 125% in the energy consumed by each device in a data exchange operation.
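
For background, the scalar multiplication the authors optimize is conventionally built from repeated point doublings and additions. The sketch below shows the textbook double-and-add method on a toy curve in pure Python; the curve parameters and base point are illustrative assumptions, not taken from the paper, and the paper's distributed/parallel variants are not reproduced.

```python
# Minimal double-and-add scalar multiplication on y^2 = x^3 + ax + b over F_p.
# Curve parameters below are a toy example, not taken from the paper.
p, a, b = 97, 2, 3
G = (3, 6)          # a point on the toy curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def point_add(P, Q):
    """Add two affine points (None represents the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                          # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p     # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p            # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add: about log2(k) doublings plus additions."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)                                  # double
        if bit == '1':
            R = point_add(R, P)                               # add
    return R

print(scalar_mult(20, G))
```
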
Rayavel, P., Rathnavel, P., Bharathi, M., Kumar, T. Siva.  2018.  Dynamic Traffic Control System Using Edge Detection Algorithm. 2018 International Conference on Soft-Computing and Network Security (ICSNS). :1-5.

As traffic congestion on the transport network increases, leading to slower speeds, longer travel times and, as a consequence, bigger vehicular queues, it is necessary to introduce smart ways to reduce traffic. We are already edging closer to "smart city-smart travel". Today, a large number of smartphone applications and connected sat-navs will help get you to your destination in the quickest and easiest manner possible thanks to real-time data and communication from a host of sources. In the present situation, traffic lights are used in each phase. The other way is to use electronic sensors and magnetic coils that detect the congestion frequency and monitor traffic, but these are found to be more expensive. Hence we propose a traffic control system using image processing techniques such as edge detection. Vehicles are detected using images instead of sensors. Cameras are installed alongside the road and capture an image sequence every 40 seconds. Digital image processing techniques are applied to analyse and process the images, and the traffic signal lights are controlled accordingly.
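
As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below estimates an edge density for a grayscale frame with a Sobel-style gradient and maps it to a green-light duration; the kernels, thresholds, and timing range are assumptions.

```python
import numpy as np

def edge_density(gray):
    """Fraction of pixels whose Sobel gradient magnitude exceeds a threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    # valid-mode 2-D convolution via sliding windows
    win = np.lib.stride_tricks.sliding_window_view(gray.astype(float), (3, 3))
    gx = (win * kx).sum(axis=(-1, -2))
    gy = (win * ky).sum(axis=(-1, -2))
    mag = np.hypot(gx, gy)
    return float((mag > 100).mean())          # threshold 100 is illustrative

def green_time(density, t_min=10, t_max=60):
    """Map edge density (a proxy for vehicle count) to a green-light duration in seconds."""
    return t_min + (t_max - t_min) * min(density / 0.2, 1.0)   # 0.2 = assumed saturation density

frame = (np.random.rand(120, 160) * 255).astype(np.uint8)      # stand-in for a captured frame
print(green_time(edge_density(frame)))
```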

Pratama, R. F., Suwastika, N. A., Nugroho, M. A.  2018.  Design and Implementation Adaptive Intrusion Prevention System (IPS) for Attack Prevention in Software-Defined Network (SDN) Architecture. 2018 6th International Conference on Information and Communication Technology (ICoICT). :299-304.

An Intrusion Prevention System (IPS) is a tool for securing networks from malicious packets that could be sent from a specific host. An IPS can be installed on an SDN network, which has a centralized logic architecture, so the IPS does not need to be installed on many nodes; instead it is installed alongside the controller, the logical center of the network. The IPS still has a flaw: the block duration remains the same no matter how often a specific host attacks. For this reason, we build a system that not only integrates an IPS with the SDN, but also designs an adaptive IPS by utilizing fuzzy logic that can decide how long to block a host based on the frequency and type of attacks. From the tests that have been carried out, the SDN network equipped with the adaptive IPS is able to detect attacks and can block the attacking host for a duration based on the frequency and type of attacks. The final result is a safer SDN network, at the cost of 0.228 milliseconds of additional execution time required by the fuzzy algorithm in one process.
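
To make the fuzzy step concrete, the following minimal sketch maps attack frequency and an assumed severity score to a block duration using triangular memberships and centroid defuzzification; the membership ranges, rules, and output scale are illustrative assumptions, not the rules used in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def block_duration(freq, severity):
    """Mamdani-style inference: frequency (attacks/min) and severity (0-1) -> block seconds."""
    # Antecedent memberships (ranges are illustrative assumptions).
    low_f, high_f = tri(freq, 0, 0, 10), tri(freq, 5, 15, 15)
    low_s, high_s = tri(severity, 0, 0, 1), tri(severity, 0, 1, 1)
    out = np.linspace(0, 600, 601)                   # candidate block durations in seconds
    short = tri(out, 0, 0, 300)
    long_ = tri(out, 300, 600, 600)
    # Rules: low freq AND low severity -> short block; high freq OR high severity -> long block.
    r1 = np.minimum(min(low_f, low_s), short)
    r2 = np.minimum(max(high_f, high_s), long_)
    agg = np.maximum(r1, r2)
    return float((out * agg).sum() / (agg.sum() + 1e-9))   # centroid defuzzification

print(round(block_duration(freq=12, severity=0.8), 1))
```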

Sowah, R., Ofoli, A., Koumadi, K., Osae, G., Nortey, G., Bempong, A. M., Agyarkwa, B., Apeadu, K. O.  2018.  Design and Implementation of a Fire Detection and Control System with Enhanced Security and Safety for Automobiles Using Neuro-Fuzzy Logic. 2018 IEEE 7th International Conference on Adaptive Science & Technology (ICAST). :1-8.

Automobiles provide comfort and mobility to their owners. While they make life more meaningful, they also pose challenges and risks in their safety and security mechanisms. Some modern automobiles are equipped with anti-theft systems and enhanced safety measures to safeguard their drivers. But at times these mechanisms for the safe and secure operation of automobiles are insufficient, due to the various mechanisms used by intruders and car thieves to defeat them. Drunk drivers cause accidents on our roads, hence the need to safeguard the driver when he is intoxicated and to render the car incapable of being driven. These issues merit an integrated approach to the safety and security of automobiles. In light of these challenges, an integrated microcontroller-based hardware and software system for the safety and security of automobiles, to be fitted into the existing vehicle architecture, was designed, developed and deployed. The system submodules are: (1) two-step ignition for automobiles, namely (a) biometric ignition and (b) alcohol detection with engine control, (2) Global Positioning System (GPS) based vehicle tracking, and (3) multisensor-based fire detection using neuro-fuzzy logic. All submodules of the system were implemented using one microcontroller, the Arduino Mega 2560, as the central control unit. The microcontroller was programmed using C++11. The developed system performed well in the tests carried out on it. Given the right conditions, the alcohol detection subsystem operated with 92% efficiency. The biometric ignition subsystem operated with about 80% efficiency. The fire detection subsystem operated with 95% efficiency in locations registered with the neuro-fuzzy system. The vehicle tracking subsystem operated with an efficiency of 90%.

Shirsat, S. D.  2018.  Demonstrating Different Phishing Attacks Using Fuzzy Logic. 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT). :57-61.

Phishing has increased tremendously over the last few years and has become a serious threat to global security and the economy. The existing literature dealing with the problem of phishing is scarce. Phishing is a deception technique that uses a combination of technology and social engineering to acquire sensitive information such as online banking passwords, credit card or bank account details [2]. Phishing can be carried out through emails and websites to collect confidential information. Phishers design fraudulent websites that look similar to legitimate websites and lure the user into visiting the malicious website. Therefore, users must be aware of malicious websites to protect their sensitive data [1]. But it is very difficult to distinguish between a legitimate and a fake website, especially for non-technical users [4]. Moreover, phishing sites are growing rapidly. The aim of this paper is to demonstrate phishing detection using fuzzy logic and to interpret the results using different defuzzification methods.
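
Since the paper's contribution hinges on comparing defuzzification methods, the short sketch below applies three common ones (centroid, mean of maxima, bisector) to an assumed aggregated "phishing likelihood" fuzzy set; the membership values are invented for illustration.

```python
import numpy as np

# Aggregated output fuzzy set for "phishing likelihood" on a 0-100 scale.
# The membership values below are illustrative, not taken from the paper.
x = np.linspace(0, 100, 101)
mu = np.maximum(np.clip(1 - np.abs(x - 70) / 30, 0, 1),        # mostly "phishing"
                0.3 * np.clip(1 - np.abs(x - 20) / 20, 0, 1))  # weak "legitimate" evidence

centroid = (x * mu).sum() / mu.sum()                           # centre of gravity
mom = x[mu == mu.max()].mean()                                 # mean of maxima
bisector = x[np.searchsorted(np.cumsum(mu), mu.sum() / 2)]     # splits the area in half

print(f"centroid={centroid:.1f}  mean-of-max={mom:.1f}  bisector={bisector:.1f}")
```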

Chen, Yudong, Su, Lili, Xu, Jiaming.  2018.  Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent. Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems. :96-96.

We consider the distributed statistical learning problem over decentralized systems that are prone to adversarial attacks. This setup arises in many practical applications, including Google's Federated Learning. Formally, we focus on a decentralized system that consists of a parameter server and m working machines; each working machine keeps N/m data samples, where N is the total number of samples. In each iteration, up to q of the m working machines suffer Byzantine faults – a faulty machine in the given iteration behaves arbitrarily badly against the system and has complete knowledge of the system. Additionally, the sets of faulty machines may be different across iterations. Our goal is to design robust algorithms such that the system can learn the underlying true parameter, which is of dimension d, despite the interruption of the Byzantine attacks. In this paper, based on the geometric median of means of the gradients, we propose a simple variant of the classical gradient descent method. We show that our method can tolerate q Byzantine failures up to 2(1+ε)q ≤ m for an arbitrarily small but fixed constant ε > 0. The parameter estimate converges in O(log N) rounds with an estimation error on the order of max{√(dq/N), √(d/N)}, which is larger than the minimax-optimal error rate √(d/N) in the centralized and failure-free setting by at most a factor of √q. The total computational complexity of our algorithm is O((Nd/m) log N) at each working machine and O(md + kd log^3 N) at the central server, and the total communication cost is O(md log N). We further provide an application of our general results to the linear regression problem. A key challenge in the above problem is that Byzantine failures create arbitrary and unspecified dependency among the iterations and the aggregated gradients. To handle this issue in the analysis, we prove that the aggregated gradient, as a function of the model parameter, converges uniformly to the true gradient function.
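
The aggregation rule at the heart of the method is the geometric median of means of the worker gradients. A minimal sketch of that rule (using Weiszfeld iterations, on synthetic gradients) is given below; it illustrates the aggregation step only and is not the authors' implementation or analysis.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-7):
    """Weiszfeld iterations for the geometric median of row vectors."""
    y = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)   # avoid division by zero
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

def robust_aggregate(worker_grads, k):
    """Group the m worker gradients into k batches, average each batch,
    then take the geometric median of the k batch means."""
    batches = np.array_split(worker_grads, k)
    means = np.stack([b.mean(axis=0) for b in batches])
    return geometric_median(means)

# Toy example: 10 honest gradients near [1, 1] and 2 Byzantine outliers.
rng = np.random.default_rng(0)
grads = np.vstack([rng.normal([1.0, 1.0], 0.1, size=(10, 2)),
                   np.array([[100.0, -100.0], [-50.0, 80.0]])])
print(robust_aggregate(grads, k=4))      # stays close to [1, 1] despite the outliers
```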

Tsunashima, Hideki, Hoshi, Taisei, Chen, Qiu.  2018.  DzGAN: Improved Conditional Generative Adversarial Nets Using Divided Z-Vector. Proceedings of the 2018 International Conference on Computing and Big Data. :52-55.

Conditional Generative Adversarial Nets (cGAN) [1] were recently proposed as a novel conditional learning method that feeds some extra information into the network. In this paper we propose an improved conditional GAN which uses a divided z-vector (DzGAN). The amount of computation is reduced because DzGAN can implement conditional learning using not images but a one-hot vector, by dividing the range of the z-vector (e.g. -1 to 1 into -1 to 0 and 0 to 1). In the DzGAN, the discriminator is fed the images with their labels as one-hot vectors, and the generator is fed the divided z-vector (e.g. with the 10 classes of the MNIST dataset, the divided z-vector is z1 to z10 accordingly) with the corresponding label fed into the discriminator; thus we can implement conditional learning. In this paper we use conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) [7] instead of cGAN because cDCGAN can generate clearer images than cGAN. Heuristic experiments of conditional learning which compare the amount of computation demonstrate that DzGAN is superior to cDCGAN.
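
One way to read the "divided z-vector" idea is that each class is assigned its own sub-interval of the latent range, so that the class identity is carried by where z lies rather than by an extra conditioning image. The sketch below samples such a z-vector; the interval assignment and dimensions are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def sample_divided_z(label, num_classes=10, dim=100, rng=np.random.default_rng()):
    """Sample a z-vector whose entries lie in the sub-interval of [-1, 1] assigned
    to the given class, so the class identity is encoded by the z range itself."""
    width = 2.0 / num_classes
    low = -1.0 + label * width
    return rng.uniform(low, low + width, size=dim)

z = sample_divided_z(label=3)
print(z.min(), z.max())        # both inside the class-3 sub-interval [-0.4, -0.2)
```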

2019-04-29
Kar, Diptendu Mohan, Ray, Indrajit, Gallegos, Jenna, Peccoud, Jean.  2018.  Digital Signatures to Ensure the Authenticity and Integrity of Synthetic DNA Molecules. Proceedings of the New Security Paradigms Workshop. :110–122.

DNA synthesis has become increasingly common, and many synthetic DNA molecules are licensed intellectual property (IP). DNA samples are shared between academic labs, ordered from DNA synthesis companies and manipulated for a variety of different purposes, mostly to study their properties and improve upon them. However, it is not uncommon for a sample to change hands many times with very little accompanying information and no proof of origin. This poses significant challenges to the original inventor of a DNA molecule, trying to protect her IP rights. More importantly, following the anthrax attacks of 2001, there is an increased urgency to employ microbial forensic technologies to trace and track agent inventories. However, attribution of physical samples is next to impossible with existing technologies. In this paper, we describe our efforts to solve this problem by embedding digital signatures in DNA molecules synthesized in the laboratory. We encounter several challenges that we do not face in the digital world. These challenges arise primarily from the fact that changes to a physical DNA molecule can affect its properties, random mutations can accumulate in DNA samples over time, DNA sequencers can sequence (read) DNA erroneously and DNA sequencing is still relatively expensive (which means that laboratories would prefer not to read and re-read their DNA samples to get error-free sequences). We address these challenges and present a digital signature technology that can be applied to synthetic DNA molecules in living cells.
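
One low-level ingredient of such a scheme is mapping the bytes of a digital signature onto DNA bases and back. The sketch below shows a simple 2-bits-per-nucleotide encoding; the base mapping and the example signature bytes are assumptions, and the paper's error-tolerant signing scheme is not reproduced.

```python
BASES = "ACGT"                       # 2 bits per nucleotide (an assumed mapping)

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, most significant bit pair first."""
    return "".join(BASES[(b >> shift) & 0b11]
                   for b in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Inverse mapping: four bases back to one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        val = 0
        for base in seq[i:i + 4]:
            val = (val << 2) | BASES.index(base)
        out.append(val)
    return bytes(out)

# A hypothetical 8-byte signature fragment round-trips through the base alphabet.
sig = bytes.fromhex("3fa91c07d2e45b60")
encoded = bytes_to_dna(sig)
assert dna_to_bytes(encoded) == sig
print(encoded)          # 4 bases per signature byte
```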

2019-04-05
Konorski, J.  2018.  Double-Blind Reputation vs. Intelligent Fake VIP Attacks in Cloud-Assisted Interactions. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1637-1641.

We consider a generic model of Client-Server interactions in the presence of Sender and Relay, conceptual agents acting on behalf of Client and Server, respectively, and modeling cloud service providers in the envisaged "QoS as a Service" paradigm. Client generates objects which Sender tags with a demanded QoS level, whereas Relay assigns the QoS level to be provided at Server. To verify an object's right to a QoS level, Relay detects its signature, which neither Client nor Sender can modify. Since signature detection is costly, Relay tends to occasionally skip it and trust an object; this prompts Sender to occasionally launch a Fake VIP attack, i.e., demand an undue QoS level. In a Stackelberg game setting, Relay employs a trust strategy in the form of a double-blind reputation scheme so as to minimize the signature detection cost and undue QoS provision, anticipating a best-response Fake VIP attack strategy on the part of Sender. We ask whether the double-blind reputation scheme, previously proved resilient to a probabilistic Fake VIP attack strategy, is equally resilient to more intelligent Sender behavior. Two intelligent attack strategies are proposed and analyzed using two-dimensional Markov chains.

Wu, C., Kuo, M., Lee, K.  2018.  A Dynamic-Key Secure Scan Structure Against Scan-Based Side Channel and Memory Cold Boot Attacks. 2018 IEEE 27th Asian Test Symposium (ATS). :48-53.

Scan design is a universal design-for-test (DFT) technology to increase the observability and controllability of the circuits under test by using scan chains. However, it also leads to a potential security problem: attackers can use the scan design as a backdoor to extract confidential information. Researchers have tried to address this problem by using secure scan structures that usually rely on keys to confirm the identities of users. However, the traditional methods that store intermediate data or keys in memory are also at high risk of being attacked. In this paper, we propose a dynamic-key secure DFT structure that can defend against scan-based and memory attacks without decreasing the system performance or the testability. The main idea is to build a scan-design key generator that generates the keys dynamically instead of storing and using keys in the circuit statically. Only specific patterns derived from the original test patterns are valid for constructing the keys, and hence the attackers cannot shift in any other patterns to extract the correct internal responses from the scan chains or retrieve the keys from memory. Analysis results show that the proposed method achieves a very high security level, and the security level does not decrease no matter how many guessing rounds the attackers try, due to the dynamic nature of our method.

2019-04-01
Ledbetter, W., Glisson, W., McDonald, T., Andel, T., Grispos, G., Choo, K.  2018.  Digital Blues: An Investigation Into the Use of Bluetooth Protocols. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :498–503.
The proliferation of Bluetooth mobile device communications into all aspects of modern society raises security questions from both academics and practitioners. This environment prompted an investigation into the real-world use of Bluetooth protocols along with an analysis of documented security attacks. The experiment discussed in this paper collected data for one week in a local coffee shop. The data collection took about an hour each day and identified 478 distinct devices. The contribution of this research is two-fold. First, it provides insight into the real-world Bluetooth protocols that are being utilized by the general public. Second, it provides foundational research that is necessary for future Bluetooth penetration testing research.
Imran, Laiqa Binte, Farhan, Muhammad, Latif, Rana M. Amir, Rafiq, Ahsan.  2018.  Design of an IoT Based Warfare Car Robot Using Sensor Network Connectivity. Proceedings of the 2nd International Conference on Future Networks and Distributed Systems. :55:1–55:8.
Robots remain the focus of researchers and developers, and now they are moving towards IoT-based devices and mobile robots to take advantage of the facilities that different sensors enable. A robot is a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer. A robot can be controlled by a human, and its functionality can be modified at runtime by the operator. For the past few decades, researchers have been contributing to robotics. There is no end of technology, creativity, and innovation. This project develops a robot, remotely operated through an Android application and fitted with a wireless camera for monitoring purposes. Surveillance using the camera can help a soldier team make strategies at run-time. This kind of robot can be helpful for spying purposes on war fields. The Android application loaded on mobile devices connects to the security system and provides an easy-to-use GUI and visualization of the war field. The security system then acts on these commands and responds to the user. The camera and the motion detector are attached to the system for remote surveillance using the 802.11, ZigBee and Bluetooth wireless protocols. The robot has the functionality of mine detection, object detection, GPS for location and navigation, and a gun to fire at the enemy at runtime.
Celosia, Guillaume, Cunche, Mathieu.  2018.  Detecting Smartphone State Changes Through a Bluetooth Based Timing Attack. Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :154–159.
Bluetooth is a popular wireless communication technology that is available on most mobile devices. Although Bluetooth includes security and privacy preserving mechanisms, we show that a harmless inherent Bluetooth request-response mechanism can threaten users' privacy. More specifically, we introduce a timing attack that can be triggered by a remote attacker in order to infer information about a Bluetooth device's state. By observing timing variations in the L2CAP layer's ping mechanism, it is possible to detect device state changes, for instance when the device goes in or out of the locked state. Our experimental results show that change point detection analysis of the timing allows device state changes to be detected with high accuracy. Finally, we discuss applications and countermeasures.
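
A simple way to flag such a shift in ping round-trip times is a one-sided CUSUM test, sketched below on simulated timings; the RTT values, slack, and threshold are assumptions, not measurements from the paper.

```python
import numpy as np

# Sketch: detect a shift in the mean of L2CAP echo (ping) round-trip times with a
# simple CUSUM test; the timing values and threshold are simulated assumptions.
rng = np.random.default_rng(4)
rtt = np.concatenate([rng.normal(6.0, 0.5, 200),      # ms, assumed "unlocked" behaviour
                      rng.normal(9.0, 0.5, 200)])     # ms, assumed "locked" behaviour

mu0, k, h = rtt[:50].mean(), 0.5, 8.0                 # reference mean, slack, threshold
s, change_at = 0.0, None
for i, x in enumerate(rtt):
    s = max(0.0, s + (x - mu0 - k))                   # one-sided CUSUM statistic
    if s > h:
        change_at = i
        break
print(f"state change detected at sample {change_at}")
```
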
2019-03-28
Sahabandu, D., Xiao, B., Clark, A., Lee, S., Lee, W., Poovendran, R.  2018.  DIFT Games: Dynamic Information Flow Tracking Games for Advanced Persistent Threats. 2018 IEEE Conference on Decision and Control (CDC). :1136-1143.
Dynamic Information Flow Tracking (DIFT) has been proposed to detect stealthy and persistent cyber attacks that evade existing defenses such as firewalls and signature-based antivirus systems. A DIFT defense taints and tracks suspicious information flows across the network in order to identify possible attacks, at the cost of additional memory overhead for tracking non-adversarial information flows. In this paper, we present the first analytical model that describes the interaction between DIFT and adversarial information flows, including the probability that the adversary evades detection and the performance overhead of the defense. Our analytical model consists of a multi-stage game, in which each stage represents a system process through which the information flow passes. We characterize the optimal strategies for both the defense and adversary, and derive efficient algorithms for computing the strategies. Our results are evaluated on a real-world attack dataset obtained using the Refinable Attack Investigation (RAIN) framework, enabling us to draw conclusions on the optimal adversary and defense strategies, as well as the effect of valid information flows on the interaction between adversary and defense.
2019-03-25
Shehu, Yahaya Isah, James, Anne, Palade, Vasile.  2018.  Detecting an Alteration in Biometric Fingerprint Databases. Proceedings of the 2nd International Conference on Digital Signal Processing. :6–11.
Assuring the integrity of biometric fingerprint templates in fingerprint databases is of paramount importance. Fingerprint templates contain a set of fingerprint minutiae, which are various points of interest in a fingerprint. Most of the time it is assumed that the stored biometric fingerprint templates are well protected and, as such, researchers are more concerned with improving/developing biometric systems that will not suffer from an unacceptable rate of false alarms and/or missed detections. The introduction of forensic techniques into biometrics for detecting manipulation of biometric templates is of great importance, and little research has been carried out in this area. This paper investigates possible forensic techniques that could be used to detect tampering with stored biometric fingerprint templates. A Support Vector Machine (SVM) classification approach is used for this task. The original and tampered templates are used to train the SVM classifier. The fingerprint datasets from the Biometrics Ideal Test (BIT) [13] are used for training and testing the classifier. Our proposed approach detects alterations with an accuracy of 90.5%.
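
A minimal sketch of the classification step is shown below: an SVM with an RBF kernel is trained to separate feature vectors of original and tampered templates. The synthetic features stand in for whatever statistics are extracted from the BIT templates; they are assumptions, not the paper's feature set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature vectors (e.g., minutiae-count statistics) for original and
# tampered templates; in the paper these would come from the BIT fingerprint datasets.
rng = np.random.default_rng(1)
original = rng.normal(0.0, 1.0, size=(200, 8))
tampered = rng.normal(0.6, 1.2, size=(200, 8))     # alterations shift the feature statistics
X = np.vstack([original, tampered])
y = np.array([0] * 200 + [1] * 200)                # 1 = tampered

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"tamper-detection accuracy: {clf.score(X_te, y_te):.3f}")
```
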
Li, Y., Guan, Z., Xu, C.  2018.  Digital Image Self Restoration Based on Information Hiding. 2018 37th Chinese Control Conference (CCC). :4368–4372.
With the rapid development of computer networks, multimedia information is widely used, and the security of digital media has drawn much attention. A revised photo presented as forensic evidence will distort the truth of a case, and badly tampered pictures on social networks can have a negative impact on the parties involved as well. In order to ensure the authenticity and integrity of digital media, the self-recovery of digital images based on information hiding is studied in this paper. Jarvis halftoning is used to compress the digital image and obtain the backup data, and the backup data is then spread to generate the reference data. A hash algorithm generates hash data from the reference data and the original data. The reference data and hash data together form a digital watermark that is scattered and embedded in the least significant bits of the digital image. When the image is maliciously tampered with, the hash bits are used to detect and locate the tampered area, and image self-recovery is performed by extracting the reference data hidden across the whole image. In this paper, a thorough rebuild-quality assessment of self-recovered images is performed, and better performance than the traditional DCT (Discrete Cosine Transform) quantization truncation approach is achieved. Regardless of the quality of the tampered content, a reference authentication system designed according to the principles presented in this paper allows higher-quality reconstruction, recovering the original image with good quality even when a large area of the image has been tampered with.
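
The lowest-level step of such a scheme is hiding the watermark bits (reference plus hash data) in the least significant bits of the image and reading them back. A minimal sketch of that step follows; it does not implement the halftone compression, spreading, or tamper localization described above.

```python
import numpy as np

def embed_lsb(image, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    flat = image.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n):
    """Read back the first n embedded bits."""
    return image.flatten()[:n] & 1

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for the host image
watermark = rng.integers(0, 2, size=512, dtype=np.uint8)      # reference + hash bits (assumed)
stego = embed_lsb(cover.copy(), watermark)
assert np.array_equal(extract_lsb(stego, 512), watermark)
print("watermark embedded and recovered intact")
```
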
2019-03-15
Inoue, T., Hasegawa, K., Kobayashi, Y., Yanagisawa, M., Togawa, N.  2018.  Designing Subspecies of Hardware Trojans and Their Detection Using Neural Network Approach. 2018 IEEE 8th International Conference on Consumer Electronics - Berlin (ICCE-Berlin). :1-4.

Due to recent technological development, home appliances and electric devices are equipped with high-performance hardware devices. Since demand for hardware devices has increased, the production base has become internationalized to mass-produce hardware devices at low cost, and hardware vendors outsource their products to third-party vendors. Accordingly, malicious third-party vendors can easily insert malfunctions (also known as "hardware Trojans") into their products. In this paper, we design six kinds of hardware Trojans at the gate-level netlist and apply a neural-network (NN) based hardware-Trojan detection method to them. The designed hardware Trojans differ in their trigger circuits. In addition, we insert them into normal circuits and detect the hardware Trojans using a machine-learning-based hardware-Trojan detection method with neural networks. In our experiment, we trained the NN on Trojan-infected benchmarks and performed cross-validation to evaluate the learned NN. The experimental results demonstrate that the average TPR (True Positive Rate) is 72.9% and the average TNR (True Negative Rate) is 90.0%.

Nicho, M., Khan, S. N.  2018.  A Decision Matrix Model to Identify and Evaluate APT Vulnerabilities at the User Plane. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1155-1160.

While advances in cyber-security defensive mechanisms have substantially prevented malware from penetrating organizational Information Systems (IS) networks, organizational users have found themselves vulnerable to threats emanating from Advanced Persistent Threat (APT) vectors, mostly in the form of spear phishing. In this respect, the question of how an organizational user can differentiate between a genuine communication and a similar-looking fraudulent communication in an email/APT threat vector remains a dilemma. Therefore, identifying and evaluating the APT vector attributes and assigning relative weights to them can assist the user in making a correct decision when confronted with a scenario that may be genuine or a malicious APT vector. In this respect, we propose an APT Decision Matrix model which can be used as a lens to build multiple APT threat vector scenarios and to identify the threat attributes and weights that can lead to system compromise.
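
As an illustration of how a decision matrix with weighted attributes could score an incoming message, the sketch below computes a weighted sum over binary observations; the attributes, weights, and decision threshold are assumptions, not the model proposed in the paper.

```python
# Illustrative attribute weights for scoring an incoming email against APT-vector
# characteristics; the attributes and weights here are assumptions, not the paper's model.
WEIGHTS = {
    "sender_domain_mismatch": 0.30,
    "urgent_or_threatening_language": 0.20,
    "unexpected_attachment": 0.25,
    "link_target_differs_from_text": 0.25,
}

def apt_risk_score(observations):
    """Weighted sum of binary attribute observations, normalised to [0, 1]."""
    return sum(WEIGHTS[k] * int(bool(observations.get(k, False))) for k in WEIGHTS)

email = {
    "sender_domain_mismatch": True,
    "urgent_or_threatening_language": True,
    "unexpected_attachment": False,
    "link_target_differs_from_text": True,
}
score = apt_risk_score(email)
print(f"risk score = {score:.2f} -> {'treat as APT vector' if score >= 0.5 else 'likely genuine'}")
```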

2019-03-11
Ahmed, Alaa H., Sadri, Fereidoon.  2018.  Datafusion: Taking Source Confidences into Account. Proceedings of the 8th International Conference on Information Systems and Technologies. :9:1–9:6.
Data fusion is a form of information integration where large amounts of data mined from sources such as web sites, Twitter feeds, Facebook postings, blogs, email messages, news streams, and the like are integrated. Such data is inherently uncertain and unreliable. The sources have different degrees of accuracy and the data mining process itself incurs additional uncertainty. The main goal of data fusion is to discover the correct data among the uncertain and possibly conflicting mined data. We investigate a data fusion approach that, in addition to the accuracy of sources, incorporates the correctness (confidence) measures that most data mining approaches associate with mined data. There are a number of advantages in incorporating these confidences. First, we do not require a training set. The initial training set is obtained using the confidence measures. More importantly, a more accurate fusion can result by taking the confidences into account. We present an approach to determine the correctness threshold using users' feedback, and show it can significantly improve the accuracy of data fusion. We evaluate the performance and accuracy of our data fusion approach in two groups of experiments. In the first group, data sources contain random (unintentional) errors. In the second group, data sources contain intentional falsifications.
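
A minimal sketch of the core idea, fusing conflicting claims by weighting each source's vote with its accuracy multiplied by the mining confidence, is given below; the sources, accuracies, and confidences are invented for illustration and this is not the authors' algorithm.

```python
from collections import defaultdict

def fuse(claims, source_accuracy):
    """claims: list of (source, value, confidence) for one data item.
    Each claim votes with weight = source accuracy * mining confidence."""
    votes = defaultdict(float)
    for source, value, conf in claims:
        votes[value] += source_accuracy.get(source, 0.5) * conf
    return max(votes, key=votes.get)

# Three sources disagree on a person's employer; accuracies and confidences are illustrative.
source_accuracy = {"web": 0.7, "twitter": 0.5, "news": 0.9}
claims = [("web", "Acme Corp", 0.8),
          ("twitter", "Beta LLC", 0.9),
          ("news", "Acme Corp", 0.6)]
print(fuse(claims, source_accuracy))   # "Acme Corp" wins (0.7*0.8 + 0.9*0.6 = 1.10 > 0.45)
```
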
Wagner, Paul Georg, Birnstill, Pascal, Beyerer, Jürgen.  2018.  Distributed Usage Control Enforcement Through Trusted Platform Modules and SGX Enclaves. Proceedings of the 23rd ACM on Symposium on Access Control Models and Technologies. :85–91.
In the light of mobile and ubiquitous computing, sharing sensitive information across different computer systems has become an increasingly prominent practice. This development entails a demand of access control measures that can protect data even after it has been transferred to a remote computer system. In order to address this problem, sophisticated usage control models have been developed. These models include a client side reference monitor (CRM) that continuously enforces protection policies on foreign data. However, it is still unclear how such a CRM can be properly protected in a hostile environment. The user of the data on the client system can influence the client's state and has physical access to the system. Hence technical measures are required to protect the CRM on a system, which is legitimately used by potential attackers. Existing solutions utilize Trusted Platform Modules (TPMs) to solve this problem by establishing an attestable trust anchor on the client. However, the resulting protocols have several drawbacks that make them infeasible for practical use. This work proposes a reference monitor implementation that establishes trust by using TPMs along with Intel SGX enclaves. First we show how SGX enclaves can realize a subset of the existing usage control requirements. Then we add a TPM to establish and protect a powerful enforcement component on the client. Ultimately this allows us to technically enforce usage control policies on an untrusted remote system.
Psaras, Ioannis.  2018.  Decentralised Edge-Computing and IoT Through Distributed Trust. Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services. :505–507.
The emerging Internet of Things needs edge-computing - this is an established fact. In turn, edge computing needs infrastructure decentralisation. What is not necessarily established yet is that infrastructure decentralisation needs a distributed model of Internet governance and decentralised trust schemes. We discuss the features of a decentralised IoT and edge-computing ecosystem and list the components that need to be designed, as well as the challenges that need to be addressed.
Zhang, Dajun, Yu, F. Richard, Yang, Ruizhe, Tang, Helen.  2018.  A Deep Reinforcement Learning-based Trust Management Scheme for Software-defined Vehicular Networks. Proceedings of the 8th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications. :1–7.
Vehicular ad hoc networks (VANETs) have become a promising technology in intelligent transportation systems (ITS), with rising interest in expedient, safe, and highly efficient transportation. VANETs are vulnerable to malicious nodes and suffer performance degradation because of their dynamicity and lack of infrastructure. In this paper, we propose a trust-based dueling deep reinforcement learning approach (T-DDRL) for the communication of connected vehicles; we deploy a dueling network architecture in a logically centralized software-defined networking (SDN) controller. Specifically, the SDN controller is used as an agent to learn the most trusted routing path with a deep neural network (DNN) in VANETs, where the trust model is designed to evaluate neighbors' behaviour in forwarding routing information. Simulation results are presented to show the effectiveness of the proposed T-DDRL framework.
2019-03-06
Khalil, Issa M., Guan, Bei, Nabeel, Mohamed, Yu, Ting.  2018.  A Domain Is Only As Good As Its Buddies: Detecting Stealthy Malicious Domains via Graph Inference. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy. :330-341.

Inference-based techniques are one of the major approaches to analyze DNS data and detect malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. Associations should be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new approach to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on the weak "co-IP" relationship between domains (i.e., two domains are resolved to the same IP), which results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed approach not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithms are specifically designed for DNS data analysis. They are effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improve the inference efficiency, we construct a new domain-IP graph that can work well with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only a minor impact on detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.
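
As a highly simplified stand-in for the graph inference step (not the paper's belief propagation), the sketch below propagates maliciousness scores over a small domain-IP graph from seed domains; the graph, seeds, and damping factor are assumptions.

```python
# Simplified score propagation on a domain-IP bipartite graph: known-malicious domains
# push suspicion to the IPs they resolve to, and those IPs push it back to the other
# domains hosted on them. This is a sketch, not the paper's belief propagation.
edges = [("bad1.com", "10.0.0.1"), ("shady.net", "10.0.0.1"),
         ("shady.net", "10.0.0.2"), ("benign.org", "192.0.2.7")]
seeds = {"bad1.com": 1.0}                        # ground-truth malicious domains

nodes = {n for e in edges for n in e}
neighbors = {n: [] for n in nodes}
for d, ip in edges:
    neighbors[d].append(ip)
    neighbors[ip].append(d)

score = {n: seeds.get(n, 0.0) for n in nodes}
for _ in range(10):                              # a few propagation rounds
    new = {}
    for n in nodes:
        incoming = max((score[m] for m in neighbors[n]), default=0.0)
        new[n] = max(seeds.get(n, 0.0), 0.8 * incoming)   # 0.8 = assumed damping factor
    score = new

for n in sorted(nodes, key=score.get, reverse=True):
    print(f"{score[n]:.2f}  {n}")
```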

Mito, M., Murata, K., Eguchi, D., Mori, Y., Toyonaga, M.  2018.  A Data Reconstruction Method for The Big-Data Analysis. 2018 9th International Conference on Awareness Science and Technology (iCAST). :319-323.
In recent years, the big-data approach has become important within various business operations and sales judgment tactics. However, numerous privacy problems limit the progress of the associated analysis technologies. To mitigate such problems, several privacy-preserving methods have been proposed, i.e., anonymization, extreme value record elimination, fully encrypted analysis, and so on. However, fears of privacy cracking still remain and prevent the open use of big-data by other, external organizations. We propose a big-data reconstruction method that does not intrinsically use privacy data. The method uses only the statistical features of the big-data, i.e., its attribute histograms and their correlation coefficients. To verify whether valuable information can be extracted using this method, we evaluate the data using a Self-Organizing Map (SOM), one of the big-data analysis tools. The results show that the same pieces of information are extracted from our reconstructed data and from the original big-data.
Guerriero, Michele, Tamburri, Damian Andrew, Di Nitto, Elisabetta.  2018.  Defining, Enforcing and Checking Privacy Policies in Data-Intensive Applications. Proceedings of the 13th International Conference on Software Engineering for Adaptive and Self-Managing Systems. :172-182.
The rise of Big Data is leading to an increasing demand for large-scale data-intensive applications (DIAs), which have to analyse massive amounts of personal data (e.g. customers' location, cars' speed, people's heartbeats, etc.), some of which can be sensitive, meaning that its confidentiality has to be protected. In this context, DIA providers are responsible for enforcing privacy policies that account for the privacy preferences of data subjects as well as for general privacy regulations. This is the case, for instance, of data brokers, i.e. companies that continuously collect and analyse data in order to provide useful analytics to their clients. Unfortunately, the enforcement of privacy policies in modern DIAs tends to become cumbersome because (i) the number of policies can easily explode, depending on the number of data subjects, (ii) policy enforcement has to autonomously adapt to the application context, thus requiring some non-trivial runtime reasoning, and (iii) designing and developing modern DIAs is complex per se. For the above reasons, we need specific design and runtime methods enabling so-called privacy-by-design in a Big Data context. In this article we propose an approach for specifying, enforcing and checking privacy policies on DIAs designed according to the Google Dataflow model, and we show that the enforcement approach behaves correctly in the considered cases and introduces a performance overhead that is acceptable given the requirements of a typical DIA.