Biblio

Found 2356 results

Filters: Keyword is privacy
2020-04-03
Alom, Md. Zulfikar, Carminati, Barbara, Ferrari, Elena.  2019.  Adapting Users' Privacy Preferences in Smart Environments. 2019 IEEE International Congress on Internet of Things (ICIOT). :165—172.
A smart environment is a physical space where devices are connected to provide continuous support to individuals and make their life more comfortable. For this purpose, a smart environment collects, stores, and processes a massive amount of personal data. In general, service providers collect these data according to their privacy policies. To enhance privacy control, individuals can explicitly express their privacy preferences, stating conditions on how their data have to be used and managed. Typically, privacy checking is handled through hard matching of users' privacy preferences against service providers' privacy policies, denying all service requests whose privacy policies do not fully match the individual's privacy preferences. However, this hard matching might be too restrictive in a smart environment because it denies services that only partially satisfy the individual's privacy preferences. To cope with this challenge, in this paper we propose a soft privacy matching mechanism, able to relax, in a controlled way, some conditions of users' privacy preferences so that they match service providers' privacy policies. To this aim, we exploit machine learning algorithms to build a classifier that is able to make decisions on future service requests by learning which privacy preference components a user is prone to relax, as well as the relaxation tolerance. We test our approach on two realistic datasets, obtaining promising results.
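For orientation, the sketch below shows one way such a relaxation classifier could look: a decision tree trained on which preference components (purpose, retention, sharing, location) a provider policy violates and whether the user previously accepted that mismatch. The features, data, and classifier choice are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of "soft" privacy matching: a classifier learns, from past
# user decisions, whether a request that only partially matches the user's
# privacy preferences should still be granted.
from sklearn.tree import DecisionTreeClassifier

# Each row encodes which preference components a provider policy violates
# (1 = mismatch) for purpose, retention, third-party sharing, and location.
X = [
    [0, 1, 0, 0],   # only retention mismatched
    [0, 0, 1, 0],   # third-party sharing mismatched
    [1, 0, 0, 0],   # purpose mismatched
    [0, 1, 0, 1],   # retention and location mismatched
]
y = [1, 0, 1, 0]    # 1 = user accepted the relaxation in the past, 0 = refused

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

# A new request mismatching only the retention component:
print(clf.predict([[0, 1, 0, 0]]))  # e.g. [1] -> relax the preference and grant the service
```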
2020-10-12
Marrone, Stefano, Sansone, Carlo.  2019.  An Adversarial Perturbation Approach Against CNN-based Soft Biometrics Detection. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.
The use of biometric-based authentication systems has spread over daily-life consumer electronics. Over the years, researchers' interest has shifted from hard biometrics (such as fingerprints, voice and keystroke dynamics) to soft biometrics (such as age, ethnicity and gender), mainly using the latter to improve the effectiveness of authentication systems. While newer approaches are constantly being proposed by domain experts, in recent years Deep Learning has risen to prominence in many computer vision tasks, also becoming the current state of the art for several biometric approaches. However, since the automatic processing of data rich in sensitive information could expose users to privacy threats associated with its unfair use (e.g., inferring gender or ethnicity), researchers have started to focus on the development of defensive strategies in view of a more secure and private AI. The aim of this work is to exploit Adversarial Perturbation, namely approaches able to mislead state-of-the-art CNNs by injecting a suitable small perturbation into the input image, to protect subjects against unwanted soft-biometrics-based identification by automatic means. In particular, since ethnicity is one of the most critical soft biometrics, as a case study we focus on the generation of adversarial stickers that, once printed, can hide a subject's ethnicity in a real-world scenario.
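As background, the snippet below is a minimal gradient-sign (FGSM-style) perturbation, one standard adversarial-perturbation technique; it is not the paper's printed-sticker generation procedure, and the model, input, and label are placeholders.

```python
# Minimal FGSM-style perturbation sketch against a stand-in soft-biometric classifier.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)   # placeholder classifier (untrained)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
true_label = torch.tensor([0])                           # placeholder class index

loss = F.cross_entropy(model(image), true_label)
loss.backward()                                          # gradient w.r.t. the input

epsilon = 0.03                                           # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
# `adversarial` stays visually close to the original image but is more likely
# to be misclassified by the model.
```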
2020-08-07
Zhu, Tianqing, Yu, Philip S..  2019.  Applying Differential Privacy Mechanism in Artificial Intelligence. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1601—1609.
Artificial Intelligence (AI) has attracted a large amount of attention in recent years. However, several new problems, such as privacy violations, security issues, or effectiveness, have been emerging. Differential privacy has several attractive properties that make it quite valuable for AI, such as privacy preservation, security, randomization, composition, and stability. Therefore, this paper presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer based on those properties, demonstrating that current AI can benefit from differential privacy mechanisms. In addition, the previous usage of differential privacy mechanisms in private machine learning, distributed machine learning, and fairness in models is discussed, bringing up several possible avenues for using differential privacy mechanisms in AI. The purpose of this paper is to deliver the initial idea of how to integrate AI with differential privacy mechanisms and to explore more possibilities to improve AI's performance.
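For context, the basic building block that such mechanisms compose is the Laplace mechanism; a minimal sketch follows, with an illustrative counting query and epsilon value (these are assumptions for illustration, not the paper's experiments).

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Return a differentially private answer to a numeric query."""
    scale = sensitivity / epsilon          # noise scale grows as epsilon shrinks
    return true_value + rng.laplace(0.0, scale)

# Example: a counting query (sensitivity 1) answered with epsilon = 0.5.
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```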
2020-01-21
Suksomboon, Kalika, Shen, Zhishu, Ueda, Kazuaki, Tagami, Atsushi.  2019.  C2P2: Content-Centric Privacy Platform for Privacy-Preserving Monitoring Services. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:252–261.
Motivated by ubiquitous surveillance cameras in a smart city, a monitoring service can be provided to citizens. However, the rise of privacy concerns may disrupt this advanced service. Yet, the existing cloud-based services have not clearly proven that they can preserve Wth-privacy, in which the relationship among three types of information, i.e., who requests the service, what the target is, and where the camera is, does not leak. We address this problem by proposing a content-centric privacy platform (C2P2) that enables the construction of a Wth-privacy-preserving monitoring service without cloud dependency. C2P2 uses an image classification model of a target serving as the key to access the monitoring service specific to the target. In C2P2, communication is based on information-centric networking (ICN), which enables privacy preservation to be centered on the content itself rather than relying on a centralized system. Moreover, to preserve the privacy of bystanders, C2P2 separates the sensitive information (e.g., human faces) from the non-sensitive information (e.g., image background), while the privacy-aware forwarding strategies in C2P2 enable data aggregation and prevent privacy leakage resulting from false positives of image recognition. We evaluate the privacy leakage of C2P2 compared to that of the cloud-based system. The privacy analysis shows that, compared to the cloud-based system, C2P2 achieves a lower privacy loss ratio while reducing the communication cost significantly.
2020-04-03
Lachner, Clemens, Rausch, Thomas, Dustdar, Schahram.  2019.  Context-Aware Enforcement of Privacy Policies in Edge Computing. 2019 IEEE International Congress on Big Data (BigDataCongress). :1—6.
Privacy is a fundamental concern that confronts systems dealing with sensitive data. The lack of robust solutions for defining and enforcing privacy measures continues to hinder the general acceptance and adoption of these systems. Edge computing has been recognized as a key enabler for privacy-enhanced applications, and has opened new opportunities. In this paper, we propose a novel privacy model based on context-aware edge computing. Our model leverages the context of data to make decisions about how these data need to be processed and managed to achieve privacy. Based on a scenario from the eHealth domain, we show how our generalized model can be used to implement and enact complex domain-specific privacy policies. We illustrate our approach by constructing real-world use cases involving a mobile Electronic Health Record that interacts with, and moves between, different environments.
2020-02-26
Rahman, Obaid, Quraishi, Mohammad Ali Gauhar, Lung, Chung-Horng.  2019.  DDoS Attacks Detection and Mitigation in SDN Using Machine Learning. 2019 IEEE World Congress on Services (SERVICES). 2642-939X:184–189.

Software Defined Networking (SDN) is very popular due to the benefits it provides such as scalability, flexibility, monitoring, and ease of innovation. However, it needs to be properly protected from security threats. One major attack that plagues the SDN network is the distributed denial-of-service (DDoS) attack. There are several approaches to prevent the DDoS attack in an SDN network. We have evaluated a few machine learning techniques, i.e., J48, Random Forest (RF), Support Vector Machine (SVM), and K-Nearest Neighbors (K-NN), to detect and block the DDoS attack in an SDN network. The evaluation process involved training and selecting the best model for the proposed network and applying it in a mitigation and prevention script to detect and mitigate attacks. The results showed that J48 performs better than the other evaluated algorithms, especially in terms of training and testing time.
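To make the evaluated model-comparison step concrete, the sketch below trains the four classifier families named in the abstract on synthetic placeholder flow features; J48 is Weka's C4.5 decision tree and is approximated here with scikit-learn's DecisionTreeClassifier. The data, features, and split are assumptions, not the paper's SDN dataset.

```python
# Hedged sketch of comparing the classifiers evaluated for DDoS detection in SDN.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.datasets import make_classification

# Placeholder for per-flow statistics (packet counts, byte counts, durations, ...).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "J48 (decision tree)": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```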

2020-04-17
Jmila, Houda, Blanc, Gregory.  2019.  Designing Security-Aware Service Requests for NFV-Enabled Networks. 2019 28th International Conference on Computer Communication and Networks (ICCCN). :1—9.

Network Function Virtualization (NFV) is a recent concept in which virtualization enables the shift from network functions (e.g., routers, switches, load-balancers, proxies) on specialized hardware appliances to software images running on all-purpose, high-volume servers. The resource allocation problem in the NFV environment has received considerable attention in the past years. However, little attention has been paid to the security aspects of the problem, in spite of the increasing number of vulnerabilities faced by cloud-based applications. Securing these services is an urgent need in order to fully benefit from the advantages offered by NFV. In this paper, we show how a network service request, composed of a set of service function chains (SFCs), should be modified and enriched to take into consideration the security requirements of the supported service. We examine well-known security best practices and propose a two-step algorithm that extends the initial SFC requests to a more complex chaining model that includes the security requirements of the service.

2020-02-17
Yang, Chen, Liu, Tingting, Zuo, Lulu, Hao, Zhiyong.  2019.  An Empirical Study on the Data Security and Privacy Awareness to Use Health Care Wearable Devices. 2019 16th International Conference on Service Systems and Service Management (ICSSSM). :1–6.
Recently, several health care wearable devices that can intervene in health and collect personal health data have emerged in the medical market. Although health care wearable devices promote the integration of multi-layer medical resources and bring new kinds of health applications for users, some problems inevitably arise. These mainly concern the protection of medical and health data and of users' privacy. From the users' point of view, the irrational use of medical and health data may have negative psychological and physical effects on users. From the government's perspective, such data may be sold by private businesses in the international arena and threaten national security. The most direct precaution against the problem is users' own initiative. For better understanding, a research model is designed around the following five aspects: Security knowledge (SK), Security attitude (SAT), Security practice (SP), Security awareness (SAW) and Security conduct (SC). To verify the model, structural equation analysis, an empirical approach, was applied to examine its validity, and all the results showed that SK, SAT, SP, SAW and SC are important factors affecting users' data security and privacy protection awareness.
2020-03-23
Kaul, Sonam Devgan, Hatzinakos, Dimitrios.  2019.  Learning Automata Based Secure Multi Agent RFID Authentication System. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
Radio frequency identification (RFID) wireless sensing technology has been widely adopted and developed over the last decade and has been utilized for monitoring and autonomous identification of objects. However, wider utilization of RFID technologies has introduced challenges such as preserving the security and privacy of sensitive data while maintaining high quality of service. Thus, in this work, we deliberately build up an RFID system that utilizes a learning-automata-based multi-agent intelligent system to greatly enhance and secure message transactions and to improve operational efficiency. The incorporation of these two advancements and technological developments provides maximum benefit in terms of expertly and securely handling data in the RFID scenario. In the proposed work, RFID tags with built-in learning automata, viewed as players, choose their optimal strategy by maximizing their own utility function to achieve long-term benefit. This is possible if they transmit their utility securely to the back-end server and then, correspondingly, safely obtain a new utility function from the server so as to behave optimally in their environment. Hence, our proposed authentication protocol securely transfers utility from the learning-automata-enabled tags to the reader and then to the server. Moreover, we verify the security and privacy of the proposed system using the automatic formal verification tool Scyther.
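For background, a classic learning-automaton update is the linear reward-inaction scheme sketched below; the tag-side automaton, utility function, and reward signal in the paper are not specified here, so this is only a generic illustration.

```python
# Illustrative linear reward-inaction (L_R-I) update for a learning automaton.
import random

def lri_update(probs, chosen, rewarded, a=0.1):
    """Update the action-probability vector after one interaction with the environment."""
    if not rewarded:                      # reward-inaction: penalties leave probabilities unchanged
        return probs
    new = [p * (1 - a) for p in probs]    # shrink all actions...
    new[chosen] += a                      # ...and move the freed mass to the rewarded action
    return new

probs = [0.25, 0.25, 0.25, 0.25]          # four candidate transmission strategies
for _ in range(100):
    choice = random.choices(range(4), weights=probs)[0]
    rewarded = (choice == 2)              # pretend strategy 2 is the best one
    probs = lri_update(probs, choice, rewarded)
print(probs)                              # probability mass concentrates on action 2
```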
2020-04-03
Gerl, Armin, Becher, Stefan.  2019.  Policy-Based De-Identification Test Framework. 2019 IEEE World Congress on Services (SERVICES). 2642-939X:356—357.
Protecting the privacy of individuals is a basic right, which has to be considered in our data-centered society in which new technologies emerge rapidly. To preserve the privacy of individuals, de-identifying technologies have been developed, including pseudonymization, personal privacy anonymization, and privacy models. Each has several variations with different properties and contexts, which poses the challenge of properly selecting and applying de-identification methods. We tackle this challenge by proposing a policy-based de-identification test framework for a systematic approach to experimenting with and evaluating various combinations of methods and their interplay. Evaluation of the experimental results regarding performance and utility is considered within the framework. We propose a domain-specific language expressing the required complex configuration options, including the data set, policy generator, and various de-identification methods.
2020-06-22
Adesuyi, Tosin A., Kim, Byeong Man.  2019.  Preserving Privacy in Convolutional Neural Network: An ∊-tuple Differential Privacy Approach. 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII). :570–573.
Recent breakthroughs in neural networks have led to the rise of convolutional neural networks (CNNs), which have been found to be very efficient especially in the areas of image recognition and classification. This success is traceable to the availability of large datasets and a CNN's capability to learn salient and complex data features, which subsequently produce a reusable output model (Fθ). The model Fθ is often made available (e.g. on the cloud as-a-service) for others (clients) to train their data or do transfer learning; however, an adversary can perpetrate a model inversion attack on Fθ to recover training data, hence compromising the sensitivity of the data used to build the model. This is possible because a CNN, as a variant of deep neural networks, memorizes most of its training data during learning. Consequently, this poses a privacy concern, especially when medical or financial data are used to build the model. Existing research that proffers privacy-preserving approaches, however, suffers from significant accuracy degradation, which has left privacy-preserving models largely theoretical. In this paper, we propose an ϵ-tuple differential privacy approach based on neuron impact factor estimation to preserve the privacy of a CNN model without significant accuracy degradation. We experiment with our approach on two large datasets, and the results show no significant accuracy degradation.
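As a point of reference, a generic clip-and-noise gradient step (in the spirit of DP-SGD) is sketched below; the paper's ϵ-tuple, neuron-impact-factor mechanism is different, so this only illustrates how noise can be injected during CNN training rather than the authors' method.

```python
# Generic gradient-perturbation step: clip the gradient, add noise, then update.
import torch

def private_sgd_step(model, loss, lr=0.01, clip=1.0, noise_multiplier=1.1):
    model.zero_grad()
    loss.backward()
    # Bound the gradient norm so no single batch dominates the update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            noise = torch.normal(0.0, noise_multiplier * clip, size=p.grad.shape)
            p.add_(-lr * (p.grad + noise))   # noisy gradient-descent update
```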
2020-09-04
Osia, Seyed Ali, Rassouli, Borzoo, Haddadi, Hamed, Rabiee, Hamid R., Gündüz, Deniz.  2019.  Privacy Against Brute-Force Inference Attacks. 2019 IEEE International Symposium on Information Theory (ISIT). :637—641.
Privacy-preserving data release is about disclosing information about useful data while retaining the privacy of sensitive data. Assuming that the sensitive data is threatened by a brute-force adversary, we define Guessing Leakage as a measure of privacy, based on the concept of guessing. After investigating the properties of this measure, we derive the optimal utility-privacy trade-off via a linear program with any f-information adopted as the utility measure, and show that the optimal utility is a concave and piece-wise linear function of the privacy-leakage budget.
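For orientation, the quantities below sketch a Massey-style guessing cost and a difference-type leakage between the prior and posterior guessing costs; the paper's precise definition of guessing leakage and its linear-program formulation may differ.

```latex
% Illustrative only; not necessarily the paper's exact definitions.
G(S) = \sum_{i \ge 1} i \, p_{(i)}, \qquad p_{(1)} \ge p_{(2)} \ge \cdots
\quad \text{(expected number of guesses for an optimal brute-force adversary)}

G(S \mid Y) = \sum_{y} P_Y(y) \, G(S \mid Y = y), \qquad
L(S \to Y) = G(S) - G(S \mid Y)
\quad \text{(reduction in guessing cost after observing the released data } Y\text{)}
```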
2020-04-24
Rodriguez, Manuel, Fathy, Hosam.  2019.  Self-Synchronization of Connected Vehicles in Traffic Networks: What Happens When We Think of Vehicles as Waves? 2019 American Control Conference (ACC). :2651—2657.

In this paper we consider connected and autonomous vehicles (CAV) in a traffic network as moving waves defined by their frequency and phase. This outlook allows us to develop a multi-layer decentralized control strategy that achieves the following desirable behaviors: (1) safe spacing between vehicles traveling down the same road, (2) coordinated safe crossing at intersections of conflicting flows, (3) smooth velocity profiles when traversing adjacent intersections. The approach consists of using the Kuramoto equation to synchronize the phase and frequency of agents in the network. The output of this layer serves as the reference trajectory for a back-stepping controller that interfaces the first-order dynamics of the phase-domain layer and the second-order dynamics of the vehicle. We show the performance of the strategy for a single intersection and a small urban grid network. The literature has focused on solving the intersection coordination problem in both a centralized and decentralized manner. Some authors have even used the Kuramoto equation to achieve synchronization of traffic lights. Our proposed strategy falls in the rubric of a decentralized approach, but unlike previous work, it defines the vehicles as the oscillating agents and leverages their inter-connectivity to achieve network-wide synchronization. In this way, it combines the benefits of coordinating the crossing of vehicles at individual intersections and synchronizing flow from adjacent junctions.
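For reference, the synchronization layer builds on the standard Kuramoto model; a common network form is shown below, where θ_i is the phase assigned to vehicle i, ω_i its natural frequency, K the coupling gain, and N_i its set of connected neighbors (the paper's multi-layer variant and back-stepping interface are not reproduced here).

```latex
\frac{d\theta_i}{dt} \;=\; \omega_i \;+\; \frac{K}{N} \sum_{j \in \mathcal{N}_i} \sin\!\left(\theta_j - \theta_i\right)
```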

2020-02-17
Zamula, Alexander, Rassomakhin, Sergii, Krasnobayev, Victor, Morozov, Vladyslav.  2019.  Synthesis of Discrete Complex Nonlinear Signals with Necessary Properties of Correlation Functions. 2019 IEEE 2nd Ukraine Conference on Electrical and Computer Engineering (UKRCON). :999–1002.
The main effectiveness parameters of information and communication systems (ICS) are reliability, resiliency, network bandwidth, service quality, profitability and cost, malware protection, information security, etc. Most modern ICS are multiuser systems, which implement the most promising method of distributing subscribers (users), namely code division, in which subscribers are provided with appropriate forms of discrete sequences (signatures). Since code division of channels in multiuser systems is based on differences between signals, the construction of an ICS and its performance indicators are determined by the properties of the chosen signals. Spread-spectrum technology is a promising direction of information security for telecommunication systems. The currently used data generation and processing methods, as well as the broadband signal classes used as physical data carriers, do not provide the level of information security (information secrecy, imitation resistance) and noise immunity (interference-resistant reception, structural secrecy) required for some ICS applications. In this case, discrete sequences (DS) that are based on nonlinear construction rules and have improved correlation, ensemble and structural properties should be used as the sequences that spread the spectrum (manipulate the carrier frequency). In particular, with the use of such signals as the physical carrier of information or as synchronization signals, the time required to disclose the structure of the signal in use increases, and generating "optimal" interference, from the standpoint of a counteracting station, becomes problematic. Complex signals obtained on the basis of such sequences have structural properties similar to random (pseudorandom) sequences, as well as the necessary correlation and ensemble properties. When designing signals for applications such as delay measurement, signal detection, and station synchronization, minimizing the side-lobe levels of the autocorrelation function (ACF) is essential. In this paper, the problem of optimizing the synthesis of nonlinear discrete sequences, which have improved ensemble, structural and autocorrelation properties, is formulated and solved. The use of nonlinear discrete signals formed on the basis of such sequences will provide the necessary levels of interference protection, structural secrecy and information secrecy for ICS operation. The increased requirements for ICS information security and performance under internal and external threats (influences) define an objectively existing technical and scientific problem whose solution is the goal of this work. The paper presents the results of solving the actual problem of improving performance indicators for information and communication systems, in particular secrecy, information security and noise immunity under interfering influences, based on the synthesis of new classes of nonlinear discrete cryptographic signals (CS) with the necessary properties.
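To make the optimization target concrete, the sketch below computes the periodic autocorrelation function of a candidate sequence and its peak side-lobe level, the quantity the synthesis seeks to minimize; the random-phase candidate sequence is a placeholder, not one of the synthesized nonlinear signals.

```python
# Sketch of the ACF side-lobe metric used when comparing candidate sequences.
import numpy as np

def periodic_acf(seq):
    s = np.asarray(seq, dtype=complex)
    n = len(s)
    return np.array([np.sum(s * np.conj(np.roll(s, -k))) for k in range(n)])

def peak_sidelobe_level(seq):
    acf = np.abs(periodic_acf(seq))
    return acf[1:].max() / acf[0]          # largest side lobe relative to the main lobe

candidate = np.exp(2j * np.pi * np.random.default_rng(0).random(63))  # placeholder sequence
print(peak_sidelobe_level(candidate))      # synthesis searches for sequences minimizing this value
```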
2020-09-21
Arrieta, Miguel, Esnaola, Iñaki, Effros, Michelle.  2019.  Universal Privacy Guarantees for Smart Meters. 2019 IEEE International Symposium on Information Theory (ISIT). :2154–2158.
Smart meters enable improvements in electricity distribution system efficiency at some cost in customer privacy. Users with home batteries can mitigate this privacy loss by applying charging policies that mask their underlying energy use. A battery charging policy is proposed and shown to provide universal privacy guarantees subject to a constraint on energy cost. The guarantee bounds our strategy's maximal information leakage from the user to the utility provider under general stochastic models of user energy consumption. The policy construction adapts coding strategies for non-probabilistic permuting channels to this privacy problem.
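To illustrate the masking idea only, the sketch below is a naive "flatten the load" battery policy that hides the household's consumption pattern behind a near-constant grid demand; the paper's policy, cost constraint, and universal leakage guarantee are different, and the capacity and target values are assumptions.

```python
# Naive load-flattening battery policy (illustration only, not the paper's construction).
def masked_demand(consumption, capacity=5.0, target=2.0):
    """Return the grid demand the utility sees when a home battery smooths usage."""
    charge = capacity / 2.0                # start half full
    grid = []
    for load in consumption:
        desired = target - load            # >0: charge the battery, <0: discharge it
        # Limit the action by remaining headroom and stored energy.
        delta = max(min(desired, capacity - charge), -charge)
        charge += delta
        grid.append(load + delta)          # what the smart meter actually reports
    return grid

print(masked_demand([0.5, 3.0, 1.0, 4.0, 0.2]))  # reported demand stays close to the target
```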
2020-02-10
Sani, Abubakar Sadiq, Yuan, Dong, Bao, Wei, Yeoh, Phee Lep, Dong, Zhao Yang, Vucetic, Branka, Bertino, Elisa.  2019.  Xyreum: A High-Performance and Scalable Blockchain for IIoT Security and Privacy. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1920–1930.
As cyber attacks on the Industrial Internet of Things (IIoT) remain a major challenge, blockchain has emerged as a promising technology for IIoT security due to its decentralization and immutability characteristics. Existing blockchain designs, however, introduce high computational complexity and latency challenges which are unsuitable for IIoT. This paper proposes Xyreum, a new high-performance and scalable blockchain for enhanced IIoT security and privacy. Xyreum uses a Time-based Zero-Knowledge Proof of Knowledge (T-ZKPK) with authenticated encryption to perform Mutual Multi-Factor Authentication (MMFA). T-ZKPK properties are also used to support Key Establishment (KE) for securing transactions. Our approach for reaching consensus, which is a blockchain group decision-making process, is based on lightweight cryptographic algorithms. We evaluate our scheme with respect to security, privacy, and performance, and the results show that, compared with existing relevant blockchain solutions, our scheme is secure, privacy-preserving, and achieves a significant decrease in computational complexity and latency while maintaining high scalability. Furthermore, we explain how to use our scheme to strengthen the security of the REMME protocol, a blockchain-based security protocol deployed in several application domains.
2020-12-11
Abusnaina, A., Khormali, A., Alasmary, H., Park, J., Anwar, A., Mohaisen, A..  2019.  Adversarial Learning Attacks on Graph-based IoT Malware Detection Systems. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1296—1305.

IoT malware detection using control flow graph (CFG)-based features and deep learning networks has been widely explored. The main goal of this study is to investigate the robustness of such models against adversarial learning. We designed two approaches to craft adversarial IoT software: off-the-shelf methods and a Graph Embedding and Augmentation (GEA) method. For the off-the-shelf adversarial learning attacks, we examine eight different adversarial learning methods to force the model into misclassification. The GEA approach aims to preserve the functionality and practicality of the generated adversarial sample through a careful embedding of a benign sample into a malicious one. Intensive experiments are conducted to evaluate the performance of the proposed method, showing that off-the-shelf adversarial attack methods are able to achieve a misclassification rate of 100%. In addition, we observed that the GEA approach is able to cause all IoT malware samples to be misclassified as benign. The findings of this work highlight the essential need for more robust detection tools against adversarial learning, including features that are not easy to manipulate, unlike CFG-based features. The implications of the study are quite broad, since the approach challenged in this work is widely used for other applications using graphs.

Ge, X., Pan, Y., Fan, Y., Fang, C..  2019.  AMDroid: Android Malware Detection Using Function Call Graphs. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :71—77.

With the rapid development of the mobile Internet, Android has become the most popular mobile operating system. Due to the open nature of Android, countless malicious applications are hidden in a large number of benign applications, which pose great threats to users. Most previous malware detection approaches mainly rely on features such as permissions, API calls, and opcode sequences. However, these approaches fail to capture the structural semantics of applications. In this paper, we propose AMDroid, which leverages function call graphs (FCGs) representing the behaviors of applications and applies graph kernels to automatically learn the structural semantics of applications from FCGs. We evaluate AMDroid on the Genome Project, and the experimental results show that AMDroid is effective at detecting Android malware with 97.49% detection accuracy.
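As a rough illustration of how an FCG can feed a graph kernel, the sketch below performs Weisfeiler-Lehman-style relabeling over a toy call graph and returns a label histogram whose comparison across apps yields a kernel value; the concrete kernel, node labels, and toy API names are assumptions, not AMDroid's implementation.

```python
# Sketch: function call graph -> Weisfeiler-Lehman-style label histogram.
import networkx as nx
from collections import Counter

def wl_features(fcg, iterations=2):
    labels = {n: str(fcg.nodes[n].get("api", "internal")) for n in fcg.nodes}
    hist = Counter(labels.values())
    for _ in range(iterations):
        # Each node's new label combines its label with its callees' labels.
        labels = {
            n: labels[n] + "|" + ",".join(sorted(labels[m] for m in fcg.successors(n)))
            for n in fcg.nodes
        }
        hist.update(labels.values())
    return hist      # comparing two apps' histograms gives a WL kernel value

g = nx.DiGraph()
g.add_edge("onCreate", "sendTextMessage")   # toy call edges
g.add_edge("onCreate", "getDeviceId")
g.nodes["sendTextMessage"]["api"] = "SmsManager.sendTextMessage"
g.nodes["getDeviceId"]["api"] = "TelephonyManager.getDeviceId"
print(wl_features(g))
```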

2020-10-29
Jiang, Jianguo, Li, Song, Yu, Min, Li, Gang, Liu, Chao, Chen, Kai, Liu, Hui, Huang, Weiqing.  2019.  Android Malware Family Classification Based on Sensitive Opcode Sequence. 2019 IEEE Symposium on Computers and Communications (ISCC). :1—7.

Android malware family classification is an advanced task in Android malware analysis, detection and forensics. Existing methods and models have achieved a certain success in Android malware detection, but the accuracy and efficiency are still not up to expectations, especially in the context of multi-class classification with imbalanced training data. To address those challenges, we propose an Android malware family classification model that analyzes the code's specific semantic information based on sensitive opcode sequences. In this work, we construct a sensitive semantic feature, the sensitive opcode sequence, using opcodes, sensitive APIs, STRs and actions, and propose to analyze the code's specific semantic information and generate a semantics-related vector for Android malware family classification based on this feature. Besides, for the minority families, we adopt an oversampling technique based on the sensitive opcode sequence. Finally, we evaluate our method on the Drebin dataset and select the top 40 malware families for experiments. The experimental results show that the Total Accuracy and Average AUC (Area Under Curve) reach 99.50% and 98.86%, with 45.17s per Android malware sample, and even if the number of malware families increases, these results remain good.

Priyamvada Davuluru, Venkata Salini, Narayanan Narayanan, Barath, Balster, Eric J..  2019.  Convolutional Neural Networks as Classification Tools and Feature Extractors for Distinguishing Malware Programs. 2019 IEEE National Aerospace and Electronics Conference (NAECON). :273—278.

Classifying malware programs is a research area attracting great interest from the anti-malware industry. In this research, we propose a system that visualizes malware programs as images and distinguishes them using Convolutional Neural Networks (CNNs). We study the performance of several well-established CNN-based algorithms such as AlexNet, ResNet and VGG16 using transfer learning approaches. We also propose a computationally efficient CNN-based architecture for classification of malware programs. In addition, we study the performance of these CNNs as feature extractors by using Support Vector Machine (SVM) and K-nearest Neighbors (kNN) for classification purposes. We also propose fusion methods to boost the performance further. We make use of the publicly available database provided by the Microsoft Malware Classification Challenge (BIG 2015) for this study. Our overall accuracy is 99.4% on a set of 2174 test samples comprising 9 different classes, thereby setting a new benchmark.
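For orientation, the "CNN as feature extractor" pipeline can be sketched as below: a pretrained backbone produces embeddings of malware images and a classical classifier (an SVM here) is trained on top. The backbone choice, image tensors, and labels are placeholder assumptions, not the paper's architecture or dataset.

```python
# Sketch: pretrained CNN as a feature extractor, SVM as the classifier.
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()           # drop the final layer -> 512-d features
backbone.eval()

images = torch.rand(32, 3, 224, 224)        # placeholder malware-as-image tensors
labels = torch.randint(0, 9, (32,))         # 9 malware families, as in BIG 2015

with torch.no_grad():
    features = backbone(images).numpy()     # extract embeddings without fine-tuning

svm = SVC(kernel="rbf")
svm.fit(features, labels.numpy())
print(svm.predict(features[:5]))
```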

2020-11-04
Thomas, L. J., Balders, M., Countney, Z., Zhong, C., Yao, J., Xu, C..  2019.  Cybersecurity Education: From Beginners to Advanced Players in Cybersecurity Competitions. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :149—151.

Cybersecurity competitions have been shown to be an effective approach for promoting student engagement through active learning in cybersecurity. Players can gain hands-on experience in puzzle-based or capture-the-flag type tasks that promote learning. However, novice players with limited prior knowledge of cybersecurity often find it difficult to get a foothold on a problem and become frustrated at an early stage. To enhance student engagement, it is important to study the experiences of novices to better understand their learning needs. To achieve this goal, we conducted a 4-month longitudinal case study involving 11 undergraduate students participating in a college-level cybersecurity competition, the National Cyber League (NCL) competition. The competition includes two individual games and one team game. Questionnaires and in-person interviews were conducted before and after each game to collect the players' feedback on their experience, learning challenges and needs, and information about their motivation, interests and confidence level. The collected data demonstrate that the primary concern going into these competitions stemmed from a lack of knowledge regarding cybersecurity concepts and tools. Players' interest and confidence can be increased through systematic training.

Shin, S., Seto, Y., Kasai, Y., Ka, R., Kuroki, D., Toyoda, S., Hasegawa, K., Midorikawa, K..  2019.  Development of Training System and Practice Contents for Cybersecurity Education. 2019 8th International Congress on Advanced Applied Informatics (IIAI-AAI). :172—177.

In this paper, we propose a cybersecurity exercise system in a virtual computer environment. Human resource development for the security field is an urgent issue: the threat of cyber-attacks has recently been increasing and many incidents are occurring, but there are not enough security personnel to respond. Some universities and companies conduct education using commercial training systems on the market. However, building and operating such a training system is expensive, and therefore difficult for higher education institutions and SMEs. For this reason, we developed CyExec, a cybersecurity exercise system consisting of a virtual computer environment using VirtualBox and Docker. We also implemented WebGoat, an OSS vulnerability diagnosis and learning program, on CyExec and developed an attack-and-defense exercise program.

2019-12-16
Hou, Ming, Li, Dequan, Wu, Xiongjun, Shen, Xiuyu.  2019.  Differential Privacy of Online Distributed Optimization under Adversarial Nodes. 2019 Chinese Control Conference (CCC). :2172-2177.

Nowadays, many applications involve big data, and big data analysis methods appear in many fields. As a preliminary attempt to address the challenge of big data analysis, this paper presents a distributed online learning algorithm based on differential privacy. Since online learning can effectively process sensitive data, we introduce the concept of differential privacy into distributed online learning algorithms, with the aim of ensuring data privacy during online learning and preventing adversarial nodes from inferring any important data information. In particular, for different adversary models, we consider different types of graphs to tolerate a limited number of adversaries near each regular node or a globally limited number of adversaries.

2020-01-28
Park, Sunnyeo, Kim, Dohyeok, Son, Sooel.  2019.  An Empirical Study of Prioritizing JavaScript Engine Crashes via Machine Learning. Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security. :646–657.

The early discovery of security bugs in JavaScript (JS) engines is crucial for protecting Internet users from adversaries abusing zero-day vulnerabilities. Browser vendors, bug bounty hunters, and security researchers have been eager to find such security bugs by leveraging state-of-the-art fuzzers as well as their domain expertise. They report a bug when observing a crash after executing their JS test since a crash is an early indicator of a potential bug. However, it is difficult to identify whether such a crash indeed invokes security bugs in JS engines. Thus, unskilled bug reporters are unable to assess the security severity of their new bugs with JS engine crashes. Today, this classification of a reported security bug is completely manual, depending on the verdicts from JS engine vendors. We investigated the feasibility of applying various machine learning classifiers to determine whether an observed crash triggers a security bug. We designed and implemented CRScope, which classifies security and non-security bugs from given crash-dump files. Our experimental results on 766 crash instances demonstrate that CRScope achieved 0.85, 0.89, and 0.93 Area Under Curve (AUC) for Chakra, V8, and SpiderMonkey crashes, respectively. CRScope also achieved 0.84, 0.89, and 0.95 precision for Chakra, V8, and SpiderMonkey crashes, respectively. This outperforms the previous study and existing tools including Exploitable and AddressSanitizer. CRScope is capable of learning domain-specific expertise from the past verdicts on reported bugs and automatically classifying JS engine security bugs, which helps improve the scalable classification of security bugs.

KADOGUCHI, Masashi, HAYASHI, Shota, HASHIMOTO, Masaki, OTSUKA, Akira.  2019.  Exploring the Dark Web for Cyber Threat Intelligence Using Machine Learning. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :200–202.

In recent years, cyber-attack techniques have become increasingly sophisticated, and blocking attacks is more and more difficult, even when one kind of countermeasure or another is taken. In order to handle this situation successfully, it is crucial to predict cyber attacks, take appropriate precautions, and make effective use of the cyber intelligence that enables these actions. Malicious hackers share various kinds of information through particular communities such as the dark web, indicating that a great deal of intelligence exists in cyberspace. This paper focuses on forums on the dark web and proposes an approach to extract, from a huge number of forums, those that include important information or intelligence, and to identify the traits of each forum using methodologies such as machine learning and natural language processing. This approach will allow us to grasp the emerging threats in cyberspace and take appropriate measures against malicious activities.