Biblio

Found 2356 results

Filters: Keyword is privacy
2020-09-28
Zhang, Xueru, Khalili, Mohammad Mahdi, Liu, Mingyan.  2018.  Recycled ADMM: Improve Privacy and Accuracy with Less Computation in Distributed Algorithms. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :959–965.
The alternating direction method of multipliers (ADMM) is a powerful method for solving decentralized convex optimization problems. In distributed settings, each node performs computation with its local data, and the local results are exchanged among neighboring nodes in an iterative fashion. During this iterative process, data privacy leakage arises and can accumulate significantly over many iterations, making it difficult to balance the privacy-utility tradeoff. In this study we propose Recycled ADMM (R-ADMM), in which a linear approximation is applied at every even iteration and its solution is calculated directly using only results from the previous, odd iteration. Under such a scheme, half of the updates incur no privacy loss and require much less computation than conventional ADMM. We obtain a sufficient condition for the convergence of R-ADMM and provide a privacy analysis based on objective perturbation.
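A minimal sketch of the recycling schedule described above, assuming a toy single-node least-squares problem with plain gradient steps standing in for the ADMM primal updates and Gaussian noise standing in for objective perturbation (the function names and constants are hypothetical, not the authors' exact update rules): odd iterations touch the private data and incur privacy loss, even iterations only reuse the cached noisy result.

```python
import numpy as np

def local_grad(theta, X, y):
    # The only place the private local data is touched.
    return X.T @ (X @ theta - y) / len(y)

def recycled_updates(X, y, iters=20, eta=0.1, noise_scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    cached_noisy_grad = None
    for t in range(1, iters + 1):
        if t % 2 == 1:
            # Odd iteration: full update on private data, perturbed for privacy.
            cached_noisy_grad = local_grad(theta, X, y) + rng.normal(0, noise_scale, theta.shape)
        # Even iterations recycle the already-released noisy result; no new
        # data access, so no additional privacy loss (post-processing).
        theta = theta - eta * cached_noisy_grad
    return theta

# Usage on a small synthetic regression problem standing in for one node's data.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, size=100)
print(recycled_updates(X, y))
```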
2019-06-10
Alsulami, B., Mancoridis, S..  2018.  Behavioral Malware Classification Using Convolutional Recurrent Neural Networks. 2018 13th International Conference on Malicious and Unwanted Software (MALWARE). :103-111.

Behavioral malware detection aims to improve on the performance of static signature-based techniques used by anti-virus systems, which are less effective against modern polymorphic and metamorphic malware. Behavioral malware classification aims to go beyond detection by also identifying a malware's family according to a naming scheme such as those used by anti-virus vendors. Behavioral malware classification techniques use run-time features, such as file system or network activities, to capture the behavioral characteristics of running processes. The increasing volume of malware samples, the diversity of malware families, and the variety of naming schemes given to malware samples by anti-virus vendors present challenges to behavioral malware classifiers. We describe a behavioral classifier that uses a Convolutional Recurrent Neural Network and data from Microsoft Windows Prefetch files. We demonstrate the model's improvement on the state of the art using a large dataset of malware families and four major anti-virus vendor naming schemes. The model is effective in classifying malware samples that belong to common and rare malware families and can incrementally accommodate the introduction of new malware samples and families.
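As a rough illustration of the kind of architecture the abstract describes, the sketch below wires a 1-D convolution into an LSTM over a sequence of event tokens (for example, hashed names of files referenced in a Prefetch trace). The layer sizes, vocabulary, and class count are hypothetical placeholders, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class ConvRecurrentClassifier(nn.Module):
    """Convolution over local event patterns, LSTM over the whole sequence."""
    def __init__(self, vocab_size=5000, n_families=20, embed_dim=64,
                 conv_channels=128, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(conv_channels, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_families)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        x = self.embed(tokens)                        # (batch, seq_len, embed_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))  # convolve along the sequence
        _, (h_n, _) = self.rnn(x.transpose(1, 2))     # summarize with last hidden state
        return self.head(h_n[-1])                     # logits over malware families

# Usage with dummy data: a batch of 8 traces, 200 event tokens each.
model = ConvRecurrentClassifier()
logits = model(torch.randint(0, 5000, (8, 200)))
print(logits.shape)  # torch.Size([8, 20])
```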

2020-11-04
[Anonymous].  2018.  Cloud-based Labs and Programming Assignments in Networking and Cybersecurity Courses. 2018 IEEE Frontiers in Education Conference (FIE). :1—9.

This is a full paper for innovative practice. Building a private cloud or using a public cloud is now feasible at many institutions. This paper presents the innovative design of cloud-based labs and programming assignments for a networking course and a cybersecurity course, and our experience of using the private cloud at our institution to support these learning activities. The instructor's observations and student survey data show that our approach benefits learning and teaching. This approach makes it possible and secure to develop some learning activities that otherwise would not be allowed on physical servers. It enables the instructor to support students' desire to develop programs in their preferred programming languages. It allows students to debug and test their programs on the same platform to be used by the instructor for testing and grading. The instructor does not need to spend extra time administering the computing environments. A majority (88% or more) of the students agree that working on those learning activities in the private cloud not only helps them achieve the course learning objectives, but also prepares them for their future careers.

Ngambeki, I., Nico, P., Dai, J., Bishop, M..  2018.  Concept Inventories in Cybersecurity Education: An Example from Secure Programming. 2018 IEEE Frontiers in Education Conference (FIE). :1—5.

This Innovative Practice Work in Progress paper makes the case for using concept inventories in cybersecurity education and presents an example of the development of a concept inventory in the field of secure programming. The secure programming concept inventory is being developed by a team of researchers from four universities. We used a Delphi study to define the content area to be covered by the concept inventory. Participants in the Delphi study included ten experts from academia, government, and industry. Based on the results, we constructed a concept map of secure programming concepts. We then compared this concept map to the Joint Task Force on Cybersecurity Education Curriculum 2017 guidelines to ensure complete coverage of secure programming concepts. Our mapping indicates a substantial match between the concept map and those guidelines.

Wu, X., Chen, Y., Li, S..  2018.  Contactless Smart Card Experiments in a Cybersecurity Course. 2018 IEEE Frontiers in Education Conference (FIE). :1—4.

This Innovative Practice Work in Progress paper is about cybersecurity education, which is essential for training innovative talent in the era of the Internet. Besides knowledge and skills, it is important to enhance students' awareness of cybersecurity in daily life. Considering that contactless smart cards are common and widely used in various areas, one basic and two advanced contactless smart card experiments were designed and assigned to junior students working in three-person groups in an introductory cybersecurity summer course. The experimental principles, facilities, contents, and arrangement are introduced in turn. Classroom tests were administered before and after the experiments, and a box-and-whisker plot is used to describe the distributions of the scores in both tests. The experimental output and student feedback indicate that the learning objectives were achieved through the problem-based, active, and group learning experience during the experiments.

Švábenský, V., Vykopal, J..  2018.  Gathering Insights from Teenagers’ Hacking Experience with Authentic Cybersecurity Tools. 2018 IEEE Frontiers in Education Conference (FIE). :1—4.

This Work-In-Progress Paper for the Innovative Practice Category presents a novel experiment in active learning of cybersecurity. We introduced a new workshop on hacking for an existing science-popularizing program at our university. The workshop participants, 28 teenagers, played a cybersecurity game designed for training undergraduates and professionals in penetration testing. Unlike in learning environments that are simplified for young learners, the game features a realistic virtual network infrastructure. This allows exploring security tools in an authentic scenario, which is complemented by a background story. Our research aim is to examine how young players approach using cybersecurity tools by interacting with the professional game. A preliminary analysis of the game session showed several challenges that the workshop participants faced. Nevertheless, they reported learning about security tools and exploits, and 61% of them reported wanting to learn more about cybersecurity after the workshop. Our results support the notion that young learners should be allowed more hands-on experience with security topics, both in formal education and informal extracurricular events.

Zeng, Z., Deng, Y., Hsiao, I., Huang, D., Chung, C..  2018.  Improving student learning performance in a virtual hands-on lab system in cybersecurity education. 2018 IEEE Frontiers in Education Conference (FIE). :1—5.

This Research Work in Progress paper presents a study on improving student learning performance in a virtual hands-on lab system in cybersecurity education. As the demand for cybersecurity-trained professionals is rapidly increasing, virtual hands-on lab systems have been introduced into cybersecurity education as a tool to enhance students' learning. To improve learning in a virtual hands-on lab system, instructors need to understand: which learning activities are associated with students' learning performance in this system? What relationship exists between different learning activities? What can instructors do to improve learning outcomes in this system? However, few of these questions have been studied for virtual hands-on labs in cybersecurity education. In this research, we present our recent findings identifying two learning activities that are positively associated with students' learning performance. Notably, the learning activity of reading lab materials (p < 0.01) plays a more significant role in hands-on learning than the learning activity of working on lab tasks (p < 0.05) in cybersecurity education. In addition, a student who spends longer reading lab materials may work longer on lab tasks (p < 0.01).

2021-05-25
Javidi, Giti, Sheybani, Ehsan.  2018.  K-12 Cybersecurity Education, Research, and Outreach. 2018 IEEE Frontiers in Education Conference (FIE). :1—5.
This research-to-practice work-in-progress addresses a new approach to cybersecurity education. The cybersecurity skills shortage is reaching critical proportions. The consensus in the STEM community is that the problem begins in K-12 schools, with too few students interested in STEM subjects. One way to ensure a larger cybersecurity pipeline is to train more high school teachers not only to teach cybersecurity in their schools or integrate cybersecurity concepts into their classrooms, but also to promote IT security as an attractive career path. The proposed research will result in a unique and novel curriculum and scalable program in the area of cybersecurity and a set of powerful tools for a fun learning experience in cybersecurity education. In this project, we focus on the potential to advance research agendas in cybersecurity, train the future generation in cybersecurity skills, and answer fundamental research questions that still exist in blended learning methodologies for cybersecurity education and assessment. Leadership and entrepreneurship skills are also added to the mix to prepare students for real-world problems. Delivery methods, timing, format, pacing, and outcomes alignment will all be assessed to provide a baseline for future research, as well as synergy and integration with existing cybersecurity programs to expand or leverage for new cybersecurity and STEM educational research. This is a new model for cybersecurity education, leadership, and entrepreneurship, and it offers the possibility of a significant leap toward a more advanced cybersecurity educational methodology. The project will also provide a prototype for innovation coupled with character-building and ethical leadership.
2020-08-28
Knierim, Pascal, Kiss, Francisco, Schmidt, Albrecht.  2018.  Look Inside: Understanding Thermal Flux Through Augmented Reality. 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). :170—171.
The transition from high school to university is an exciting time for students that brings many new challenges. Particularly in the fields of science, technology, engineering, and mathematics, the university dropout rate may reach up to 40%. The study of physics relies on many abstract concepts and quantities that are not directly visible, such as energy or heat. We developed a mixed reality application for education that augments the thermal conduction of metal by overlaying a representation of temperature as a false-color visualization directly onto the object. This real-time augmentation avoids split attention and overcomes the perception gap by extending what the human eye can see. Augmented and virtual reality environments allow students to perform experiments that would otherwise be impossible to conduct for security or financial reasons. With the application, we aim to foster a deeper understanding of the learning material and higher engagement during the studies.
2019-06-10
Kornish, D., Geary, J., Sansing, V., Ezekiel, S., Pearlstein, L., Njilla, L..  2018.  Malware Classification Using Deep Convolutional Neural Networks. 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). :1-6.

In recent years, deep convolutional neural networks (DCNNs) have won many contests in machine learning, object detection, and pattern recognition. Furthermore, deep learning techniques have achieved exceptional performance in image classification, reaching accuracy levels beyond human capability. Malware variants from similar categories often contain similarities due to code reuse. Converting malware samples into images can cause these patterns to manifest as image features, which can be exploited for DCNN classification. Techniques for converting malware binaries into images for visualization and classification have been reported in the literature, and while these methods do reach a high level of classification accuracy on training datasets, they tend to be vulnerable to overfitting and perform poorly on previously unseen samples. In this paper, we explore and document a variety of techniques for representing malware binaries as images with the goal of discovering a format best suited for deep learning. We implement a database of malware binaries from several families, stored in hexadecimal format. These malware samples are converted into images using various approaches and are used to train a neural network to recognize visual patterns in the input and classify malware based on the feature vectors. Each image type is assessed using a variety of learning models, such as transfer learning with existing DCNN architectures and feature extraction for support vector machine classifier training. Each technique is evaluated in terms of classification accuracy, result consistency, and time per trial. Our preliminary results indicate that improved image representation has the potential to enable more effective classification of new malware.
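One of the simplest binary-to-image mappings explored in this line of work treats each byte as a grayscale pixel and reshapes the byte stream into a fixed-width 2-D array; the sketch below illustrates that idea. The width heuristic and file path are hypothetical, and the paper compares several other representations beyond this one.

```python
import numpy as np

def binary_to_image(path, width=256):
    """Render a malware binary as a grayscale image for DCNN input."""
    data = np.fromfile(path, dtype=np.uint8)            # each byte becomes a pixel, 0-255
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)   # zero-pad the final row
    padded[:len(data)] = data
    return padded.reshape(height, width)

# Usage (hypothetical sample path):
# img = binary_to_image("samples/abc123.bin")
# print(img.shape, img.dtype)  # e.g. (1742, 256) uint8
```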

2020-11-04
Bell, S., Oudshoorn, M..  2018.  Meeting the Demand: Building a Cybersecurity Degree Program With Limited Resources. 2018 IEEE Frontiers in Education Conference (FIE). :1—7.

This innovative practice paper considers the heightened awareness of the need for cybersecurity programs in light of several well-publicized cyber-attacks in recent years. An examination of the academic job market reveals that a significant number of institutions are looking to hire new faculty in the area of cybersecurity. Additionally, a growing number of universities are starting to offer courses, certifications, and degrees in cybersecurity. Other recent activity includes the development of a model cybersecurity curriculum and the creation of program accreditation criteria for cybersecurity through ABET. This sudden and significant growth in demand for cybersecurity expertise has some similarities to the significant demand for networking faculty that Computer Science programs experienced in the late 1980s as a result of the rise of the Internet. This paper examines the resources necessary to respond to the demand for cybersecurity courses and programs and draws some parallels and distinctions to the demand for networking faculty over 25 years ago. Faculty and administration are faced with a plethora of questions as they approach this problem: what degree and courses to offer, what certifications to consider, which curriculum to incorporate, and how to deliver the material (online, face-to-face, or something in between)? However, the most pressing question in today's fiscal climate in higher education is: what resources will it take to deliver a cybersecurity program?

Deng, Y., Lu, D., Chung, C., Huang, D., Zeng, Z..  2018.  Personalized Learning in a Virtual Hands-on Lab Platform for Computer Science Education. 2018 IEEE Frontiers in Education Conference (FIE). :1—8.

This Innovative Practice full paper presents a cloud-based personalized learning lab platform. Personalized learning is gaining popularity in online computer science education due to its ability to pace learning progress and adapt the instructional approach to each individual learner from a diverse background. Among the various instructional methods in computer science education, hands-on labs have unique requirements for understanding learners' behavior and assessing their performance for personalization. However, these requirements are rarely addressed in existing research. In this paper, we propose a personalized learning platform called ThoTh Lab, specifically designed for computer science hands-on labs in a cloud environment. ThoTh Lab can identify a student's learning style from their activities and adapt learning material accordingly. With awareness of student learning styles, instructors are able to use techniques better suited to the specific student and hence improve the speed and quality of the learning process. With that in mind, ThoTh Lab also provides student performance prediction, which allows instructors to adjust the learning progress and take other measures to help students in a timely manner. For example, instructors may provide more detailed instructions to help slow starters, while assigning more challenging labs to quick learners in the same class. To evaluate ThoTh Lab, we conducted an experiment and collected data from an upper-division cybersecurity class for undergraduate students at Arizona State University in the US. The results show that ThoTh Lab can identify learning styles with reasonable accuracy. Our lab-use study of the personalized platform in a senior-level cybersecurity course also shows that the presented solution improves student engagement: students gain a better understanding of lab assignments, spend more effort on hands-on projects, and thus achieve greatly enhanced learning outcomes.

2020-10-16
Liu, Liping, Piao, Chunhui, Jiang, Xuehong, Zheng, Lijuan.  2018.  Research on Governmental Data Sharing Based on Local Differential Privacy Approach. 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE). :39—45.

With the construction and implementation of government information resource sharing mechanisms, the protection of citizens' privacy has become a vital issue for government departments and the public. This paper discusses the risk of citizens' privacy disclosure related to data sharing among government departments and analyzes the current major privacy protection models for data sharing. To address the low efficiency and low reliability of existing e-government applications, a statistical data sharing framework among governmental departments based on local differential privacy and blockchain is established, and its applicability and advantages are illustrated through example analysis. The characteristics of the private blockchain enhance the security, credibility, and responsiveness of information sharing between departments. Local differential privacy provides better usability and security for sharing statistics: it keeps the statistics usable while also protecting the privacy of citizens.
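For readers unfamiliar with local differential privacy, the sketch below shows the classic randomized-response mechanism, the simplest local-DP primitive for a yes/no statistic: each citizen perturbs their own answer before it leaves their device, and the aggregator debiases the noisy counts. This is a generic illustration of local DP, not necessarily the mechanism used in the paper's framework.

```python
import math
import random

def randomize(bit, epsilon):
    """Each respondent flips their own bit with calibrated probability."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_truth else 1 - bit

def estimate_proportion(noisy_bits, epsilon):
    """The aggregator debiases the noisy reports to recover the true rate."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(noisy_bits) / len(noisy_bits)
    return (observed - (1 - p)) / (2 * p - 1)

# Usage: 10,000 respondents, true "yes" rate 30%, epsilon = 1.
true_bits = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
reports = [randomize(b, 1.0) for b in true_bits]
print(round(estimate_proportion(reports, 1.0), 3))  # close to 0.3
```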

2020-11-04
Dai, J..  2018.  Situation Awareness-Oriented Cybersecurity Education. 2018 IEEE Frontiers in Education Conference (FIE). :1—8.

This Research to Practice Full Paper presents a new methodology in cybersecurity education. In the context of the cybersecurity profession, the 'isolation problem' refers to the observed isolation of different knowledge units, as well as the isolation of technical and business perspectives. Due to limitations in existing cybersecurity education, professionals entering the field are often trapped in microscopic perspectives and struggle to extend their findings to grasp the big picture in a target network scenario. Guided by a previously developed and published framework named the “cross-layer situation knowledge reference model” (SKRM), which delivers comprehensive, big-picture situation awareness, our new methodology aims to develop suites of teaching modules to address the above issues. The modules, featuring interactive hands-on labs that emulate real-world multiple-step attacks, will help students form a knowledge network instead of isolated conceptual knowledge units. Students will not just be required to leverage various techniques/tools to analyze breakpoints and complete individual modules; they will be required to connect the outputs of these techniques/tools logically to infer the ground truth and gain big-picture awareness of the cyber situation. The modules can be used separately or as a whole in a typical network security course.

2019-01-31
Menet, François, Berthier, Paul, Gagnon, Michel, Fernandez, José M..  2018.  Spartan Networks: Self-Feature-Squeezing Networks for Increased Robustness in Adversarial Settings. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2246–2248.

Deep learning models are vulnerable to adversarial inputs, samples modified in order to maximize the error of the system. We hereby introduce Spartan Networks, deep learning models that are inherently more resistant to adversarial examples, without any input preprocessing outside the network or adversarial training. These networks have an adversarial layer within the network designed to starve the network of information, using a new activation function to discard data. This layer trains the neural network to filter out usually irrelevant parts of its input. These models thus have slightly lower precision, but report higher robustness under attack than unprotected models.

2019-06-10
Nathezhtha, T., Yaidehi, V..  2018.  Cloud Insider Attack Detection Using Machine Learning. 2018 International Conference on Recent Trends in Advance Computing (ICRTAC). :60-65.

Security has always been a major issue in the cloud. Data sources are the most valuable and most vulnerable assets, and they are what attackers aim to steal. If data is lost, the privacy and security of every cloud user are compromised. Even though a cloud network is secured externally, the threat of an internal attacker exists. Internal attackers compromise a vulnerable user node and gain access to the system. They are connected to the cloud network internally and launch attacks while pretending to be trusted users. Machine learning approaches are widely used for cloud security issues. Existing machine-learning-based security approaches classify a node as misbehaving based on short-term behavioral data, and they do not differentiate whether a misbehaving node is a malicious node or a broken node. To address this problem, this paper proposes an Improvised Long Short-Term Memory (ILSTM) model, which learns the behavior of a user, automatically trains itself, and stores the behavioral data. The model can easily classify user behavior as normal or abnormal. The proposed ILSTM not only identifies an anomalous node but also determines whether a misbehaving node is a broken node, a new user node, or a compromised node, using the calculated trust factor. The proposed model not only detects attacks accurately but also reduces false alarms in the cloud network.

2020-09-28
Kohli, Nitin, Laskowski, Paul.  2018.  Epsilon Voting: Mechanism Design for Parameter Selection in Differential Privacy. 2018 IEEE Symposium on Privacy-Aware Computing (PAC). :19–30.
The behavior of a differentially private system is governed by a parameter epsilon, which sets a balance between protecting the privacy of individuals and returning accurate results. While a system owner may use a number of heuristics to select epsilon, existing techniques may be unresponsive to the needs of the users whose data is at risk. A promising alternative is to allow users to express their preferences for epsilon. In a system we call epsilon voting, users report the parameter values they want to a chooser mechanism, which aggregates them into a single value. We apply techniques from mechanism design to ask whether such a chooser mechanism can itself be truthful, private, anonymous, and also responsive to users. Without imposing restrictions on user preferences, the only feasible mechanisms belong to a class we call randomized dictatorships with phantoms. This is a restrictive class in which at most one user has any effect on the chosen epsilon. On the other hand, when users exhibit single-peaked preferences, a broader class of mechanisms - ones that generalize the median and other order statistics - becomes possible.
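To make the single-peaked case concrete, here is a minimal sketch of one mechanism in the generalized-median family the abstract alludes to: choose epsilon as the median of the users' reports together with a fixed set of phantom values. The phantom placement is a hypothetical design choice rather than a value taken from the paper, and the sketch ignores the chooser mechanism's own privacy requirement.

```python
import statistics

def epsilon_vote(reported_epsilons, phantoms=(0.1, 0.5, 1.0)):
    """Median of real reports plus fixed phantoms.

    With single-peaked preferences, no user can pull the outcome toward
    their peak by misreporting, so truthful reporting is a best response.
    """
    return statistics.median(list(reported_epsilons) + list(phantoms))

# Usage: four users who want very different privacy levels.
print(epsilon_vote([0.05, 0.2, 2.0, 5.0]))  # 0.5, a compromise value
```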
2019-06-10
Roseline, S. A., Geetha, S..  2018.  Intelligent Malware Detection Using Oblique Random Forest Paradigm. 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :330-336.

With the increase in the popularity of computerized online applications, the analysis and detection of a growing number of newly discovered stealthy malware poses a significant challenge to the security community. Signature-based and behavior-based detection techniques are becoming inefficient at detecting new, unknown malware. Machine learning solutions are employed to counter such intelligent malware and allow more comprehensive malware detection. This capability enables automatic analysis of malware behavior. The proposed oblique random forest ensemble learning technique is efficient for malware classification. The effectiveness of the proposed method is demonstrated on three malware classification datasets from various sources. The results are compared with other variants of decision tree learning models. The proposed system performs better than the existing system in terms of classification accuracy and false positive rate.

2018-06-20
Wang, Qinglong, Guo, Wenbo, Zhang, Kaixuan, Ororbia, II, Alexander G., Xing, Xinyu, Liu, Xue, Giles, C. Lee.  2017.  Adversary Resistant Deep Neural Networks with an Application to Malware Detection. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1145–1153.
Outside the highly publicized victories in the game of Go, there have been numerous successful applications of deep learning in the fields of information retrieval, computer vision, and speech recognition. In cybersecurity, an increasing number of companies have begun exploring the use of deep learning (DL) in a variety of security tasks, with malware detection among the more popular. These companies claim that deep neural networks (DNNs) could help turn the tide in the war against malware infection. However, DNNs are vulnerable to adversarial samples, a shortcoming that plagues most, if not all, statistical and machine learning models. Recent research has demonstrated that those with malicious intent can easily circumvent deep-learning-powered malware detection by exploiting this weakness. To address this problem, previous work developed defense mechanisms based on augmenting training data or enhancing model complexity. However, after analyzing DNN susceptibility to adversarial samples, we discover that the current defense mechanisms are limited and, more importantly, cannot provide theoretical guarantees of robustness against adversarial sample-based attacks. As such, we propose a new adversary-resistant technique that obstructs attackers from constructing impactful adversarial samples by randomly nullifying features within data vectors. Our proposed technique is evaluated on a real-world dataset with 14,679 malware variants and 17,399 benign programs. We theoretically validate the robustness of our technique, and empirically show that our technique significantly boosts DNN robustness to adversarial samples while maintaining high accuracy in classification. To demonstrate the general applicability of our proposed method, we also conduct experiments using the MNIST and CIFAR-10 datasets, widely used in image recognition research.
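The random nullification idea described above can be sketched in a few lines: multiply every input vector by a freshly sampled binary mask at both training and test time, so an attacker cannot predict which features will survive. The nullification rate below is a hypothetical value, and the sketch omits the surrounding DNN.

```python
import numpy as np

def nullify_features(x, rate=0.3, rng=None):
    """Zero out a random subset of features in each sample of a batch."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= rate   # True keeps the feature, False nullifies it
    return x * mask

# Usage: a batch of 4 feature vectors with 10 features each.
batch = np.random.default_rng(0).normal(size=(4, 10))
print(nullify_features(batch, rate=0.3))
```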
2018-02-15
Brkan, Maja.  2017.  AI-supported Decision-making Under the General Data Protection Regulation. Proceedings of the 16th Edition of the International Conference on Artificial Intelligence and Law. :3–8.
The purpose of this paper is to analyse the rules of the General Data Protection Regulation on automated decision making in the age of Big Data and to explore how to ensure the transparency of such decisions, in particular those taken with the help of algorithms. The GDPR, in its Article 22, prohibits automated individual decision-making, including profiling. At first impression, it seems that this provision strongly protects individuals and potentially even hampers the future development of AI in decision making. However, it can be argued that this prohibition, containing numerous limitations and exceptions, looks like a Swiss cheese with giant holes in it. Moreover, in the case of automated decisions involving personal data of the data subject, the GDPR obliges the controller to provide the data subject with 'meaningful information about the logic involved' (Articles 13 and 14). If we link this information to the rights of the data subject, we can see that the information about the logic involved needs to enable him/her to express his/her point of view and to contest the automated decision. While this requirement fits well within the broader framework of the GDPR's quest for a high level of transparency, it also raises several queries, particularly in cases where the decision is taken with the help of algorithms: What exactly needs to be revealed to the data subject? How can an algorithm-based decision be explained? Apart from technical obstacles, we also face intellectual property and state secrecy obstacles to this 'algorithmic transparency'.
2018-12-03
Zhou, Zhe, Li, Zhou, Zhang, Kehuan.  2017.  All Your VMs Are Disconnected: Attacking Hardware Virtualized Network. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :249–260.
Single Root I/O Virtualization (SRIOV) allows one physical device to be used by multiple virtual machines simultaneously without mediation by the hypervisor. This technique significantly decreases the overhead of I/O virtualization. However, according to our latest findings, it also introduces a high-risk security issue that enables an adversary-controlled VM to cut off the connectivity of the host machine, given the limited filtering capabilities provided by SRIOV devices. As a showcase, we demonstrate two attacks against SRIOV NICs by exploiting a vulnerability in the standard network management protocol, OAM. The vulnerability surfaces because SRIOV NICs treat the packets passing through OAM as data-plane packets and allow untrusted VMs to send and receive these packets on behalf of the host. By examining several off-the-shelf SRIOV NICs and switches, we show that such an attack can easily turn off the network connection within a short period of time. Finally, we propose a defense mechanism that runs on existing hardware and can be readily deployed.
2018-01-23
Margolis, Joel, Oh, Tae (Tom), Jadhav, Suyash, Jeong, Jaehoon (Paul), Kim, Young Ho, Kim, Jeong Neyo.  2017.  Analysis and Impact of IoT Malware. Proceedings of the 18th Annual Conference on Information Technology Education. :187–187.
As Internet of Things (IoT) devices become more and more prevalent, it is important for research to be done on their security and integrity. By doing so, consumers can make well-informed choices about the smart devices that they purchase. This poster presents information about how three different IoT-specific malware variants operate and impact newly connected devices.
2018-09-12
Doan, Khue, Quang, Minh Nguyen, Le, Bac.  2017.  Applied Cuckoo Algorithm for Association Rule Hiding Problem. Proceedings of the Eighth International Symposium on Information and Communication Technology. :26–33.
Nowadays, the database security problem is attracting significant interest in the data mining field: how can legitimate data be exploited while avoiding the disclosure of sensitive information? Among the many approaches, an outstanding solution is privacy preservation in association rule mining, which hides sensitive rules. In recent years, meta-heuristic algorithms have become effective for this goal; one of them is the cuckoo optimization algorithm for association rule hiding (COA4ARH). In this paper, an improvement of COA4ARH is introduced to minimize the side effect of missing non-sensitive rules. The main contribution of this study is a new preprocessing stage that determines the minimum number of transactions necessary for initializing an initial habitat, thus restricting modification operations on the original data. To evaluate the effectiveness of the proposed method, we conducted several experiments on real datasets. The experimental results show that the improved approach has higher performance compared to the original algorithm.
2018-05-24
Marohn, Byron, Wright, Charles V., Feng, Wu-chi, Rosulek, Mike, Bobba, Rakesh B..  2017.  Approximate Thumbnail Preserving Encryption. Proceedings of the 2017 on Multimedia Privacy and Security. :33–43.
Thumbnail preserving encryption (TPE) was suggested by Wright et al. [Information Hiding & Multimedia Security Workshop 2015] as a way to balance privacy and usability for online image sharing. The idea is to encrypt a plaintext image into a ciphertext image that has roughly the same thumbnail while retaining the original image format. At the same time, TPE allows users to take advantage of much of the functionality of online photo management tools, while still providing some level of privacy against the service provider. In this work we present two new approximate TPE encryption schemes. In our schemes, ciphertexts and plaintexts have perceptually similar, but not identical, thumbnails. Our constructions are the first TPE schemes designed to work well with JPEG compression. In addition, we show that they have provable security guarantees that characterize precisely what information about the plaintext is leaked by the ciphertext image. We empirically evaluate our schemes according to the similarity of plaintext and ciphertext thumbnails, the increase in file size under JPEG compression, and the preservation of perceptual image hashes, among other aspects. We also show how approximate TPE can be an effective tool to thwart inference attacks by machine-learning image classifiers, which have been shown to be effective against other image obfuscation techniques.
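A toy way to see why a thumbnail can survive encryption: permute pixels only within each thumbnail block, so every block keeps its average (and hence the downscaled thumbnail is unchanged) while the full-resolution content is scrambled. The sketch below illustrates that idea only; the paper's schemes are approximate, JPEG-aware, and keyed by a real cipher rather than the seeded generator used here.

```python
import numpy as np

def scramble_within_blocks(img, block=16, seed=42):
    """Permute pixels inside each block; block means (the thumbnail) are preserved."""
    rng = np.random.default_rng(seed)      # stand-in for a keyed permutation
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = out[y:y + block, x:x + block].reshape(block * block, -1)
            shuffled = tile[rng.permutation(len(tile))]
            out[y:y + block, x:x + block] = shuffled.reshape(block, block, -1).squeeze()
    return out

# Usage: block averages match before and after scrambling.
img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
enc = scramble_within_blocks(img)
print(img[:16, :16].mean(), enc[:16, :16].mean())  # identical
```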
2018-01-23
Taubmann, Benjamin, Kolosnjaji, Bojan.  2017.  Architecture for Resource-Aware VMI-based Cloud Malware Analysis. Proceedings of the 4th Workshop on Security in Highly Connected IT Systems. :43–48.
Virtual machine introspection (VMI) is a technology with many possible applications, such as malware analysis and intrusion detection. However, this technique is resource intensive, as inspecting program behavior includes recording a high number of events caused by the analyzed binary and related processes. In this paper we present an architecture that leverages cloud resources for virtual-machine-based malware analysis in order to train a classifier for detecting cloud-specific malware. This architecture is designed with the resource consumption of applying VMI-based technology in production systems in mind, in particular the overhead of tracing a large set of system calls. In order to minimize the data acquisition overhead, we use a data-driven approach from the area of resource-aware machine learning. This approach enables us to optimize the trade-off between malware detection performance and the overhead of our VMI-based tracing system.