Biblio
We consider the problem of protecting cloud services from simultaneous white-box and black-box attacks. Recent research in cryptographic program obfuscation considers the problem of protecting the confidentiality of programs and any secrets in them. In this model, a provable program obfuscation solution makes white-box attacks on the program no more useful than black-box attacks. Motivated by very recent results showing successful black-box attacks on machine learning programs run by cloud servers, we propose and study the approach of augmenting the program obfuscation model so as to achieve, in at least some classes of application scenarios, program confidentiality in the presence of both white-box and black-box attacks. We propose and formally define encrypted-input program obfuscation, where a key is shared between the entity obfuscating the program and the entity encrypting the program's inputs. We believe this model may be of interest in practical scenarios where cloud programs operate over encrypted data received from associated sensors (e.g., Internet of Things, Smart Grid). Under standard intractability assumptions, we show several results that are not known in the traditional cryptographic program obfuscation model; most notably: Yao's garbled circuit technique implies encrypted-input program obfuscation hiding all gates of an arbitrary polynomial-size circuit, and very efficient encrypted-input program obfuscation exists for range membership programs and a class of machine learning programs (i.e., decision trees). The latter solutions incur only a small constant performance overhead over the equivalent unobfuscated programs.
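To make the shared-key, encrypted-input model concrete, the following is a minimal Python sketch, not the paper's garbled-circuit or range constructions: the obfuscator stores only PRF tags of the points inside a (small) range, and the sensor submits the PRF tag of its reading under the same shared key, so the server evaluates membership without seeing the range endpoints or the input in the clear.

    import hmac, hashlib, os

    def tag(key, value):
        # Deterministic PRF tag of an integer input under the shared key.
        return hmac.new(key, str(value).encode(), hashlib.sha256).digest()

    class ObfuscatedRangeProgram:
        # Toy "encrypted-input" range membership: only PRF tags of the points
        # in [lo, hi] are stored, so the endpoints never appear in the clear.
        def __init__(self, key, lo, hi):
            self._tags = {tag(key, v) for v in range(lo, hi + 1)}  # small ranges only

        def evaluate(self, encrypted_input):
            return encrypted_input in self._tags

    key = os.urandom(32)                   # shared between obfuscator and sensor
    program = ObfuscatedRangeProgram(key, 18, 65)
    print(program.evaluate(tag(key, 42)))  # True:  42 lies in [18, 65]
    print(program.evaluate(tag(key, 70)))  # False: 70 lies outside the range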
This study reviews the development of shared (community) solar and community choice aggregation in the U.S. states of California and New York. Both states are leaders in energy-transition policy in the U.S., but they have different trajectories for the two forms of energy decentralization. Shared solar is more advanced in New York, but community choice is more advanced in California. Using a field theory framework, the comparative review of the trajectories of energy decentralization shows how differences in restructuring and regulatory rules affect outcomes. Differences in the rules for retail competition and authority for utilities to own distributed generation assets, plus the role of civil society and the attention from elected officials, shape the intensity of conflict and outcomes. They also contribute to the development of different types of community choice in the two states. In addition to showing how institutional conditions associated with different types of restructured markets shape the opportunities for decentralized energy, the study also examines how the efforts of actors to gain support for and to legitimate their policy preferences involve reference to broad social values.
Artificial intelligence systems have enabled significant benefits for users and society, but as the data that feed them keep increasing, they also open the door to privacy and security leaks. Serious threats to the right to privacy have obliged governments to enact specific regulations to ensure privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents containing sensitive information, the right to privacy can be preserved by data obfuscation procedures. The capability of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing amount of documents to process. Artificial intelligence could substantially reduce the effort of the human officers and speed up these processes. However, until enough knowledge is available in a machine-readable format, automatic and effective systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge data sets for training artificial intelligence models that support human experts in privacy-preserving tasks. We applied a mixture of natural language processing techniques to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting the validation. The results, measured in terms of precision, recall and F1-score across the two domains, were promising and encourage further investigation.
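As a hedged illustration of this kind of automatic labelling (not the authors' pipeline), the sketch below tags candidate sensitive spans in raw text with an off-the-shelf NER model; the spaCy model name, the chosen entity types and the single "SENSITIVE" label are assumptions made for the example.

    import spacy

    nlp = spacy.load("en_core_web_sm")          # assumed pre-installed NER model
    SENSITIVE_TYPES = {"PERSON", "DATE", "GPE", "ORG"}

    def auto_label(text):
        doc = nlp(text)
        # Emit (start, end, label) spans that a human expert can then validate.
        return [(ent.start_char, ent.end_char, "SENSITIVE")
                for ent in doc.ents if ent.label_ in SENSITIVE_TYPES]

    print(auto_label("Patient John Doe was admitted to Mercy Hospital on 3 May 2021."))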
Code randomization is considered the basis of mitigations against code reuse attacks, and it fundamentally supports recent proposals such as execute-only memory (XOM) that target dynamic return-oriented programming (ROP) attacks. However, existing code randomization methods struggle to strike a good balance between high randomization entropy and semantic consistency. In particular, they often ignore code semantic consistency, incurring performance loss and incompatibility with current security schemes such as control flow integrity (CFI). In this paper, we present an enhanced code randomization method termed HCRESC, which significantly improves randomization entropy while ensuring semantic consistency between the variants and the original code. HCRESC reschedules instructions within the scope of functions rather than basic blocks, thus producing more variants of the original code while preserving its semantics. We implement HCRESC on Linux for the x86-64 architecture and demonstrate that it increases the randomization entropy of x86-64 code by more than 120% compared with existing methods while leaving the control flow and size of the code unaltered.
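The core idea, semantics-preserving instruction rescheduling, can be sketched as follows. This is a toy Python model, not the HCRESC implementation: instructions are permuted only in ways that respect read/write dependencies, so every emitted order computes the same result, and the entropy comes from the random choice among ready instructions.

    import random

    def reschedule(instructions):
        # instructions: list of (text, reads, writes)
        n = len(instructions)
        deps = [set() for _ in range(n)]
        for j in range(n):
            for i in range(j):
                _, ri, wi = instructions[i]
                _, rj, wj = instructions[j]
                # RAW, WAR and WAW hazards force i to stay before j.
                if (wi & rj) or (ri & wj) or (wi & wj):
                    deps[j].add(i)
        order, placed = [], set()
        while len(order) < n:
            ready = [k for k in range(n) if k not in placed and deps[k] <= placed]
            pick = random.choice(ready)   # randomization entropy comes from here
            order.append(pick)
            placed.add(pick)
        return [instructions[k][0] for k in order]

    func = [("mov rax, [a]", {"a"}, {"rax"}),
            ("mov rbx, [b]", {"b"}, {"rbx"}),
            ("add rax, rbx", {"rax", "rbx"}, {"rax"}),
            ("mov [c], rax", {"rax"}, {"c"})]
    print(reschedule(func))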
Experimentation aimed at assessing the value of complex visualisation approaches, compared with alternative methods for data analysis, is challenging. The interplay between participants' prior knowledge and experience, a diverse range of experimental or real-world data sets, and dynamic interaction with the display system presents challenges when seeking timely, affordable and statistically relevant experimental results. This paper outlines a hybrid approach proposed for experimentation with complex interactive data analysis tools, specifically for computer network traffic analysis. The approach involves a structured survey completed after free engagement with the software platform by expert participants. The survey captures objective and subjective data points relating to the experience, with the goal of assessing software performance in a way that is supported by statistically significant experimental results. This work is particularly applicable to the field of network analysis for cyber security, as well as to military cyber operations and intelligence data analysis.
The rapid growth of artificial intelligence has contributed a great deal to the technology world. As traditional algorithms fail to meet human needs in real time, machine learning and deep learning algorithms have achieved great success in applications such as classification systems, recommendation systems and pattern recognition. Emotion plays a vital role in determining a person's thoughts, behaviour and feelings. By exploiting deep learning, an emotion recognition system can be built, and applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies five different human facial emotions. The model is trained, tested and validated on a manually collected image dataset.
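A minimal Keras sketch of a five-class facial-emotion DCNN is shown below; the layer sizes, the 48x48 grayscale input and the training configuration are illustrative assumptions, not the authors' architecture.

    from tensorflow.keras import layers, models

    def build_emotion_cnn(num_classes=5):
        # Small convolutional stack ending in a 5-way softmax over emotions.
        model = models.Sequential([
            layers.Input(shape=(48, 48, 1)),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.5),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_emotion_cnn()
    model.summary()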
Facial expressions are one of the most powerful, natural and immediate means for human beings to convey their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and expression classification. Experiments were performed on three facial expression databases. The results show that the proposed FER method achieves good recognition accuracy, up to 95.85%.
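To illustrate the landmark-graph idea (a toy sketch, not the paper's network), one graph-convolution step over facial landmarks can be written in NumPy: node features are the 2-D landmark coordinates, and each node aggregates its neighbours through a normalized adjacency matrix; the chain-shaped adjacency and the layer width are assumptions for the example.

    import numpy as np

    def gcn_layer(X, A, W):
        A_hat = A + np.eye(A.shape[0])                 # add self-loops
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # ReLU

    rng = np.random.default_rng(0)
    X = rng.normal(size=(68, 2))        # 68 landmarks, (x, y) features
    A = np.zeros((68, 68))
    for i in range(67):                 # simple chain over landmarks, for illustration
        A[i, i + 1] = A[i + 1, i] = 1
    W = rng.normal(size=(2, 16))        # learnable weights in a real model
    print(gcn_layer(X, A, W).shape)     # (68, 16) embeddings fed to a classifier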
As a part of body language, facial expression reflects the current emotional and psychological state of a person. Recognizing facial expressions can help us understand others and enhance communication. In this paper, we propose a facial expression recognition method based on ensemble learning with convolutional neural networks. Our model is composed of three sub-networks and uses an SVM classifier to integrate their outputs into the final result. The model's expression recognition accuracy on the FER2013 dataset reaches 71.27%. The results show that the method has high test accuracy and short prediction time, and can realize real-time, high-performance facial expression recognition.
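The fusion step can be sketched as follows (hedged, not the authors' code): the softmax outputs of the three sub-networks are concatenated per image and an SVM is trained on that stacked representation to produce the final label. The random stand-in outputs and the RBF kernel are assumptions for illustration.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_samples, n_classes = 200, 7                 # FER2013 has 7 expression classes
    # Stand-ins for the three sub-networks' softmax outputs on the training set.
    p1, p2, p3 = (rng.dirichlet(np.ones(n_classes), n_samples) for _ in range(3))
    labels = rng.integers(0, n_classes, n_samples)

    stacked = np.hstack([p1, p2, p3])             # shape: (n_samples, 3 * n_classes)
    fusion_svm = SVC(kernel="rbf").fit(stacked, labels)
    print(fusion_svm.predict(stacked[:5]))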
This study extends previous advances in soft biometrics and examines the extent to which soft biometrics can be used for facial profile recognition. The purpose of this research is to explore human recognition based on facial profiles in a comparative setting grounded in soft biometrics. We also describe and use a ranking system to determine the recognition rate: the Elo rating system is employed to rank subjects by their face profiles in a comparative setting. The crucial features that provide useful information for describing facial profiles have been identified using relative (comparative) methods. Experiments on a subset of the XM2VTSDB database demonstrate a 96% recognition rate using 33 features over 50 subjects.
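The Elo update that drives such comparative ranking is shown in the minimal sketch below; the K-factor and the toy comparison outcomes are illustrative assumptions, not values from the study.

    def elo_update(r_a, r_b, score_a, k=32):
        # score_a is 1.0 if subject A wins the pairwise comparison, 0.0 otherwise.
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        r_a_new = r_a + k * (score_a - expected_a)
        r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
        return r_a_new, r_b_new

    ratings = {"subject_1": 1500.0, "subject_2": 1500.0, "subject_3": 1500.0}
    # Each tuple: (winner, loser) of one pairwise profile-similarity comparison.
    for winner, loser in [("subject_1", "subject_2"), ("subject_1", "subject_3"),
                          ("subject_2", "subject_3")]:
        ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], 1.0)

    print(sorted(ratings.items(), key=lambda kv: -kv[1]))   # ranked subjects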
DATE is a leading international event providing unique networking opportunities, bringing together designers and design automation users, researchers and vendors, as well as specialists in hardware and software design, test and manufacturing of electronic circuits and systems.
Existing cyber security solutions have largely been developed using knowledge-based models that often cannot detect new cyber-attack families. With the rise of Artificial Intelligence (AI), and especially Deep Learning (DL) algorithms, these security solutions have been augmented with AI models to discover, trace, mitigate or respond to incidents involving new security events. Such algorithms demand a large number of heterogeneous data sources to train and validate new security systems. This paper presents new datasets, called ToN_IoT, which comprise federated data sources collected from telemetry datasets of IoT services, operating system datasets of Windows and Linux, and datasets of network traffic. The paper introduces the testbed and the description of the ToN_IoT datasets for Windows operating systems. The testbed was implemented in three layers: edge, fog and cloud. The edge layer involves IoT and network devices, the fog layer contains virtual machines and gateways, and the cloud layer involves cloud services, such as data analytics, linked to the other two layers. These layers were dynamically managed using Software-Defined Networking (SDN) and Network Function Virtualization (NFV), via the VMware NSX and vCloud NFV platforms. The Windows datasets were collected from audit traces of memory, processors, networks, processes and hard disks. The datasets can be used to evaluate various AI-based cyber security solutions, including intrusion detection, threat intelligence and hunting, privacy preservation and digital forensics, because they include a wide range of recent normal and attack features and observations, as well as authentic ground truth events. The datasets can be publicly accessed from this link [1].
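As a hedged sketch of one intended use of such datasets, the snippet below trains a binary intrusion detector on a Windows telemetry CSV. The file name and the "label" column used as the attack/normal ground truth are assumptions for illustration, not taken from the dataset documentation.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("windows_telemetry.csv")           # hypothetical file name
    X = df.drop(columns=["label"]).select_dtypes("number")
    y = df["label"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))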
We leverage deep learning algorithms on various user behavioral information gathered from end-user devices to classify a subject of interest. Despite the ability of these techniques to counter spoofing threats, they are vulnerable to adversarial learning attacks, in which an attacker adds adversarial noise to input samples to fool the classifier into false acceptance. Recently, a handful of mature techniques such as the Fast Gradient Sign Method (FGSM) have been proposed to aid white-box attacks, where the attacker has complete knowledge of the machine learning model. By contrast, we mount a black-box attack against a behavioral biometric system based on gait patterns, using FGSM and training a shadow model that mimics the target system. The attacker has limited knowledge of the target model and no knowledge of the real user being authenticated, yet induces a false acceptance during authentication. Our goal is to understand the feasibility of a black-box attack and to what extent applying FGSM to shadow models contributes to its success. Our results show that the performance of FGSM depends heavily on the quality of the shadow model, which is in turn affected by key factors including the number of queries the target system allows for training the shadow model. Our experiments reveal strong relationships between shadow model quality and FGSM performance, as well as the effect of the number of FGSM iterations used to create an attack instance. These insights also shed light on how the shareability of deep learning models can be exploited to launch a successful attack.
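A minimal FGSM sketch against a shadow model follows (hedged; the shadow network, feature dimension and epsilon are illustrative, not the paper's setup): the gradient of the loss with respect to the input is computed on the shadow model, and its sign scaled by epsilon perturbs the gait sample that is then submitted to the black-box target.

    import torch
    import torch.nn as nn

    # Stand-in shadow model trained to mimic the target gait authenticator.
    shadow = nn.Sequential(nn.Linear(60, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(sample, true_label, epsilon=0.05):
        x = sample.clone().detach().requires_grad_(True)
        loss = loss_fn(shadow(x), true_label)
        loss.backward()
        # Step in the direction that increases the shadow model's loss.
        return (x + epsilon * x.grad.sign()).detach()

    x = torch.randn(1, 60)               # one gait feature vector
    y = torch.tensor([0])                # class assigned by the shadow model
    x_adv = fgsm(x, y)
    print((x_adv - x).abs().max())       # perturbation bounded by epsilon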
Ciphertext-Policy Attribute-Based Encryption (CP-ABE), a form of public-key encryption, has become a well-known approach to data access control for data security and confidentiality. It not only provides flexibility and scalability in access control mechanisms but also enhances security through fuzzy fine-grained access control. However, some schemes increase the key size to gain more security, which ultimately leads to long encryption and decryption times, and there is often no provision for handling man-in-the-middle attacks during data transfer. In this paper, a lightweight and more scalable encryption mechanism is provided which not only uses fewer resources for encoding and decoding but also improves security along with faster encryption and decryption. Moreover, the scheme provides an efficient key sharing mechanism for secure transfer, to avoid man-in-the-middle attacks. In addition, the inclusion of fuzzy policies allows approximate matching of user attributes, which makes the process fast and reliable and improves performance for legitimate users.
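To show the access-control idea behind CP-ABE (a toy sketch of the policy check only, with no cryptography and not this paper's scheme), a ciphertext carries a policy over attributes and only a key whose attribute set satisfies the policy can decrypt; the policy tree and attribute names below are made up for illustration.

    def satisfies(policy, attributes):
        op = policy[0]
        if op == "ATTR":
            return policy[1] in attributes
        if op == "AND":
            return all(satisfies(p, attributes) for p in policy[1:])
        if op == "OR":
            return any(satisfies(p, attributes) for p in policy[1:])
        raise ValueError("unknown policy node")

    # Policy attached to a ciphertext: (doctor AND cardiology) OR auditor
    policy = ("OR",
              ("AND", ("ATTR", "doctor"), ("ATTR", "cardiology")),
              ("ATTR", "auditor"))

    print(satisfies(policy, {"doctor", "cardiology"}))   # True: may decrypt
    print(satisfies(policy, {"doctor", "oncology"}))     # False: decryption fails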