Bibliography
Malware is one of the threats to information security that continues to grow. In 2014, nearly six million new malware samples were recorded. Trojan horses account for the largest share of malware, while adware shows the most significant increase. Signature-based security systems such as antivirus software, firewalls, and intrusion detection systems (IDS) are considered to fail at detecting malware. This is due to the very fast spread of computer malware and the ever-increasing number of signatures; moreover, signature-based systems have difficulty identifying the new methods, viruses, or worms used by attackers. One alternative for detecting malware is to use a honeypot combined with machine learning: the honeypot serves as a trap for suspicious packets, while machine learning detects malware through classification. Decision Tree and Support Vector Machine (SVM) are used as the classification algorithms. In this paper, we propose an architectural design as a solution for detecting malware, present the proposed architecture, and explain the experimental method to be used.
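As a rough illustration of the classification step named in this abstract, the following sketch trains the two mentioned algorithms on placeholder data; the features, dataset, and hyperparameters are hypothetical, not taken from the paper.

```python
# Minimal sketch (hypothetical features and data): classifying honeypot-captured
# samples as malicious or benign with the two algorithms named in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# X: feature vectors extracted from captured traffic/binaries (placeholder values),
# y: labels (1 = malware, 0 = benign).
rng = np.random.default_rng(0)
X = rng.random((200, 8))          # e.g. packet size stats, entropy, API-call counts
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Decision Tree", DecisionTreeClassifier(max_depth=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```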
In this paper, we consider one approach to studying the characteristics of an information system subjected to various influencing factors, and to managing them using neural networks and wavelet transforms, based on determining the relationship between the modified state of the information system and the possibility of dynamically analyzing the effects. The process of influencing the information system includes the following components: impact on the components that provide the functions of the information system; determination of the result of the exposure; analysis of the result of the exposure; and response to the result of the exposure. The characteristics of the influencing means are taken as the input signal. The system includes an adaptive response unit whose input receives signals about the prerequisites for changes and whose output generates signals for activating appropriate means to eliminate or compensate for these prerequisites, or for the changes in the information system directly.
Virtual reality (VR) has recently become a promising technology in both industry and academia due to its potential applications in immersive experiences such as websites, games, tourism, and museums. VR provides impressive 3-dimensional (3D) experiences but requires a very large number of elements such as images, texture, depth, and focal length. However, applying VR to various devices, especially mobile devices, demands an ultra-high transmission rate and extremely low latency, which is a major challenge. Considering this problem, this paper proposes a novel combination model that transforms the computing capability of the VR device into an equivalent caching amount while maintaining low latency and fast transmission. In addition, classic caching models are applied to the combined computing and caching capabilities, which extends easily to multi-user models.
This paper discusses two pieces of software designed for intelligence analysis: the brainstorming tool and the Scenario Planning Advisor. These tools were developed in the Cognitive Immersive Systems Lab (CISL) in conjunction with IBM. We discuss the immersive environment in which the tools are situated and their proposed benefits for intelligence analysis.
In traditional steganographic schemes, payloads are assigned equally to the three RGB channels of a true-color image. In fact, the security of color image steganography depends not only on the data-embedding algorithm but also on how the payload is partitioned. How to exploit inter-channel correlations to allocate the payload for better performance is still an open issue in color image steganography. In this paper, a novel channel-dependent payload partition strategy based on amplifying channel modification probabilities is proposed, so as to adaptively assign the embedding capacity among the RGB channels. The modification probabilities of the three corresponding pixels in the RGB channels are increased simultaneously, so that the embedding impacts are clustered, in order to improve the empirical steganographic security against channel co-occurrence detection. Experimental results show that color image steganographic schemes incorporating the proposed strategy effectively concentrate the embedding changes in textured regions and achieve better performance in resisting modern color image steganalysis.
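To make the idea of a channel-dependent payload partition concrete, here is an illustrative sketch that splits a total payload across the RGB channels in proportion to an estimated per-channel embedding suitability; this is a simplified stand-in, not the paper's actual strategy, and the numbers are made up.

```python
# Illustrative sketch only: split a total payload across R, G, B channels in
# proportion to a per-channel suitability score (e.g. a mean modification
# probability from a cost model). Not the paper's exact algorithm.
import numpy as np

def partition_payload(total_bits, channel_probs):
    """channel_probs: mean modification probability per channel (R, G, B)."""
    probs = np.asarray(channel_probs, dtype=float)
    weights = probs / probs.sum()            # channel-dependent weights
    return np.round(total_bits * weights).astype(int)

# Example: the channel with the largest modification probability gets the most bits.
print(partition_payload(10_000, channel_probs=[0.12, 0.20, 0.08]))
# -> [3000 5000 2000] bits
```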
Recently, Vehicular Cloud Computing (VCC) has become an attractive solution for supporting vehicles' computing and storage service requests. This computing paradigm ensures reduced energy consumption and low traffic congestion. Additionally, VCC has emerged as a promising technology that provides a virtual platform for processing data using vehicles as infrastructure or centralized data servers. However, vehicles are deployed in open environments where they are vulnerable to various types of attacks. Furthermore, traditional cryptographic algorithms fail to ensure security once their keys are compromised. To ensure a secure vehicular platform, this paper introduces decoy technology (DT) and user behavior profiling (UBP) as an alternative solution for addressing data security, privacy, and trust in vehicular cloud servers using a fog computing architecture. In the case of malicious behavior, our mechanism shows high efficiency by delivering decoy files in such a way that the intruder is unable to differentiate between the original and the decoy file.
To promote InGaP solar cell efficiency toward the theoretical limit, one promising approach is to incorporate multiple quantum wells (MQWs) into the InGaP host and improve its open-circuit voltage by facilitating radiative carrier recombination owing to carrier confinement. In this research, we demonstrate numerically that a strain-balanced (SB) In1-xGaxP/In1-yGayP MQW enhances the confined carrier density while degrading the effective carrier mobility. However, a smart design of the MQW structure is possible by quantitatively considering the trade-off between the carrier confinement effect and carrier transport, and the MQW can then be advantageous over bulk InGaP for boosting photovoltaic efficiency.
Mobile crowd sensing (MCS) is a rapidly developing technique for information collection from the users of mobile devices. This technique deals with participants' personal information such as their identities and locations, thus raising significant security and privacy concerns. Accordingly, anonymous authentication schemes have been widely considered for preserving participants' privacy in MCS. However, mobile devices are easy to lose and vulnerable to device capture attacks, which enables an attacker to extract the private authentication key of a mobile application and to further invade the user's privacy by linking sensed data with the user's identity. To address this issue, we have devised a special anonymous authentication scheme where the authentication request algorithm can be obfuscated into an unintelligible form and thus the authentication key is not explicitly used. This scheme not only achieves authenticity and unlinkability for participants, but also resists impersonation, replay, denial-of-service, man-in-the-middle, collusion, and insider attacks. The scheme's obfuscation algorithm is the first obfuscator for anonymous authentication, and it satisfies the average-case secure virtual black-box property. The scheme also supports batch verification of authentication requests for improving efficiency. Performance evaluations on a workstation and smart phones have indicated that our scheme works efficiently on various devices.
We present Ouroboros Crypsinous, the first formally analyzed privacy-preserving proof-of-stake blockchain protocol. To model its security, we give a thorough treatment of private ledgers in the (G)UC setting, which may be of independent interest. To prove our protocol secure against adaptive attacks, we introduce a new coin evolution technique relying on SNARKs and key-private forward-secure encryption. The latter primitive, and the associated construction, may also be of independent interest. We stress that existing approaches to private blockchains, such as the proof-of-work-based Zerocash, are analyzed only against static corruptions.
DNA cryptography has become a burgeoning area of study alongside the fast development of DNA computing and modern cryptography. Point-doubling, point-addition, and point-multiplication are the three fundamental point operations used to construct encryption protocols in cryptosystems over mathematical curves such as elliptic curves and conic curves. This paper proposes a DNA computing model to calculate point-doubling in a conic curve cryptosystem over the finite field GF(2^n). By decomposing and rearranging the computing steps of point-doubling, the assembly process can be carried out using 8 different types of computation tiles performing different functions, with 1097 encoding ways. This model can also compute point-multiplication when its coefficient is 2^k. The assembly time complexity is 2kn + n - k - 1, and the space complexity is k^2n^2 + kn^2 - k^2n.
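For readability, the claim about coefficients of the form 2^k and the stated complexities can be written out as follows; the repeated-doubling identity is a standard observation, while the complexity expressions restate the abstract.

```latex
% Point-multiplication by 2^k reduces to k successive doublings, with the
% complexities stated in the abstract (n is the field size of GF(2^n)):
\[
  2^{k}P = \underbrace{2(2(\cdots 2}_{k\ \text{doublings}}(P)\cdots)), \qquad
  T_{\text{assembly}} = 2kn + n - k - 1, \qquad
  S_{\text{assembly}} = k^{2}n^{2} + kn^{2} - k^{2}n .
\]
```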
Deep Packet Inspection (DPI) is instrumental in investigating the presence of malicious activity in network traffic, and most existing DPI tools work on unencrypted payloads. As the Internet moves towards fully encrypted data transfer, there is a critical need for privacy-aware techniques to efficiently decrypt network payloads. Until recently, passive proxying using certain aspects of TLS 1.2 was used to perform decryption and further DPI analysis. With the introduction of the TLS 1.3 standard, which only supports protocols with Perfect Forward Secrecy (PFS), many such techniques will become ineffective. Several security solutions will be forced to adopt active proxying, which will become a big-data problem considering the velocity and veracity of the network traffic involved. We have developed an ABAC (Attribute Based Access Control) framework that efficiently supports existing DPI tools while respecting users' privacy requirements and organizational policies. It gives the user the ability to accept or decline an access decision based on their privileges. Our solution evaluates various observed and derived attributes of network connections against user access privileges using policies described with semantic technologies. In this paper, we describe our framework and demonstrate the efficacy of our technique with the help of use-case scenarios for identifying network connections that are candidates for Deep Packet Inspection. Since our technique selectively identifies connections based on policies, both the processing and memory load at the gateway will be reduced significantly.
Privacy preservation has recently become a predominant concern in big data analytics and cloud computing. Every organization collects personal data from users, actively or passively. Publishing this data for research and other analytics without removing Personally Identifiable Information (PII) leads to privacy breaches. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. In order to provide a trade-off between user privacy and data utility, a Mondrian-based k-anonymity approach is proposed. To protect the privacy of high-dimensional data, a Deep Neural Network (DNN) based framework is proposed. The experimental results show that the proposed approach mitigates the information loss of the data without compromising privacy.
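For orientation, a minimal sketch of Mondrian-style multidimensional partitioning is shown below: recursively median-split the quasi-identifier with the widest range and stop whenever a split would leave fewer than k records in a partition. This is the generic textbook idea, assuming numeric quasi-identifiers, not the paper's full pipeline (which also involves the DNN-based component).

```python
# Minimal Mondrian-style sketch: recursively median-split the quasi-identifier
# with the widest range, stopping whenever a split would violate k-anonymity.
import numpy as np

def mondrian(records, k):
    """records: (n, d) array of numeric quasi-identifiers. Returns partitions."""
    if len(records) < 2 * k:                 # cannot split without violating k
        return [records]
    spans = records.max(axis=0) - records.min(axis=0)
    dim = int(np.argmax(spans))              # widest-range attribute
    median = np.median(records[:, dim])
    left = records[records[:, dim] <= median]
    right = records[records[:, dim] > median]
    if len(left) < k or len(right) < k:      # unsplittable under k-anonymity
        return [records]
    return mondrian(left, k) + mondrian(right, k)

data = np.random.default_rng(1).integers(0, 100, size=(50, 3))
partitions = mondrian(data, k=5)
print([len(p) for p in partitions])          # every partition has >= 5 records
```

Each resulting partition is then generalized (e.g., each attribute replaced by its range within the partition) so that every record is indistinguishable from at least k-1 others.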
In this paper, we develop a statistical framework for image steganography in which the cover and stego messages are modeled as multivariate Gaussian random variables. By minimizing the detection error of an optimal detector within the adopted generalized statistical model, we propose a novel Gaussian embedding method. Furthermore, we extend the formulation to cost-based steganography, resulting in a universal embedding scheme that works with embedding costs as well as variance estimators. Experimental results show that the proposed approach avoids embedding in smooth regions and significantly improves the security of state-of-the-art methods such as HILL, MiPOD, and S-UNIWARD.
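As a point of reference, such Gaussian-model formulations typically take the following shape (this follows the MiPOD-style setup and is illustrative rather than the paper's exact derivation):

```latex
% With pixel residuals modeled as N(0, \sigma_n^2) and ternary change rates
% \beta_n, the detectability of the optimal detector is minimized subject to
% a fixed payload of \alpha bits per pixel over N pixels:
\[
  \min_{\beta} \; \sum_{n} \frac{2\beta_n^{2}}{\sigma_n^{4}}
  \quad \text{s.t.} \quad
  \sum_{n} H_3(\beta_n) = \alpha N,
\]
% where H_3(\beta) = -2\beta\log_2\beta - (1-2\beta)\log_2(1-2\beta) is the
% ternary entropy. Large-variance (textured) pixels receive larger change
% rates, which is why embedding concentrates away from smooth regions.
```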
The effects of quantum confinement on the charge distribution in planar Double-Gate (DG) SOI (Silicon-on-Insulator) MOSFETs were examined for sub-10 nm SOI film thicknesses (t_si ≤ 10 nm) by modeling the potential experienced by the charge carriers as an anharmonic oscillator potential, consistent with the inherent structural symmetry of nanoscale symmetric DG SOI MOSFETs. The 1-D Poisson equation was solved using this potential, and the results obtained were validated through comparisons with TCAD simulations. The present model satisfactorily predicted the electron density and channel charge density for a wide range of SOI channel thicknesses and gate voltages.
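For context, the general form of the quantities involved is sketched below; the potential coefficients are illustrative placeholders and the equation is the standard 1-D Poisson equation, not the paper's specific parameterization.

```latex
% Symmetric anharmonic confining potential (a, b are placeholders) and the
% 1-D Poisson equation for the electrostatic potential \psi(x) across the film:
\[
  V(x) = a\,x^{2} + b\,x^{4}, \qquad
  \frac{d^{2}\psi(x)}{dx^{2}} = -\frac{q}{\varepsilon_{\mathrm{Si}}}
  \bigl(p(x) - n(x) + N_D^{+} - N_A^{-}\bigr),
\]
% with the electron density n(x) obtained from the confined states of V(x)
% and solved self-consistently with \psi(x).
```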
Untethered microrobots actuated by external magnetic fields have recently drawn extensive attention due to their potential advantages for real-time tracking and targeted delivery in vivo. Controlling a swarm of microrobots with external fields, however, remains one of the major challenges in this field. In this work, we present new methods to generate ribbon-like and vortex-like microrobotic swarms using oscillating and rotating magnetic fields, respectively. Paramagnetic nanoparticles with a diameter of 400 nm serve as the agents. These two types of swarms exhibit out-of-equilibrium structures in which the nanoparticles perform synchronised motions. By tuning the magnetic fields, the swarming patterns can be reversibly transformed. Moreover, by increasing the pitch angle of the applied fields, the swarms are capable of performing navigated locomotion with a controlled velocity. This work contributes to a better understanding of microrobotic swarm behaviours and paves the way for potential biomedical applications.
The relative permittivity (also known as dielectric constant) is one of the physical properties that characterize a substance. The measurement of its magnitude can be useful in the analysis of several fluids, playing an important role in many industrial processes. This paper presents a method for measuring the relative permittivity of fluids, with the possibility of real-time monitoring. The method comprises the immersion of a capacitive sensor inside a tank or duct, in order to have the inspected substance as its dielectric. An electronic circuit is responsible for exciting this sensor, which will have its capacitance measured through a quick analysis of two analog signals outputted by the circuit. The developed capacitance meter presents a novel topology derived from the well-known Howland current source. One of its main advantages is the capacitance-selective behavior, which allows the system to overcome the effects of parasitic resistive and inductive elements on its readings. In addition to an adjustable current output that suits different impedance magnitudes, it exhibits a steady oscillating behavior, thus allowing continuous operation without any form of external control. This paper presents experimental results obtained from the proposed system and compares them to measurements made with proven and calibrated equipment. Two initial capacitance measurements performed with the system for evaluating the sensor's characteristics exhibited relative errors of approximately 0.07% and 0.53% in comparison to an accurate workbench LCR meter.
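The underlying relation that a current-source-based capacitance measurement relies on can be written as follows; this is the general principle for a sinusoidally excited capacitive load, not the paper's specific circuit analysis of the Howland-derived topology.

```latex
% For an excitation current of amplitude I_0 at frequency f (\omega = 2\pi f),
% the voltage amplitude V_0 across a purely capacitive load gives
\[
  V_0 = \frac{I_0}{\omega C}
  \quad\Longrightarrow\quad
  C = \frac{I_0}{2\pi f\, V_0},
\]
% so the capacitance, and hence the relative permittivity of the substance
% filling the sensor (C = \varepsilon_r C_0, with C_0 the empty-sensor
% capacitance), follows from the two measured analog signals.
```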
The difficulty of detecting, responding to, and tracing malicious behavior in the cloud has brought great challenges to law enforcement in combating cybercrime. This paper presents a malicious-behavior-oriented framework for detection, emergency response, traceability, and digital forensics in the cloud environment. A cloud-based malicious behavior detection mechanism based on SDN is constructed, which implements full-traffic flow detection technology and malicious virtual machine detection based on memory analysis. The emergency response and traceability module can clarify the types of malicious behavior and the impacts of the events, and locate the source of the event. The key nodes and paths of the infection topology or propagation path of the malicious behavior are located, and security measures are dispatched in a timely manner. The proposed IaaS-based forensics module implements virtualization-facility memory evidence extraction and analysis techniques, which solve the volatile data loss problems that often occur in traditional forensic methods.
Digitization has increased exposure and opened the door to more cyber threats and attacks. To handle this issue proactively, enterprise modeling needs to include threat management during the design phase that considers antagonists, attack vectors, and damage domains. Agile methods are commonly adopted to efficiently develop and manage software and systems. This paper proposes using an enterprise architecture repository to analyze not only shipped components but the overall architecture, in order to improve the traditional designs represented by legacy systems in the situated IT landscape. It shows how the hidden structure method (with Design Structure Matrices) can be used to evaluate the enterprise architecture and how it can contribute to agile development. Our case study uses an architecture description language called ArchiMate for architecture modeling and shows how to predict the ripple effect in a damage domain if an attacker's malicious components are operating within the network.
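To illustrate the ripple-effect idea behind the hidden structure method, the sketch below computes the transitive closure (visibility) of a small dependency DSM and counts the components reachable from a compromised one; the matrix, component names, and the interpretation as a "ripple" are illustrative, not taken from the case study.

```python
# Illustrative sketch: dsm[i][j] = 1 means component i depends on component j.
# The transitive closure ("visibility" matrix, as used in the hidden structure
# method) tells which components a compromised component can ripple into.
import numpy as np

def visibility(dsm):
    n = len(dsm)
    v = np.array(dsm, dtype=bool) | np.eye(n, dtype=bool)
    for k in range(n):                       # Floyd-Warshall-style closure
        v = v | (v[:, [k]] & v[[k], :])
    return v

dsm = [[0, 1, 0, 0],      # A depends on B
       [0, 0, 1, 0],      # B depends on C
       [0, 0, 0, 1],      # C depends on D
       [0, 0, 0, 0]]      # D depends on nothing
v = visibility(dsm)
compromised = 3           # index of component D
impacted = [i for i in range(len(dsm)) if v[i, compromised] and i != compromised]
print("components a compromise of D can ripple into:", impacted)   # [0, 1, 2]
```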
This paper presents an access control model that integrates risk assessment elements into the attribute-based model to organize the identification, authentication, and authorization rules. Access control is complex in integrated systems, which have different actors accessing different information at multiple levels. In addition, systems are composed of different components, many of them from different developers. This requires complete supply-chain trust to protect the many actors involved, their privacy, and the entire ecosystem. Incorporating the risk assessment element introduces additional variables, such as the current environment of the subjects and objects and the time of day, to help produce more efficient and effective decisions about granting access to specific objects. The risk-based attribute access control model was applied in a health platform, Project CityZen.
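The following sketch conveys the general shape of such a risk-aware ABAC decision: a conventional attribute check is combined with a contextual risk score built from environment and time-of-day variables. The attributes, weights, and threshold are invented for illustration and do not come from the CityZen platform.

```python
# Illustrative risk-adjusted ABAC decision: attributes are checked first, then
# contextual risk factors (environment, time of day, device trust) may deny
# access even when the attribute rule matches. All values are made up.
from datetime import datetime

def risk_score(context):
    score = 0.0
    if context["network"] != "hospital_lan":
        score += 0.4                          # access from an external environment
    hour = context["time"].hour
    if hour < 7 or hour > 20:
        score += 0.3                          # outside normal working hours
    if not context["device_trusted"]:
        score += 0.3
    return score

def decide(subject, resource, context, risk_threshold=0.5):
    attrs_ok = (subject["role"] in resource["allowed_roles"]
                and subject["clearance"] >= resource["sensitivity"])
    return attrs_ok and risk_score(context) < risk_threshold

subject = {"role": "physician", "clearance": 3}
resource = {"allowed_roles": {"physician", "nurse"}, "sensitivity": 2}
context = {"network": "hospital_lan", "time": datetime(2020, 5, 4, 10, 30),
           "device_trusted": True}
print(decide(subject, resource, context))     # True: attributes match, risk is low
```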
The widespread adoption of social networking and cloud computing has transformed today's Internet into a trove of personal information. As a consequence, data breaches are expected to increase in gravity and occurrence. To counteract unintended data disclosure, a great deal of effort has been dedicated to devising methods for uncovering privacy leaks. Existing solutions, however, have not addressed the time- and data-intensive nature of leak detection. The shift from hardware-specific implementations to software-based solutions is the core idea behind the concept of Network Function Virtualization (NFV). On the other hand, the Software Defined Networking (SDN) paradigm is characterized by the decoupling of the forwarding and control planes. In this paper, an SDN/NFV-enabled architecture is proposed for improving the efficiency of leak detection systems. Employing a previously developed identification strategy, Personally Identifiable Information detector (PIID) and load balancer VNFs are packaged and deployed in OpenStack through an NFV MANO. Meanwhile, SDN controllers allow the load balancer to dynamically redistribute traffic among the PIID instances. Tests are conducted on a physical testbed to evaluate the proposed architecture. Experimental results indicate that the proportion of forwarding and parsing in the total overhead is influenced by the traffic intensity. Furthermore, an NFV-enabled system with scalability features was found to outperform a non-virtualized implementation in terms of latency (85.1%), packet loss (98.3%), and throughput (8.41%).
In this era of ubiquitous computing and the evolution of the Internet of Things (IoT), the deployment and development of Radio Frequency Identification (RFID) is becoming more extensive. Proving the simultaneous presence of a group of RFID-tagged objects is a practical need in many application areas within the IoT domain. Security, privacy, and efficiency are central issues when designing such a grouping-proof protocol. This work is motivated by our serial-dependent and Sundaresan et al.'s grouping-proof protocols. In this paper, we propose a lightweight, improved offline protocol: the parallel-dependency grouping-proof protocol (PDGPP). The protocol focuses on security, privacy, and efficiency. PDGPP tackles the challenges of including robust privacy mechanisms and accommodates missing tags. It is scalable and complies with EPC C1G2.
Organizations are increasingly collecting ever larger amounts of data to build complex data analytics, machine learning, and AI models. Furthermore, the data needed for building such models may be unstructured (e.g., text, images, and video). Hence, such data may be stored in different data management systems, ranging from relational databases to newer NoSQL databases tailored for storing unstructured data. Furthermore, data scientists increasingly use programming languages such as Python and R to process data with many existing libraries. In some cases, the developed code will be automatically executed by the NoSQL system on the stored data. These developments indicate the need for a data security and privacy solution that can uniformly protect data stored in many different data management systems and enforce security policies even if sensitive data is processed by a complex program submitted by a data scientist. In this paper, we introduce our vision for building such a solution for protecting big data. Specifically, our proposed system allows organizations to 1) enforce policies that control access to sensitive data, 2) keep the necessary audit logs automatically for data governance and regulatory compliance, 3) sanitize and redact sensitive data on-the-fly based on data sensitivity and AI model needs, 4) detect potentially unauthorized or anomalous access to sensitive data, and 5) automatically create attribute-based access control policies based on data sensitivity and data type.
Enterprises are increasingly expanding their business through web applications, which has paved the way for building good relationships with their customers. The major threat faced by these enterprises is their inability to provide secure environments, as web applications are prone to severe vulnerabilities. As a result, many security standards and tools have evolved to handle these vulnerabilities. Although many vulnerability detection tools are available at present, they do not provide sufficient information about the attack. For the long-term functioning of an organization, data along with efficient analytics on the vulnerabilities is required to enhance its reliability. The proposed model therefore aims to use machine learning together with analytics to solve the problem at hand. The sequence of the attack is detected through its pattern using PAA, and the detected vulnerabilities are then classified using a machine learning technique such as SVM. Probabilistic results are provided in order to obtain numerical data sets that can be used to report on user and application behavior. A dynamic and reconfigurable PAA with an SVM classifier is a challenging approach for analyzing vulnerabilities and their impact in a heterogeneous web environment. This enhances the earlier processing by analyzing the origin and pattern of the attack more effectively. Hence, the proposed system is designed to detect attacks and works on mitigation and prevention as part of attack prediction.
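The abstract does not expand the acronym PAA; assuming it refers to Piecewise Aggregate Approximation (a common time-series reduction often paired with SVM classifiers), the pattern-extraction step might look like the sketch below. Both the assumption and the example signal are illustrative only.

```python
# Sketch under the assumption that PAA means Piecewise Aggregate Approximation:
# reduce a request/attack signal to w segment means, yielding a compact feature
# vector that could then be classified with an SVM as described in the abstract.
import numpy as np

def paa(series, w):
    """Reduce a 1-D series to w piecewise segment means."""
    series = np.asarray(series, dtype=float)
    segments = np.array_split(series, w)
    return np.array([seg.mean() for seg in segments])

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 6, 120)) + 0.1 * rng.standard_normal(120)
features = paa(signal, w=8)                  # 8-dimensional feature vector
print(features.round(3))
```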
The increasing digitalization taking place in various industrial domains is leading developers towards the design and implementation of ever more complex networked control systems (NCS) supported by Wireless Sensor Networks (WSN). This naturally raises new challenges for current WSN technology, namely regarding improved guarantees of technical aspects such as real-time communication together with safe and secure transmissions. Notably, concerning security aspects, several cryptographic protocols have been proposed. Since the design of these protocols is usually error-prone, security breaches can still be exposed and maliciously exploited unless the protocols are rigorously analyzed and verified. In this paper, we formally verify, using ProVerif, three cryptographic protocols used in WSN with respect to the security properties of secrecy and authenticity. The security analysis performed in this paper is more robust than those performed in related work. Our contributions include analyzing protocols that were modeled considering an unbounded number of participants and actions, and the use of a hierarchical system to classify the authenticity results. Our verification shows that the three analyzed protocols guarantee secrecy, but can only provide authenticity in specific scenarios.