Bibliography
While the control of individuals over their personal data is increasingly seen as an essential component of their privacy, the word "control" is usually used in a very vague way, both by lawyers and by computer scientists. This lack of precision may lead to misunderstandings and makes it difficult to check compliance. To address this issue, we propose a formal framework based on capacities to specify the notion of control over personal data and to reason about control properties. We illustrate our framework with social network systems and show that it makes it possible to characterize the types of control over personal data that they provide to their users and to compare them in a rigorous way.
Software-Defined Networking (SDN) is a novel architecture created to address the issues of traditional, vertically integrated networks. To increase cost-effectiveness and enable logical control, SDN provides high programmability and a centralized view of the network through separation of network traffic delivery (the "data plane") from network configuration (the "control plane"). SDN controllers and related protocols are rapidly evolving to address the demands for scaling in complex enterprise networks. Because of the evolution of modern SDN technologies, production networks employing SDN are prone to several security vulnerabilities, and the rate at which SDN frameworks are evolving continues to outpace attempts to address their security issues. According to our study, existing defense mechanisms, particularly SDN-based firewalls, face new and SDN-specific challenges in successfully enforcing security policies in the underlying network. In this paper, we identify problems associated with SDN-based firewalls, such as ambiguous flow path calculations and poor scalability in large networks. We survey existing SDN-based firewall designs and their shortcomings in protecting a dynamically scaling network like a data center. We extend our study by evaluating one such SDN-specific security solution, FlowGuard, and identifying new attack vectors and vulnerabilities. We also present corresponding threat detection techniques and respective mitigation strategies.
The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment poses an ideal background for those who wish to communicate covertly by the use of steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time having to deal with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images which are shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than using high performance computing and machine learning. To achieve this we first analyze the steganographic algorithm F5 applied to images with a high degree of diversity, as would be seen in a typical social network. We show that the biggest challenge lies in the detection of images whose payload is less than 50% of the available capacity of an image. We suggest new detection methods and apply these to the problem of channel detection in social networks. We show that, using our attacks, we are able to detect the majority of covert F5 channels after a mix containing 10 stego images has been classified by our scheme.
This paper presents an effective steganalytic scheme based on a CNN for detecting MP3 steganography in the entropy code domain. These steganographic methods hide secret messages in the compressed audio stream through Huffman code substitution, and usually achieve high capacity, good security and low computational complexity. First, unlike most previous CNN-based steganalytic methods, the quantified modified DCT (QMDCT) coefficient matrix is selected as the input data of the proposed network. Second, a high-pass filter is used to extract the residual signal and suppress the content itself, so that the network is more sensitive to the subtle alterations introduced by the data hiding methods. Third, the 1×1 convolutional kernel and the batch normalization layer are applied to decrease the danger of overfitting and accelerate the convergence of back-propagation. In addition, the performance of the network is optimized via fine-tuning of the architecture. The experiments demonstrate that the proposed CNN performs far better than traditional handcrafted features. In particular, the network performs well in detecting an adaptive MP3 steganography algorithm, the equal length entropy codes substitution (EECS) algorithm, which is hard to detect with conventional handcrafted features. The network can be applied to various bitrates and relative payloads seamlessly. Last but not least, a sliding window method is proposed to steganalyze audio files of arbitrary size.
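To make the kind of architecture described above concrete, here is a minimal PyTorch sketch of a QMDCT-based steganalysis network combining a fixed high-pass filter, a 1×1 convolution and batch normalization. The layer widths, the [-1, 2, -1] high-pass kernel and the overall depth are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch of a QMDCT-input steganalysis CNN (PyTorch).
import torch
import torch.nn as nn

class MP3StegoCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Fixed high-pass filter to suppress content and expose the
        # residual introduced by Huffman-code substitution (assumed kernel).
        hp = torch.tensor([[-1.0, 2.0, -1.0]]).view(1, 1, 1, 3)
        self.highpass = nn.Conv2d(1, 1, kernel_size=(1, 3), padding=(0, 1), bias=False)
        self.highpass.weight = nn.Parameter(hp, requires_grad=False)
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=1),   # 1x1 kernel limits overfitting risk
            nn.BatchNorm2d(8),                # batch norm speeds convergence
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, qmdct):                 # qmdct: (N, 1, H, W) coefficient matrix
        x = self.highpass(qmdct)
        x = self.features(x).flatten(1)
        return self.classifier(x)             # cover vs. stego logits
```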
Traditional security controls, such as firewalls, anti-virus and IDS, are ill-equipped to help IT security and response teams keep pace with the rapid evolution of the cyber threat landscape. Cyber Threat Intelligence (CTI) can help remediate this problem by exploiting non-traditional information sources, such as hacker forums and "dark-web" social platforms. Security and response teams can use the collected intelligence to identify emerging threats. Unfortunately, manual extraction of CTI from non-traditional sources is a time-consuming, error-prone and resource-intensive process. We address these issues by using a hybrid Machine Learning model that automatically searches through hacker forum posts, identifies the posts that are most relevant to cyber security and then clusters the relevant posts into estimations of the topics that the hackers are discussing. The first (identification) stage uses Support Vector Machines and the second (clustering) stage uses Latent Dirichlet Allocation. We tested our model, using data from an actual hacker forum, to automatically extract information about various threats such as leaked credentials, malicious proxy servers, malware that evades AV detection, etc. The results demonstrate that our method is an effective means for quickly extracting relevant and actionable intelligence that can be integrated with traditional security controls to increase their effectiveness.
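As an illustration of the two-stage pipeline, the following scikit-learn sketch chains an SVM relevance filter with LDA topic clustering. The feature settings (TF-IDF, 5,000 terms), topic count and the labeled-data interface are assumptions made for the sake of a runnable example, not the paper's configuration.

```python
# Minimal sketch of the SVM -> LDA pipeline over forum posts (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.decomposition import LatentDirichletAllocation

def extract_topics(labeled_posts, labels, unlabeled_posts, n_topics=10):
    # Stage 1 (identification): SVM keeps only security-relevant posts.
    tfidf = TfidfVectorizer(max_features=5000, stop_words="english")
    svm = LinearSVC().fit(tfidf.fit_transform(labeled_posts), labels)
    mask = svm.predict(tfidf.transform(unlabeled_posts)) == 1
    relevant = [p for p, keep in zip(unlabeled_posts, mask) if keep]
    # Stage 2 (clustering): LDA estimates the topics hackers discuss.
    counts = CountVectorizer(max_features=5000, stop_words="english")
    doc_term = counts.fit_transform(relevant)
    lda = LatentDirichletAllocation(n_components=n_topics).fit(doc_term)
    vocab = counts.get_feature_names_out()
    # Return the top 8 words per topic as a human-readable summary.
    return [[vocab[i] for i in topic.argsort()[-8:]] for topic in lda.components_]
```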
An image compression-encryption algorithm based on 2D compressive sensing and hyper-chaos is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos-based measurement matrix construction is employed: the construction of the measurement matrix is controlled by the initial and control parameters of the chaotic system, which are used as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos based diffusion which results in the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
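A minimal numpy sketch of the key-controlled measurement-matrix idea follows, using a logistic map as a stand-in for the paper's hyper-chaotic system; the map, its parameters and the scaling are illustrative assumptions, and the diffusion stage is omitted.

```python
# Illustrative sketch: a chaos-derived measurement matrix for compressive
# sampling. The logistic map here is an assumed stand-in for the paper's
# hyper-chaotic system.
import numpy as np

def chaotic_measurement_matrix(m, n, x0=0.357, mu=3.99):
    # Iterate the logistic map; the initial value x0 and parameter mu act
    # as the secret key controlling the matrix construction.
    x, vals = x0, np.empty(m * n)
    for i in range(m * n):
        x = mu * x * (1 - x)
        vals[i] = x
    # Center and scale so the matrix behaves like a random sensing matrix.
    return (2 * vals.reshape(m, n) - 1) / np.sqrt(m)

def compressive_sample(image_block, ratio=0.25):
    n = image_block.size
    m = int(ratio * n)
    phi = chaotic_measurement_matrix(m, n)
    # Linear measurements; the paper then diffuses these with hyper-chaos.
    return phi @ image_block.flatten()
```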
Compressed sensing (CS) can recover a signal that is sparse in a certain representation from samples taken at a rate far below the Nyquist rate. However, limited by the accuracy of atom matching in traditional reconstruction algorithms, CS struggles to reconstruct the original signal at high resolution. Meanwhile, researchers have found that trained neural networks have a strong ability to solve such inverse problems. Thus, we propose a Super-Resolution Convolutional Neural Network (SRCNN) that consists of three convolutional layers. Every layer has a fixed number of kernels and its own specific function. The process first uses a classical compressed sensing algorithm to process the input image; afterwards, the output images are refined by the SRCNN. We achieve higher-resolution images by using the proposed SRCNN algorithm. The simulation results show that the proposed method improves PSNR values and visual quality.
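For reference, a three-layer SRCNN of the kind described can be written in a few lines of PyTorch. The 9-1-5 kernel sizes and channel counts follow the classic SRCNN paper; whether they match the authors' exact configuration is an assumption.

```python
# Sketch of a three-layer SRCNN-style refinement network (PyTorch).
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),            # nonlinear mapping
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, cs_reconstruction):
        # Input: the coarse image produced by the classical CS algorithm;
        # output: the refined, higher-resolution estimate.
        return self.net(cs_reconstruction)
```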
The Healthcare Internet of Things (HIoT) is transforming the healthcare industry by providing large-scale connectivity for medical devices, patients, physicians, and clinical and nursing staff, and facilitates real-time monitoring based on the information gathered from the connected things. The heterogeneity and vastness of this network provide both opportunities and challenges for information collection and sharing. Patient-centric information, such as health status and the medical devices patients use, must be protected to respect their safety and privacy, while healthcare knowledge should be shared in confidence by experts for healthcare innovation and timely treatment of patients. In this paper, an overview of HIoT is given, comparing its characteristics to those of Big Data, and a security and privacy architecture is proposed for it. A context-sensitive role-based access control scheme is discussed to ensure that HIoT is reliable, provides data privacy, and achieves regulatory compliance.
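A context-sensitive role-based access check can be sketched as roles whose permissions carry context predicates, as in the toy Python below; the roles, permissions and context fields are hypothetical examples for illustration, not the paper's scheme.

```python
# Toy sketch of context-sensitive RBAC: each role maps permissions to a
# predicate over the request context. All names here are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Context:
    location: str        # e.g. "ward-3"
    on_duty: bool
    emergency: bool

# role -> permission -> context predicate
POLICY: dict[str, dict[str, Callable[[Context], bool]]] = {
    "nurse": {
        "read_vitals": lambda c: c.on_duty,
        "read_history": lambda c: c.on_duty and c.location.startswith("ward"),
    },
    "physician": {
        "read_vitals": lambda c: True,
        "prescribe": lambda c: c.on_duty or c.emergency,
    },
}

def check_access(role: str, permission: str, ctx: Context) -> bool:
    rule = POLICY.get(role, {}).get(permission)
    return rule is not None and rule(ctx)

print(check_access("nurse", "read_vitals", Context("ward-3", True, False)))  # True
```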
Despite corporate cyber intrusions attracting all the attention, privacy breaches that we, as ordinary users, should be worried about occur every day without any scrutiny. Smartphones, a household item, have inadvertently become a major enabler of privacy breaches. Smartphone platforms use permission systems to regulate access to sensitive resources. These permission systems, however, lack the ability to understand users' privacy expectations, leaving a significant gap between how permission models behave and how users would want the platform to protect their sensitive data. This dissertation provides an in-depth analysis of how users make privacy decisions in the context of smartphones and how platforms can accommodate users' privacy requirements systematically. We first performed a 36-person field study to quantify how often applications access protected resources when users are not expecting it. We found that when the application requesting the permission is running invisibly to the user, users are more likely to deny it access to protected resources, and at least 80% of our participants would have preferred to prevent at least one permission request. To explore the feasibility of predicting users' privacy decisions based on their past decisions, we performed a longitudinal 131-person field study. Based on the data, we built a classifier to make privacy decisions on the user's behalf by detecting when the context has changed and inferring privacy preferences from the user's past decisions. We showed that our approach can accurately predict users' privacy decisions 96.8% of the time, which is an 80% reduction in error rate compared to current systems. Based on these findings, we developed a custom Android version with a contextually aware permission model, which guards resources based on the user's past decisions under similar contextual circumstances. We performed a 38-person field study to measure the efficiency and usability of the new permission model. Based on exit interviews and 5M data points, we found that the new system is effective in reducing potential violations by 75%. Despite being significantly more restrictive than the default permission system, participants did not find the new model to cause any usability issues in terms of application functionality.
At the first Information Hiding Workshop in 1996 we tried to clarify the models and assumptions behind information hiding. We agreed the terminology of cover text and stego text against a background of the game proposed by our keynote speaker Gus Simmons: that Alice and Bob are in jail and wish to hatch an escape plan without the fact of their communication coming to the attention of the warden, Willie. Since then there have been significant strides in developing technical mechanisms for steganography and steganalysis, with new techniques from machine learning providing ever more powerful tools for the analyst, such as the ensemble classifier. There have also been a number of conceptual advances, such as the square root law and effective key length. But there always remains the question whether we are using the right security metrics for the application. In this talk I plan to take a step backwards and look at the systems context. When can stegosystems actually be used? The deployment history is patchy, with one example being TrueCrypt's hidden volumes, inspired by the steganographic file system. Image forensics also finds some use, and may be helpful against some adversarial machine learning attacks (or at least help us understand them). But there are other contexts in which patterns of activity have to be hidden for that activity to be effective. I will discuss a number of examples starting with deception mechanisms such as honeypots, Tor bridges and pluggable transports, which merely have to evade detection for a while; then moving on to the more challenging task of designing deniability mechanisms, from leaking secrets to a newspaper through bitcoin mixes, which have to withstand forensic examination once the participants come under suspicion. We already know that, at the system level, anonymity is hard. However, the increasing quantity and richness of the data available to opponents may move a number of applications from the deception category to that of deniability. To pick up on our model of 20 years ago, Willie might not just put Alice and Bob in solitary confinement if he finds them communicating, but torture them or even execute them. Changing threat models are historically one of the great disruptive forces in security engineering. This leads me to suspect that a useful research area may be the intersection of deception and forensics, and how information hiding systems can be designed in anticipation of richer and more complex threat models. The ever-more-aggressive censorship systems deployed in some parts of the world also raise the possibility of using information hiding techniques in censorship circumvention. As an example of recent practical work, I will discuss CovertMark, a toolkit for testing pluggable transports that was partly inspired by StirMark, a tool we presented at the second Information Hiding Workshop twenty years ago.
Extended interaction oscillators (EIOs) are high-frequency vacuum-electronic sources, capable of generating millimeter-wave to terahertz (THz) radiation. They are considered to be potential sources of high-power submillimeter-wave radiation. Different slow-wave structures and beam geometries are used for EIOs. This paper presents a quantitative figure of merit, the critical unloaded oscillating frequency (fcr), for any specific EIO geometry. This figure is calculated and tested for 2π standing-wave modes (a common mode for EIOs) of two different slow-wave structures (SWSs): one double-ridge SWS driven by a sheet electron beam and one ring-loaded waveguide driven by a cylindrical beam. The calculated fcr values are compared with particle-in-cell (PIC) results, showing acceptable agreement. The derived fcr is calculated three to four orders of magnitude faster than with the PIC solver. The generality of the method, its clear physical interpretation and its computational rapidity make it a convenient approach for evaluating the high-frequency behavior of any specified EIO geometry, and allow geometry changes to be investigated in pursuit of higher frequencies in the THz spectrum.
Data privacy and security are leading concerns for providers and customers of cloud computing, where Virtual Machines (VMs) can co-reside within the same underlying physical machine. Side channel attacks within multi-tenant virtualized cloud environments are an established problem, where attackers are able to monitor and exfiltrate data from co-resident VMs. Virtualization services have attempted to mitigate such attacks by preventing VM-to-VM interference on shared hardware, providing logical resource isolation between co-located VMs via an internal virtual network. However, such approaches are also insecure, with attackers capable of performing network channel attacks which bypass these mitigation strategies using vectors such as ARP spoofing, TCP/IP steganography, and DNS poisoning. In this paper we identify a new vulnerability within the internal cloud virtual network, showing that through a combination of TAP impersonation and mirroring, a malicious VM can successfully redirect and monitor the network traffic of VMs co-located within the same physical machine. We demonstrate the feasibility of this attack in a prominent cloud platform, OpenStack, under various security requirements and system conditions, and propose countermeasures for mitigation.
Cyber threats and attacks have significantly increased in complexity and quantity throughout this past year. In this paper, the top fifteen cyber threats and trends are articulated in detail to raise awareness throughout the community. Specific attack vectors, mitigation techniques, kill chains and threat agents addressing Smart Digital Environments (SDE), including the Internet of Things (IoT), are discussed. Due to the rising number of IoT and embedded-firmware devices within ubiquitous computing environments such as smart homes, smart businesses and smart cities, the top fifteen cyber threats are being used in a comprehensive manner to take advantage of vulnerabilities and launch cyber operations using multiple attack vectors. What began as ubiquitous, or pervasive, computing has now matured into smart environments where the vulnerabilities and threats are widespread.
With the growth of mobile equipment and traffic, the Common Public Radio Interface (CPRI) between the Building Baseband Unit (BBU) and the Remote Radio Unit (RRU) must carry ever-increasing volumes of data. It is essential to compress the data on the CPRI link if more data is to be transferred without congestion while limiting fiber consumption. A data compression scheme based on the Discrete Sine Transform (DST) and Lloyd-Max quantization is proposed for a distributed Base Station (BS) architecture. The time-domain samples are transformed by the DST according to the characteristics of Orthogonal Frequency Division Multiplexing (OFDM) baseband signals, and the coefficients after transformation are then quantized by the Lloyd-Max quantizer. The simulation results show that the proposed scheme can work at various Compression Ratios (CRs) while the Error Vector Magnitude (EVM) values remain within the 3GPP limits.
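The following Python sketch shows the compression path in miniature: a DST of the baseband samples followed by a standard Lloyd-Max iteration for the quantizer. The block handling, level count and training loop are assumptions; the paper's quantizer design and codebook signaling may differ.

```python
# Illustrative sketch of DST + Lloyd-Max compression for baseband samples.
import numpy as np
from scipy.fft import dst, idst

def lloyd_max_levels(samples, n_levels=16, iters=50):
    # Standard Lloyd-Max iteration: partition samples by nearest level,
    # then re-center each level on the mean of its cell.
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:
                levels[k] = cell.mean()
    return np.sort(levels)

def compress(time_domain_block):
    coeffs = dst(time_domain_block, type=2, norm="ortho")
    levels = lloyd_max_levels(coeffs)
    idx = np.argmin(np.abs(coeffs[:, None] - levels[None, :]), axis=1)
    return idx, levels                      # transmit indices + codebook

def decompress(idx, levels):
    return idst(levels[idx], type=2, norm="ortho")
```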
User testing is often used to inform the development of user interfaces (UIs). But what if an interface needs to be developed for a system that does not yet exist? In that case, existing datasets can provide valuable input for UI development. We apply a data-driven approach to the development of a privacy-setting interface for Internet-of-Things (IoT) devices. Applying machine learning techniques to an existing dataset of users' sharing preferences in IoT scenarios, we develop a set of "smart" default profiles. Our resulting interface asks users to choose among these profiles, which capture their preferences with an accuracy of 82%—a 14% improvement over a naive default setting and a 12% improvement over a single smart default setting for all users.
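One plausible way to derive such profiles is to cluster the preference matrix and take a majority vote per cluster, as in the hedged scikit-learn sketch below; the profile count, the binary encoding and the assignment step are assumptions rather than the authors' exact procedure.

```python
# Sketch: deriving "smart" default profiles from an IoT preference dataset.
import numpy as np
from sklearn.cluster import KMeans

def build_profiles(pref_matrix, n_profiles=4):
    # pref_matrix: (n_users, n_scenarios), entries 1 = share, 0 = don't.
    km = KMeans(n_clusters=n_profiles, n_init=10).fit(pref_matrix)
    # Each profile is the majority vote of its cluster's members.
    return (km.cluster_centers_ > 0.5).astype(int)

def assign_profile(profiles, answers, asked_idx):
    # Match a new user to the profile that best fits a few of their answers.
    dists = [np.sum(p[asked_idx] != answers) for p in profiles]
    return int(np.argmin(dists))
```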
Smart meters migrate the conventional electricity grid into a digitally enabled Smart Grid (SG), which is more reliable and efficient. The fine-grained energy consumption data collected by smart meters helps utility providers accurately predict users' demands and significantly reduce power generation cost, but it imposes severe privacy risks on consumers and may discourage them from using these "espionage meters". To enjoy the benefits of smart-meter data without compromising users' privacy, in this paper we integrate distributed differential privacy (DDP) techniques into data-driven optimization, and propose a novel scheme that not only minimizes the cost for utility providers but also preserves the DDP of users' energy profiles. Briefly, we add differentially private noise to the users' energy consumption data before the smart meters send it to the utility provider. Due to the uncertainty of the users' demand distribution, the utility provider aggregates a given set of historical users' differentially private data, estimates the users' demands, and formulates the data-driven cost minimization based on the collected noisy data. We also develop algorithms for feasible solutions, and verify the effectiveness of the proposed scheme through simulations using simulated energy consumption data generated from the utility company's real data analysis.
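The meter-side perturbation can be illustrated with Laplace noise calibrated to a sensitivity and a privacy budget, as in the sketch below; the sensitivity estimate and the per-reading budget are simplifying assumptions, and the paper's distributed mechanism is more involved than this single-meter view.

```python
# Sketch: meter-side differentially private perturbation of readings.
import numpy as np

def privatize_readings(readings_kwh, epsilon=1.0, sensitivity=None):
    readings = np.asarray(readings_kwh, dtype=float)
    # Sensitivity: the largest change one interval's consumption can make;
    # approximated here by the maximum observed reading (an assumption).
    s = sensitivity if sensitivity is not None else readings.max()
    noise = np.random.laplace(loc=0.0, scale=s / epsilon, size=readings.shape)
    # The noisy vector is what the meter reports to the utility provider.
    return readings + noise
```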
While advances in cyber-security defensive mechanisms have substantially prevented malware from penetrating organizational Information Systems (IS) networks, organizational users have found themselves vulnerable to threats emanating from Advanced Persistent Threat (APT) vectors, mostly in the form of spear phishing. The question of how an organizational user can differentiate between a genuine communication and a similar-looking fraudulent communication in an email/APT threat vector therefore remains a dilemma. Identifying and evaluating the APT vector attributes and assigning relative weights to them can assist the user in making a correct decision when confronted with a scenario that may be genuine or a malicious APT vector. To this end, we propose an APT Decision Matrix model which can be used as a lens to build multiple APT threat vector scenarios and to identify the threat attributes, and their weights, which can lead to systems compromise.
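A weighted decision matrix of this kind reduces to summing the weights of the attributes observed in a scenario, as in the toy sketch below; the attribute names and weights are illustrative assumptions, not the model's published values.

```python
# Toy sketch of a weighted APT decision matrix. Attributes and weights
# are hypothetical placeholders.
ATTRIBUTE_WEIGHTS = {
    "sender_domain_mismatch": 0.30,
    "unexpected_attachment":  0.25,
    "link_target_mismatch":   0.20,
    "urgency_language":       0.15,
    "generic_greeting":       0.10,
}

def apt_risk_score(observed: dict[str, bool]) -> float:
    # Sum the weights of all attributes observed in the email/scenario;
    # a score near 1.0 suggests a likely malicious APT vector.
    return sum(w for attr, w in ATTRIBUTE_WEIGHTS.items() if observed.get(attr))

print(apt_risk_score({"sender_domain_mismatch": True, "urgency_language": True}))
# -> 0.45
```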
Formal verification of infinite-state systems, and of distributed systems in particular, is a long-standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking the validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to the usability of these tools. This problem is particularly acute in the case of provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
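As a toy illustration of the approach, the following sketch (assuming the z3-solver Python bindings) checks the inductive-step verification condition for the invariant x >= 0 of a loop that increments x; the example program and invariant are, of course, hypothetical.

```python
# Toy deductive verification: check a verification condition with an SMT
# solver by refuting its negation.
from z3 import Int, Implies, And, Not, Solver, unsat

x, x1 = Int("x"), Int("x1")
inv = x >= 0
# VC (inductive step) for "while x < 10: x = x + 1": the invariant plus the
# loop guard must imply the invariant on the updated value x1.
vc = Implies(And(inv, x < 10, x1 == x + 1), x1 >= 0)

s = Solver()
s.add(Not(vc))            # the VC is valid iff its negation is unsatisfiable
print("VC valid" if s.check() == unsat else "VC fails")
```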
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed which follow a similar Embedding&MLP paradigm: large-scale sparse input features are first mapped into low-dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together and fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector becomes a bottleneck, making it difficult for Embedding&MLP methods to effectively capture users' diverse interests from rich historical behaviors. In this paper, we propose a novel model, the Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques, mini-batch aware regularization and a data-adaptive activation function, which help train industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system at Alibaba, serving the main traffic.
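A simplified local activation unit can be sketched in PyTorch as an attention-like module that scores each historical behavior against the candidate ad before sum pooling, so the pooled user vector varies per ad; the dimensions and interaction features below are illustrative assumptions rather than DIN's exact published architecture.

```python
# Simplified sketch of a DIN-style local activation unit (PyTorch).
import torch
import torch.nn as nn

class LocalActivationUnit(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        # Scores each (behavior, candidate ad) pair via a small MLP over
        # the pair and an elementwise interaction term (assumed features).
        self.mlp = nn.Sequential(
            nn.Linear(dim * 3, 36), nn.PReLU(), nn.Linear(36, 1))

    def forward(self, behaviors, ad):
        # behaviors: (N, T, dim) embedded history; ad: (N, dim) candidate.
        ad_exp = ad.unsqueeze(1).expand_as(behaviors)
        pair = torch.cat([behaviors, ad_exp, behaviors * ad_exp], dim=-1)
        weights = self.mlp(pair)                      # (N, T, 1) relevance
        return (weights * behaviors).sum(dim=1)       # (N, dim) user vector
```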
A new kind of Square Lattice Photonic Crystal Fiber (SLPCF) is proposed, in which the first ring is formed by elliptical holes filled with ethanol. To regulate the dispersion and the confinement loss we put circular air holes with small diameters into the third ring of the cladding area. The diameter of the core is arranged as d2 = 2*A - d, where A is the pitch and d the diameter of the air holes. After simulations, we obtained a dispersion as low as 0.0494 ps/(km·nm) and a confinement loss as low as 2.6×10^-7 dB/m at a wavelength of 1.55 µm. At 0.8 µm we obtained a nonlinearity as high as 60.95 (km·W)^-1 and strong light guidance. Also, we compare the ethanol-filled elliptical holes with the air-filled elliptical holes of our proposed square lattice photonic crystal fiber. As the simulation method in this manuscript we use the two-dimensional FDTD method. The proposed fiber is suited to telecommunication transmission, because of its low dispersion and low loss in the C-band, and to nonlinear applications.
This paper presents a study on detecting cyber attacks on industrial control systems (ICS) using convolutional neural networks. The study was performed on the Secure Water Treatment (SWaT) testbed dataset, which represents a scaled-down version of a real-world industrial water treatment plant. We suggest a method for anomaly detection based on measuring the statistical deviation of the predicted value from the observed value. We applied the proposed method using a variety of deep neural network architectures, including different variants of convolutional and recurrent networks. The test dataset included 36 different cyber attacks. The proposed method successfully detected 31 attacks with three false positives, thus improving on previous research based on this dataset. The results of the study show that 1D convolutional networks can be successfully used for anomaly detection in industrial control systems and outperform recurrent networks in this setting. The findings also suggest that 1D convolutional networks are effective at time series prediction tasks, which are traditionally considered to be best solved using recurrent neural networks. This observation is promising, as 1D convolutional neural networks are simpler, smaller, and faster than recurrent neural networks.
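The detection idea can be sketched as a 1D convolutional predictor plus a residual threshold calibrated on attack-free data, as below; the window size, channel counts and k-sigma rule are assumptions, not the paper's tuned configuration.

```python
# Sketch: 1D-CNN prediction with statistical-deviation anomaly flagging.
import torch
import torch.nn as nn

class Conv1DPredictor(nn.Module):
    def __init__(self, n_sensors, window=60):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * window, n_sensors),  # predict the next reading
        )

    def forward(self, window_of_readings):      # (N, n_sensors, window)
        return self.net(window_of_readings)

def is_anomalous(pred, observed, mu, sigma, k=3.0):
    # mu, sigma: per-sensor residual statistics estimated on attack-free
    # data; flag when any residual exceeds mu + k standard deviations.
    return bool(((observed - pred).abs() > mu + k * sigma).any())
```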
Nowadays, cloud computing has brought unbelievable change to companies, organizations, firms, institutions, etc. The IT industry benefits from the growth of cloud computing through low investment in infrastructure and maintenance, and virtualization is regarded as the big thing in cloud computing. Although cloud computing has many benefits, the disadvantage of the cloud computing environment is ensuring security. Security means that the Cloud Service Provider must ensure basic integrity, availability, privacy, confidentiality, authentication and authorization in data storage, virtual machine security, etc. In this paper, we present a Local Outlier Factor mechanism, which may be helpful for the detection of Distributed Denial of Service (DDoS) attacks in a cloud computing environment. As a DDoS attack grows stronger with the passage of time, its impact may be reduced if it is detected early, so we focus fully on detecting DDoS attacks to secure the cloud environment. In addition, our scheme is able to identify their possible sources, giving important clues for cloud computing administrators to spot the outliers. Using WEKA (Waikato Environment for Knowledge Analysis), we have compared our scheme with other clustering algorithms on the basis of detection rate and false alarm rate. DR-LOF would serve as a better DDoS detection tool, helping to improve the security framework in cloud computing.
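Using scikit-learn rather than WEKA, the core of such an outlier-based detector can be sketched in a few lines; the per-flow features, neighbor count and contamination rate are illustrative assumptions.

```python
# Sketch: flagging DDoS-suspect traffic records with Local Outlier Factor.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def detect_ddos(flow_features):
    # flow_features: (n_flows, n_features) numeric matrix, e.g. packet
    # rate, byte counts, connection fan-in (assumed feature set).
    lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
    labels = lof.fit_predict(flow_features)     # -1 = outlier, 1 = inlier
    scores = -lof.negative_outlier_factor_      # larger = more anomalous
    suspect_idx = np.where(labels == -1)[0]
    # Rank suspects so administrators can inspect likely sources first.
    return suspect_idx[np.argsort(scores[suspect_idx])[::-1]]
```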
The number of Internet users is increasing day by day, and demand for web services and mobile or desktop web applications is increasing with it. The chances of a system being hacked are also increasing. All web applications maintain data in a backend database from which results are retrieved, and because web applications can be accessed from anywhere in the world, they must be available to all their users. SQL injection is nowadays one of the topmost threats to web application security, and by using it attackers can steal confidential information. In this paper, a SQL injection attack detection method based on removing the parameter values of the SQL query is discussed and results are presented.
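The value-removal idea can be sketched as follows: strip string and numeric literals from the runtime query and compare the remaining skeleton with that of the intended query, since injected SQL changes the structure. The regexes and example queries below are simplified illustrations, not the paper's exact method.

```python
# Sketch: SQL injection detection by removing parameter values and
# comparing query skeletons.
import re

def skeleton(query: str) -> str:
    q = re.sub(r"'(?:[^'\\]|\\.)*'", "?", query)   # quoted string literals
    q = re.sub(r"\b\d+(\.\d+)?\b", "?", q)         # numeric literals
    return re.sub(r"\s+", " ", q).strip().lower()

EXPECTED = skeleton("SELECT * FROM users WHERE name = 'x' AND pin = 1")

def is_injection(runtime_query: str) -> bool:
    # Benign inputs only change literal values, so the skeletons match;
    # injected SQL alters the query structure.
    return skeleton(runtime_query) != EXPECTED

print(is_injection("SELECT * FROM users WHERE name = 'bob' AND pin = 4"))
# -> False: same structure as the intended query
print(is_injection("SELECT * FROM users WHERE name = '' OR 1=1 --' AND pin = 1"))
# -> True: the injected OR 1=1 changes the skeleton
```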
The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment poses an ideal background for those who wish to communicate covertly by the use of steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time having to deal with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images which are shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than using high performance computing and machine learning. Using a double embedding attack on the well-known F5 steganographic algorithm, we achieve a false positive rate well below that of known attacks.
We present a dynamic DNA key-based cryptography scheme that encrypts and decrypts plain-text characters, text files, image files and audio files using DNA sequences. Cryptography has always been regarded as the secure way to transfer confidential information over networks such as LANs and the Internet. Over time, however, traditional cryptographic approaches are being replaced with more effective cryptographic systems such as quantum cryptography, biometric cryptography, geographical cryptography and DNA cryptography. This approach accepts DNA sequences as the input to generate a key that provides two stages of data security.
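As a toy illustration of DNA coding (not the paper's exact scheme), the sketch below maps bytes to nucleotides at 2 bits per base and XORs the plaintext with a key stream derived from a DNA sequence; the mapping table and key derivation are assumptions.

```python
# Toy DNA-coding sketch: 2 bits per base, XOR key stream from a DNA key.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def to_dna(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(seq: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def encrypt(plain: bytes, key_dna: str) -> str:
    # Derive a byte key stream from the DNA key and XOR it with the data.
    key = from_dna(key_dna)
    cipher = bytes(p ^ key[i % len(key)] for i, p in enumerate(plain))
    return to_dna(cipher)

print(encrypt(b"hi", "ACGTACGT"))   # -> 'CTATCTAG'
```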