Bibliography
Head-mounted augmented reality (AR) enables embodied in situ drawing in three dimensions (3D). We explore 3D drawing interactions based on uninstrumented, unencumbered (bare) hands that preserve the user's ability to freely navigate and interact with the physical environment. We derive three alternative interaction techniques supporting bare-handed drawing in AR from the literature and by analysing several envisaged use cases. The three interaction techniques are evaluated in a controlled user study examining three distinct drawing tasks: planar drawing, path description, and 3D object reconstruction. The results indicate that continuous freehand drawing supports faster line creation than the control-point-based alternatives, although with reduced accuracy. User preferences for the different techniques are mixed and vary considerably between the different tasks, highlighting the value of diverse and flexible interactions. The combined effectiveness of these three drawing techniques is illustrated in an example application of 3D AR drawing.
Our prototype app, Pocket Penjing, built using Unity3D, takes its name from the Chinese "Penjing." These tray plantings of miniature trees pre-date bonsai, often including miniature benches or figures to allude to people's relationship to the tree. App users choose a species, then create and name their tree. Swiping rotates a 3D globe showing flagged locations. Each flag represents a live online air quality monitoring station data stream that the app can scrape. Data is pulled in from the selected station and the AR window loads. The AR tree grows in real-time 3D. Its L-Systems form is determined by the selected live air quality data. We used this prototype as the basis of a two-part formative participatory design workshop with 63 participants.
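To make the growth mechanic concrete, here is a minimal bracketed L-system sketch in which a hypothetical AQI reading drives the rewriting depth; the rule and the AQI-to-depth mapping are illustrative assumptions, not the app's actual ones.

```python
# Minimal L-system sketch: a hypothetical air-quality index (AQI) reading
# controls how many rewriting iterations (and hence how full) the tree is.
# Rule and mapping are illustrative assumptions, not Pocket Penjing's own.
RULES = {"F": "F[+F]F[-F]F"}  # classic bracketed L-system branching rule

def expand(axiom: str, iterations: int) -> str:
    """Apply the rewriting rules `iterations` times to the axiom."""
    s = axiom
    for _ in range(iterations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def depth_from_aqi(aqi: float) -> int:
    """Assumed mapping: cleaner air (lower AQI) yields a fuller tree."""
    return max(1, 5 - int(aqi // 50))

print(expand("F", depth_from_aqi(42.0))[:60])  # turtle-graphics string
```

The resulting string is then typically fed to a turtle-graphics interpreter (`F` = draw forward, `+`/`-` = turn, `[`/`]` = push/pop state) to produce the branching geometry.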
In rapid continuous software development, time- and cost-effective prototyping techniques are beneficial because they enable software designers to quickly explore and evaluate different design concepts. For low-fidelity prototyping of augmented reality (AR) applications, software designers have so far been restricted to non-digital prototypes, which enable the visualization of initial design concepts but are laborious when it comes to capturing interactivity. The lack of empirical values and standards for designing user interactions in AR software creates a particular need for applying end-user feedback to software refinement. In this paper we present the concept of a tool for rapid digital prototyping of augmented reality applications, enabling software designers to create AR prototypes without requiring programming skills. The prototyping tool focuses on modeling multimodal interactions, especially interaction with physical objects, as well as on performing user-based studies to integrate valuable end-user feedback into the refinement of the software.
In traditional steganographic schemes, payloads are assigned equally across the three RGB channels of a true-color image. In fact, the security of color image steganography depends not only on the data-embedding algorithm but also on how the payload is partitioned among the channels. How to exploit inter-channel correlations to allocate payload for performance enhancement is still an open issue in color image steganography. In this paper, a novel channel-dependent payload partition strategy based on amplifying channel modification probabilities is proposed, so as to adaptively assign the embedding capacity among the RGB channels. The modification probabilities of the three corresponding pixels in the RGB channels are increased simultaneously, so that the embedding impacts are clustered, in order to improve the empirical steganographic security against channel co-occurrence detection. Experimental results show that color image steganographic schemes incorporating the proposed strategy effectively concentrate the embedding changes in textured regions and achieve better performance in resisting modern color image steganalysis.
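The abstract does not spell out the allocation formula, so the following is only a rough sketch of the general idea under stated assumptions: per-channel payload shares are derived from a texture measure, with gradient energy standing in for the paper's amplified modification probabilities.

```python
import numpy as np

def partition_payload(img: np.ndarray, total_bits: int) -> list[int]:
    """Split total_bits across R, G, B proportionally to channel texture.
    Gradient energy is a stand-in measure; the paper instead amplifies
    the per-pixel modification probabilities of the embedding scheme."""
    shares = []
    for c in range(3):                     # img: (H, W, 3) uint8 array
        ch = img[..., c].astype(float)
        d0, d1 = np.gradient(ch)           # crude texture: gradient energy
        shares.append(float(np.sqrt(d0**2 + d1**2).sum()))
    total = sum(shares)
    return [int(total_bits * s / total) for s in shares]
```

Allocating more bits to more textured channels follows the same intuition as content-adaptive embedding: changes in busy regions are harder for steganalysis to detect.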
Least Significant Bit (LSB) embedding, one of the most common steganography methods today, is popular because it is easy to use, but it has the weakness that the hidden message is too easy to decode, because LSB embeds the message evenly across all pixels of an image. This paper introduces a steganography method that combines LSB with a clustering method, Fuzzy C-Means (FCM), abbreviated LSB_FCM, and compares its stego results with the plain LSB method. Each image is divided into two clusters, the cluster with the largest capacity is chosen, and the cluster coordinates are saved as the key to the embedding locations. This key serves as the reference when decoding the message, and each image has its own cluster capacity key. LSB_FCM has the disadvantage of a limited embedding capacity, but it is harder to decrypt than plain LSB, because the message is embedded randomly among the pixels of the best cluster of an image, so an attacker must possess the cluster coordinate key of the image to decode it. Evaluation results show that the MSE and PSNR values of LSB_FCM are similar to those of pure LSB, meaning that LSB_FCM produces images as imperceptible as pure LSB while securing the embedding locations better.
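A minimal sketch of the described pipeline, with a tiny 1-D fuzzy C-means standing in for the paper's FCM step and a grayscale image assumed for simplicity:

```python
import numpy as np

def fcm_two_clusters(x: np.ndarray, m: float = 2.0, iters: int = 50) -> np.ndarray:
    """Tiny 1-D fuzzy C-means with two clusters; returns hard labels
    by maximum membership (stand-in for the paper's FCM step)."""
    rng = np.random.default_rng(0)
    u = rng.random((2, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    e = 2.0 / (m - 1.0)
    for _ in range(iters):
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)    # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** e * (1.0 / d ** e).sum(axis=0))
    return u.argmax(axis=0)

def embed_lsb_fcm(img: np.ndarray, bits: list[int]):
    """Embed bits into the LSBs of the largest cluster's pixels; the
    chosen coordinates are the decoding key."""
    flat = img.reshape(-1).copy()
    labels = fcm_two_clusters(flat.astype(float))
    biggest = int(np.bincount(labels).argmax())
    key = np.flatnonzero(labels == biggest)[: len(bits)]   # coordinate key
    flat[key] = (flat[key] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(img.shape), key

img = np.random.default_rng(1).integers(0, 256, (32, 32), dtype=np.uint8)
stego, key = embed_lsb_fcm(img, [1, 0, 1, 1])
assert list(stego.reshape(-1)[key] & 1) == [1, 0, 1, 1]    # decode with key
```

The paper additionally randomises the embedding positions within the chosen cluster; taking the first len(bits) coordinates here merely keeps the sketch short.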
The security of image steganography is an important basis for evaluating steganography algorithms. Steganography has recently made great progress in the long-term confrontation with steganalysis. To improve the security of image steganography, steganography must have the ability to resist detection by steganalysis algorithms. Traditional embedding-based steganography embeds the secret information into the content of an image, which unavoidably leaves a trace of the modification that can be detected by increasingly advanced machine-learning-based steganalysis algorithms. The concept of steganography without embedding (SWE), which does not need to modify the data of the carrier image, emerged to evade detection by machine-learning-based steganalysis algorithms. In this paper, we propose a novel image SWE method based on deep convolutional generative adversarial networks. We map the secret information into a noise vector and use the trained generator neural network model to generate the carrier image based on the noise vector. No modification or embedding operations are required during the process of image generation, and the information contained in the image can be extracted successfully by another neural network, called the extractor, after training. The experimental results show that this method has the advantages of highly accurate information extraction and a strong ability to resist detection by state-of-the-art image steganalysis algorithms.
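The noise-vector mapping can be illustrated with a simple sign coding; in the paper a trained extractor network inverts the generator, so the coding below is only an assumed stand-in for that learned mapping.

```python
import numpy as np

def bits_to_noise(bits: np.ndarray, dim: int = 100) -> np.ndarray:
    """Encode secret bits in the signs of the first len(bits) latent
    coordinates; the rest stay random. G(z) then synthesises the stego
    image, so nothing is ever embedded into an existing cover."""
    assert bits.size <= dim
    z = np.abs(np.random.default_rng().standard_normal(dim))
    z[: bits.size] *= np.where(bits == 1, 1.0, -1.0)
    return z

def noise_to_bits(z: np.ndarray, n_bits: int) -> np.ndarray:
    """What the trained extractor must recover from the generated image."""
    return (z[:n_bits] > 0).astype(int)

msg = np.array([1, 0, 0, 1, 1])
assert (noise_to_bits(bits_to_noise(msg), msg.size) == msg).all()
```

The security argument is that the carrier is a fresh sample from the generator's distribution, so modification-based steganalysis has no embedding trace to detect; reliability then hinges on how accurately the extractor recovers z after generation.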
At the first Information Hiding Workshop in 1996 we tried to clarify the models and assumptions behind information hiding. We agreed the terminology of cover text and stego text against a background of the game proposed by our keynote speaker Gus Simmons: that Alice and Bob are in jail and wish to hatch an escape plan without the fact of their communication coming to the attention of the warden, Willie. Since then there have been significant strides in developing technical mechanisms for steganography and steganalysis, with new techniques from machine learning providing ever more powerful tools for the analyst, such as the ensemble classifier. There have also been a number of conceptual advances, such as the square root law and effective key length. But there always remains the question whether we are using the right security metrics for the application. In this talk I plan to take a step backwards and look at the systems context. When can stegosystems actually be used? The deployment history is patchy; one example is TrueCrypt's hidden volumes, inspired by the steganographic file system. Image forensics also finds some use, and may be helpful against some adversarial machine learning attacks (or at least help us understand them). But there are other contexts in which patterns of activity have to be hidden for that activity to be effective. I will discuss a number of examples, starting with deception mechanisms such as honeypots, Tor bridges and pluggable transports, which merely have to evade detection for a while; then moving on to the more challenging task of designing deniability mechanisms, from leaking secrets to a newspaper through bitcoin mixes, which have to withstand forensic examination once the participants come under suspicion. We already know that, at the system level, anonymity is hard. However, the increasing quantity and richness of the data available to opponents may move a number of applications from the deception category to that of deniability. To pick up on our model of 20 years ago, Willie might not just put Alice and Bob in solitary confinement if he finds them communicating, but torture them or even execute them. Changing threat models have historically been one of the great disruptive forces in security engineering. This leads me to suspect that a useful research area may be the intersection of deception and forensics, and how information hiding systems can be designed in anticipation of richer and more complex threat models. The ever-more-aggressive censorship systems deployed in some parts of the world also raise the possibility of using information hiding techniques in censorship circumvention. As an example of recent practical work, I will discuss Covertmark, a toolkit for testing pluggable transports that was partly inspired by Stirmark, a tool we presented at the second Information Hiding Workshop twenty years ago.
This paper presents an effective steganalytic scheme based on a CNN for detecting MP3 steganography in the entropy code domain. These steganographic methods hide secret messages in the compressed audio stream through Huffman code substitution, and usually achieve high capacity, good security, and low computational complexity. First, unlike most previous CNN-based steganalytic methods, the quantified modified DCT (QMDCT) coefficient matrix is selected as the input data of the proposed network. Second, a high-pass filter is used to extract the residual signal and suppress the content itself, so that the network is more sensitive to the subtle alterations introduced by the data hiding methods. Third, 1×1 convolutional kernels and batch normalization layers are applied to decrease the danger of overfitting and accelerate the convergence of back-propagation. In addition, the performance of the network is optimized by fine-tuning the architecture. The experiments demonstrate that the proposed CNN performs far better than traditional handcrafted features. In particular, the network performs well in detecting an adaptive MP3 steganography algorithm, the equal length entropy codes substitution (EECS) algorithm, which is hard to detect with conventional handcrafted features. The network can be applied to various bitrates and relative payloads seamlessly. Last but not least, a sliding window method is proposed to steganalyze audio files of arbitrary size.
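An illustrative PyTorch sketch of the described pipeline: a fixed high-pass filter on the QMDCT input followed by convolutional blocks with 1×1 kernels and batch normalization. The layer widths and the filter taps are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class QMDCTStegoNet(nn.Module):
    """Sketch: fixed high-pass residual filter, then conv blocks with
    1x1 kernels and batch norm, then a cover-vs-stego classifier."""
    def __init__(self):
        super().__init__()
        hpf = torch.tensor([[-1.0, 2.0, -1.0]]).view(1, 1, 1, 3)
        self.register_buffer("hpf", hpf)   # fixed, not trained
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.BatchNorm2d(8), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=1),      # 1x1 kernel
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, 2)

    def forward(self, x):  # x: (B, 1, H, W) QMDCT coefficient matrices
        x = nn.functional.conv2d(x, self.hpf, padding=(0, 1))
        x = self.features(x)
        return self.fc(x.flatten(1))

logits = QMDCTStegoNet()(torch.randn(4, 1, 200, 576))   # dummy batch
print(logits.shape)  # torch.Size([4, 2])
```

The sliding-window idea for arbitrary-length audio then amounts to running such a classifier over fixed-size QMDCT windows and fusing the per-window scores.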
Leading steganography systems make use of the Syndrome-Trellis Code (STC) algorithm to minimize a distortion function while encoding the desired payload, but this constrains the distortion function to be additive. The Gibbs Embedding algorithm works for a certain class of non-additive distortion functions, but has its own limitations and is highly complex. In this short paper we show that it is possible to modify the STC algorithm in a simple way, to minimize a non-additive distortion function suboptimally. We use it for two examples. First, applying it to the S-UNIWARD distortion function, we show that it does indeed reduce distortion, compared with minimizing the additive approximation currently used in image steganography, but that it makes the payload more – not less – detectable. This parallels research attempting to use Gibbs Embedding for the same task. Second, we apply it to distortion defined by the output of a specific detector, as a counter-move in the steganography game. However, unless the Warden is forced to move first (by fixing the detector) this is highly detectable.
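For reference, the additivity constraint at issue can be written as follows (notation mine, not the paper's):

```latex
% STC minimises an additive distortion: a sum of independent per-element costs
D_{\mathrm{add}}(\mathbf{x},\mathbf{y}) = \sum_{i=1}^{n} \rho_i(x_i, y_i)
% A non-additive function lets the cost of changing element i depend on the
% other changes made, e.g. on those in a neighbourhood \mathcal{N}(i):
D(\mathbf{x},\mathbf{y}) = \sum_{i=1}^{n} \rho_i\!\left(x_i, y_i;\, y_{\mathcal{N}(i)}\right)
```

S-UNIWARD's true distortion is of the latter kind, since pixel changes interact in the wavelet residual domain; the additive form commonly used in practice is an approximation of it.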
Of the three basic paradigms for implementing steganography, the concept of realising information hiding by modifying pre-existing cover objects (i.e. steganography by modification) by far dominates the scientific work in this field, while the other two paradigms (steganography by cover selection or by cover synthesis) are marginalised, although they inherently create stego objects that are closer to the statistical properties of unmodified covers and therefore would create better (i.e. harder to detect) stego channels. Here, we revisit the paradigm of steganography by synthesis and discuss its benefits and limitations using face morphing in images as an interesting example of a synthesis method. The reason to reject steganography by modification as no longer suitable lies in the current trend of steganography being used in modern malicious software (malware) families like StuxNet, Duqu or Duqu 2. As a consequence, we discuss the resulting shift in detection assumptions from cover-only to cover-stego attacks (or even further), which automatically renders even the most sophisticated steganography-by-modification methods useless. In this paper we use the example of face morphing to demonstrate the necessary conditions, 'undetectability' as well as 'plausibility and indeterminism', for characterising suitable synthesis methods. The widespread usage of face morphing, the content-dependent and complex nature of the required image manipulations, and the fact that morphs have been established to be very hard to detect, or to tell apart from other (assumedly innocent) image manipulations, assure that face morphing can successfully fulfil these necessary conditions. As a result it could be used as a core for driving steganography-by-synthesis schemes that are inherently resistant against cover-stego attacks.
The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment poses an ideal background for those who wish to communicate covertly by the use of steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time having to deal with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images which are shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than using high performance computing and machine learning. Using a double embedding attack on the well-known F5 steganographic algorithm, we achieve a false positive rate well below that of known attacks.
The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment poses an ideal background for those who wish to communicate covertly by the use of steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time having to deal with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images which are shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than using high performance computing and machine learning. To achieve this we first analyze the steganographic algorithm F5 applied to images with a high degree of diversity, as would be seen in a typical social network. We show that the biggest challenge lies in the detection of images whose payload is less than 50% of the available capacity of an image. We suggest new detection methods and apply them to the problem of channel detection in social networks. We show that, using our attacks, we can detect the majority of covert F5 channels after a mix containing 10 stego images has been classified by our scheme.
This work explores attack and attacker profiles using a VoIP-based honeypot. We implemented a low-interaction honeypot environment to identify the behaviors of attackers and the services most frequently targeted. We ran the honeypot for 180 days and collected 242,812 events related to the FTP, SIP, MSSQL, MySQL, SSH, and SMB protocols. The results provide an in-depth analysis of both attack and attacker profiles, their tactics, and their purposes. They also allow an understanding of how attackers interact with a vulnerable honeypot environment.
Emerging Machine Learning (ML) techniques, such as deep neural networks, are widely used in today's applications and services. However, with social awareness of privacy and personal data rapidly rising, it becomes a pressing and challenging societal issue to keep personal data private while still benefiting from the data analytics power of ML techniques. In this paper, we argue that to avoid the costs of cloud-based processing, reduce latency in data processing, and minimise the raw data revealed to service providers, many future AI and ML services could be deployed on users' devices at the Internet edge rather than putting everything on the cloud. Moving ML-based data analytics from cloud to edge devices brings a series of challenges, and we make three contributions in this paper. First, besides the widely discussed resource limitations on edge devices, we identify two other challenges not yet recognised in the existing literature: the lack of suitable models for users, and the difficulty of deploying services for users. Second, we present preliminary work on the first systematic solution, Zoo, to fully support the construction, composition, and deployment of ML models on edge and local devices. Third, in a deployment example, ML services prove easy to compose and deploy with Zoo. Evaluation shows its superior performance compared with state-of-the-art deep learning platforms and Google ML services.
Reuse of pre-existing industry datasets for research purposes requires a multi-stakeholder solution that balances the researcher's analysis objectives with the need to engage the industry data custodian, whilst respecting the privacy rights of human data subjects. Current methods place the burden on the data custodian, who may not be sufficiently trained to fully appreciate the nuances of data de-identification. Through modelling of functional, quality, and emotional goals, we propose a de-identification-in-the-cloud approach whereby the researcher proposes analyses along with the extraction and de-identification operations, while the industry data custodian retains secure control over authorising the proposed analyses. We demonstrate our approach through the implementation of a de-identification portal for sports club data.
We address the problem of substring searchable encryption. A single user produces a large stream of data and later wants to learn the positions in the string at which certain patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit substring searchable encryption in order to reduce the storage cost of the auxiliary data structures. Our solution entails a suffix-array-based index design, which achieves optimal storage cost O(n), with a small hidden factor, in the size n of the string. Moreover, we implemented our scheme and the state-of-the-art protocol of Chase et al. to demonstrate the performance advantage of our solution with precise benchmark results.
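A plaintext sketch of the suffix-array index the scheme builds on (the encrypted-search machinery is omitted); the key parameter of bisect requires Python 3.10+.

```python
import bisect

def suffix_array(s: str) -> list[int]:
    """Naive O(n^2 log n) construction, fine for a sketch; in the scheme
    the index is built once and stored encrypted at the server."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_positions(s: str, sa: list[int], pat: str) -> list[int]:
    """All positions where pat occurs, via binary search over suffixes."""
    lo = bisect.bisect_left(sa, pat, key=lambda i: s[i:i + len(pat)])
    hi = bisect.bisect_right(sa, pat, key=lambda i: s[i:i + len(pat)])
    return sorted(sa[lo:hi])

text = "abracadabra"
sa = suffix_array(text)
print(find_positions(text, sa, "abra"))  # [0, 7]
```

The O(n) storage claim presumably reflects that the array itself is just n integers, as opposed to heavier structures such as suffix trees with auxiliary dictionaries.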
The use of typing biometrics—the characteristic typing patterns of individual keyboard users—has been studied extensively in the context of enhancing multi-factor authentication services. The key starting point for such work has been the collection of high-fidelity local timing data, and the key (implicit) security assumption has been that such biometrics could not be obtained by other means. We show the latter assumption to be false: it is entirely feasible to obtain useful typing biometric signatures from third-party timing logs. Specifically, we show that the logs produced by real-time collaboration services during their normal operation are of sufficient fidelity to successfully impersonate a user using remote data only. Since the logs are routinely shared as a byproduct of the services' operation, this creates an entirely new avenue of attack that few users would be aware of. As a proof of concept, we construct successful biometric attacks using only the log-based structure (complete editing history) of a shared Google Docs or Zoho Writer document, which is readily available to all contributing parties. Using the largest available public data set of typing biometrics, we are able to create successful forgeries 100% of the time against a commercial biometric service. Our results suggest that typing biometrics are not robust against practical forgeries and should not be given the same weight as other authentication factors. Another important implication is that the routine collection of detailed timing logs by various online services inherently (and implicitly) captures biometrics. This not only raises obvious privacy concerns, but may also undermine the effectiveness of network anonymization solutions, such as Tor, when used with existing services.
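As an illustration of why edit logs suffice, the sketch below reconstructs digraph (key-pair) latencies, the standard feature of typing-biometric systems, from hypothetical per-character log records.

```python
from dataclasses import dataclass

@dataclass
class EditEvent:
    """Hypothetical log record: when a character was inserted."""
    t_ms: int
    char: str

def digraph_latencies(events: list[EditEvent]) -> dict[tuple[str, str], list[int]]:
    """Collect inter-keystroke delays per character pair; the resulting
    profile is what a forger replays against a biometric verifier."""
    profile: dict[tuple[str, str], list[int]] = {}
    for a, b in zip(events, events[1:]):
        profile.setdefault((a.char, b.char), []).append(b.t_ms - a.t_ms)
    return profile

log = [EditEvent(0, "t"), EditEvent(95, "h"), EditEvent(180, "e")]
print(digraph_latencies(log))  # {('t', 'h'): [95], ('h', 'e'): [85]}
```
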
Cloud-backed file systems provide on-demand, high-availability, scalable storage. Their security may be improved with techniques such as erasure codes and secret sharing, which fragment files and encryption keys across several clouds. Attacking the server side of such systems involves penetrating one or more clouds, which can be extremely difficult. Despite all these benefits, a weak point remains: the client side. Client devices store user credentials that, if stolen or compromised, may lead to confidentiality, integrity, and availability violations. In this paper we propose RockFS, a cloud-backed file system framework that aims to make the client side of such systems resilient to attacks. RockFS protects data in the client device and allows undoing unintended file modifications.
Due to the rapid development of the internet in our daily life, protecting privacy has become a focus of attention. To create privacy-preserving databases and prevent illegitimate users from accessing them, oblivious transfer with access control (OTAC) was proposed, a cryptographic primitive that extends oblivious transfer (OT). It allows a user to anonymously query a database in which each message is protected by an access control policy; a message can be obtained only if the user's attributes satisfy its policy. In this paper, we propose a new protocol for OTAC using elliptic curve cryptography, which is more efficient than the existing similar protocols. Our scheme also preserves the user's anonymity and ensures that the user's attributes are not disclosed to the sender. Additionally, our construction allows the user to verify the correctness of the messages recovered at the end of each transfer phase.
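For background, the core OT building block can be sketched as a Diffie–Hellman-style exchange; the toy group and the omission of access-control policies and anonymity make this an illustration only, not the paper's protocol.

```python
import hashlib, secrets

P = 2**127 - 1    # Mersenne prime; toy parameters, NOT a secure group size
G = 3

def H(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

# Sender publishes A; its secret a lets it derive both message keys.
a = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)

# Receiver encodes its choice bit c in B without revealing c.
c = 1
b = secrets.randbelow(P - 2) + 1
B = pow(G, b, P) if c == 0 else (A * pow(G, b, P)) % P

# Sender: two keys, used to encrypt message 0 and message 1 respectively.
k0 = H(pow(B, a, P))
k1 = H(pow((B * pow(A, -1, P)) % P, a, P))

# Receiver can compute only the key matching its choice bit.
kc = H(pow(A, b, P))
assert kc == (k1 if c else k0)
```

An OTAC protocol additionally ties each message to an access-control policy, so that, roughly speaking, deriving the transfer key alone is not sufficient without holding the right attributes.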
With the pervasiveness of the Internet of Things (IoT) and the rapid progress of wireless communications, Wireless Body Area Networks (WBANs) have attracted significant interest from the research community in recent years. As a promising networking paradigm, they are adopted to improve healthcare services and create a highly reliable ubiquitous healthcare system. However, the flourishing of WBANs still faces many challenges related to security and privacy preservation. In such a pervasive environment, where the context conditions change dynamically and frequently, context-aware solutions are needed to satisfy users' changing needs. Therefore, it is essential to design an adaptive access control scheme that can simultaneously authorize and authenticate users while considering dynamic context changes. In this paper, we propose a context-aware access control and anonymous authentication approach based on a secure and efficient Hybrid Certificateless Signcryption (H-CLSC) scheme. The proposed scheme combines the merits of Ciphertext-Policy Attribute-Based Signcryption (CP-ABSC) and Identity-Based Broadcast Signcryption (IBBSC) in order to satisfy the security requirements and provide adaptive contextual privacy. From a security perspective, it achieves confidentiality, integrity, anonymity, context-aware privacy, public verifiability, and ciphertext authenticity. Moreover, the key escrow and public key certificate problems are solved through this mechanism. Performance analysis demonstrates the efficiency and effectiveness of the proposed scheme compared to benchmark schemes in terms of functional security, storage, communication, and computational cost.
Connected cars have received massive attention in Intelligent Transportation Systems. Many potential services, especially safety-related ones, rely on spatial-temporal messages periodically broadcast by cars. Without a secure authentication algorithm, malicious cars may send out invalid spatial-temporal messages and then deny creating them. Meanwhile, a lot of private information may be disclosed from these spatial-temporal messages. Since cars move on expressways at high speed, any authentication must be performed in real time to prevent crashes. In this paper, we propose a Fast and Anonymous Spatial-Temporal Trust (FastTrust) mechanism to ensure these properties. In contrast to most authentication protocols, which rely on fixed infrastructures, FastTrust is distributed, is designed mostly around symmetric-key cryptography and an entropy-based commitment, and is able to authenticate spatial-temporal messages quickly. FastTrust also ensures the anonymity and unlinkability of spatial-temporal messages by developing a pseudonym-varying scheduling scheme on cars. We provide both analytical and simulation evaluations to show that FastTrust achieves the desired security and privacy properties, and that it is low-cost in terms of communication and computational resources, authenticating 20 times faster than the existing Elliptic Curve Digital Signature Algorithm.
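A minimal sketch of the symmetric-key authentication step, assuming a per-pseudonym shared key; the entropy-based commitment and the pseudonym-scheduling scheme that FastTrust layers on top are omitted.

```python
import hashlib, hmac, json, time

def sign_message(key: bytes, pseudonym: str, pos: tuple) -> dict:
    """Broadcast a spatial-temporal message authenticated with an HMAC
    under the key currently bound to the sender's pseudonym."""
    body = {"pseudonym": pseudonym, "pos": pos, "t": time.time()}
    payload = json.dumps(body, sort_keys=True).encode()
    body["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return body

def verify_message(key: bytes, msg: dict) -> bool:
    """Symmetric verification is a single hash computation, which is why
    it is far cheaper than an ECDSA signature check."""
    body = {k: v for k, v in msg.items() if k != "tag"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(msg["tag"], expected)

key = b"per-pseudonym shared secret (illustrative)"
msg = sign_message(key, pseudonym="veh-1a2b", pos=(48.1374, 11.5755))
assert verify_message(key, msg)
```
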
Inter-vehicle communications disclose rich information about vehicle whereabouts. Pseudonymous authentication secures communication while enhancing user privacy through a set of anonymized certificates, termed pseudonyms. Vehicles switch pseudonyms (and the corresponding private key) frequently; we term this the pseudonym transition process. However, exactly because vehicles can in principle change their pseudonyms asynchronously, an adversary that eavesdrops on (pseudonymously) signed messages could link pseudonyms based on the times of pseudonym transitions. In this poster, we show how one can link the pseudonyms of a given vehicle by simply looking at the timing information of pseudonym transition processes. We also propose a "mix-zone everywhere" approach: time-aligned pseudonyms are issued to all vehicles to facilitate synchronous pseudonym updates; as a result, all vehicles update their pseudonyms simultaneously, achieving higher user privacy protection.
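The linking observation can be stated in a few lines: if pseudonym updates are unsynchronised, an eavesdropper can pair the pseudonym that fell silent with the one that appeared right after it. A small sketch under assumed observation data:

```python
def link_pseudonyms(ended: dict[str, float], started: dict[str, float],
                    window_s: float = 1.0) -> list[tuple[str, str]]:
    """Pair each pseudonym whose messages stopped at time t with the
    unique pseudonym first seen within window_s after t; asynchronous
    transitions make such pairs unambiguous most of the time."""
    links = []
    for old, t_end in ended.items():
        candidates = [new for new, t0 in started.items()
                      if 0.0 <= t0 - t_end <= window_s]
        if len(candidates) == 1:          # unambiguous -> linked
            links.append((old, candidates[0]))
    return links

# One vehicle switching at ~t=100 is trivially linkable:
print(link_pseudonyms({"P17": 100.0}, {"P98": 100.3}))  # [('P17', 'P98')]
```

Time-aligned issuance defeats exactly this: when every vehicle transitions at the same instant, the candidate set is never a singleton.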
Smart meters provide fine-grained electricity consumption reporting to electricity providers. This intrudes on the privacy of consumers and has raised many privacy concerns. Although billing requires attributable consumption reporting, consumption reporting for operational monitoring and control measures can be non-attributable. However, the privacy-preserving AMS schemes in the literature tend to address these two categories disjointly, possibly due to their somewhat contradictory characteristics. In this paper, we propose an efficient two-party privacy-preserving cryptographic scheme that addresses operational control measures and billing jointly. It is computationally efficient because it is based on symmetric cryptographic primitives, and no online trusted third party (TTP) is required.
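A minimal sketch of the kind of symmetric building block such two-party schemes rely on: readings are masked with a keyed per-epoch PRF, so reporting stays cheap while the raw value is hidden from anyone without the key. This is an assumed illustration of the primitive class, not the paper's construction.

```python
import hashlib

MOD = 2**32

def prf_mask(key: bytes, epoch: int) -> int:
    """Pseudorandom per-epoch mask derived from a shared symmetric key."""
    d = hashlib.sha256(key + epoch.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:4], "big")

def report(reading_wh: int, key: bytes, epoch: int) -> int:
    """Meter side: send the masked consumption value."""
    return (reading_wh + prf_mask(key, epoch)) % MOD

def unmask(masked: int, key: bytes, epoch: int) -> int:
    """Key-holder side: recover the reading for authorised purposes."""
    return (masked - prf_mask(key, epoch)) % MOD

key = b"meter/provider shared key (illustrative)"
assert unmask(report(1342, key, epoch=7), key, epoch=7) == 1342
```
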